The State of Maine has enacted what the American Civil Liberties Union (ACLU) describes as the strongest state facial-recognition law in the US, amid growing concern over the unconstrained use of facial-recognition systems by the public and private sectors.
The Maine bill, LD 1585 [PDF], forbids state officials from using facial-recognition technology, or entering into agreements with third parties to do so, except under a relatively limited set of circumstances involving serious crimes and searches of vehicle registration data. It imposes the sort of broad limitations that civil liberties groups have been calling for. “Maine is showing the rest of the country what it looks like when we the people are in control of our civil rights and civil liberties, not tech companies that stand to profit from widespread government use of face surveillance technology,” said Michael Kebede, policy counsel at the ACLU of Maine, in a statement.
(Bloomberg) For more than three years, Google held up its Ethical AI research team as a shining example of a concerted effort to address thorny issues raised by its innovations. Created in 2017, the group assembled researchers from underrepresented communities and varied areas of expertise to examine the moral implications of futuristic technology and illuminate Silicon Valley's blind spots. It was led by a pair of star scientists.
Data strikes
Users install privacy tools or leave platforms altogether so that tech firms such as Google and Facebook are unable to track and store their data.
Data poisoning
Data poisoning involves manipulating the training dataset. For example, AdNauseam is a browser extension that clicks on every advertisement that pops up in your online feed, which confuses Google's ad-targeting algorithms.
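To make the tactic concrete, here is a minimal TypeScript sketch of the obfuscation idea AdNauseam popularised: find page elements that look like ads and quietly request their landing pages, so the tracker's profile fills with meaningless "clicks". The selectors, the background-fetch approach, and the load-time trigger are illustrative assumptions, not AdNauseam's actual implementation.

```typescript
// Illustrative sketch of ad-click obfuscation (in the spirit of AdNauseam).
// Selectors and behaviour are hypothetical; the real extension has its own
// ad-detection and click-simulation logic.

// CSS selectors that often mark ad containers (assumption, not exhaustive).
const AD_SELECTORS = [
  'iframe[id^="google_ads_"]',
  'a[href*="doubleclick.net"]',
  '[data-ad-slot]',
];

function collectAdTargets(root: Document): HTMLElement[] {
  // Gather every element on the page that looks like an ad.
  return AD_SELECTORS.flatMap((sel) =>
    Array.from(root.querySelectorAll<HTMLElement>(sel))
  );
}

function visitAdsQuietly(ads: HTMLElement[]): void {
  for (const ad of ads) {
    // Find the link associated with the ad, if any.
    const link = ad.closest('a') ?? ad.querySelector('a');
    if (!link?.href) continue;
    // Request the ad's landing page in the background instead of navigating,
    // so the ad network records engagement the user never actually had.
    fetch(link.href, { mode: 'no-cors', credentials: 'include' }).catch(() => {
      /* ignore network errors; the point is only to generate noise */
    });
  }
}

// Run once the page has finished loading, as a content script would.
window.addEventListener('load', () => visitAdsQuietly(collectAdTargets(document)));
```

The design choice worth noting is that every ad is "clicked", not a random subset: uniform noise is what degrades the value of the behavioural profile, since genuine interest can no longer be distinguished from fake engagement.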
Conscious data contribution
To register your protest against a platform, you can upload content to a competitor's platform instead. For example, rather than uploading your photos to Facebook, you can post them on Tumblr.
With collective effort, companies can be forced to change their data-collection practices. For example, when WhatsApp announced new terms of service that would allow Facebook and its subsidiaries to store user data, millions of users deleted their accounts and moved to competitors such as Signal and Telegram.
A few months later, she landed a job at Google.
There, her mandate was to help lead a new department devoted to studying the potentially harmful effects of artificial intelligence, a field of research known as ethical AI. Artificial intelligence has become increasingly vital to Google's business; the company has around 200 employees working on what it calls responsible AI for its own products.
CEO Sundar Pichai has repeatedly doubled down on AI's importance to Google, once calling it "more profound than fire or electricity." The ethical AI team, which currently has around a dozen employees and sits within Google's Research group, was created to study the longer-term risks of artificial intelligence that go beyond Google's own walls, and how to prevent them.
Africa: What Future for Ethical AI After Google Scientist Firing? (allafrica.com)