So long as algorithms are trained on racist historical data and outdated values, there will be no opportunities for change.
By Chris Gilliard for WIRED on January 2, 2022
Hiring sociocultural workers to correct bias overlooks the limitations of these underappreciated fields.
By Elena Maris for WIRED on January 12, 2022
We must curb the power of Silicon Valley and protect those who speak up about the harms of AI.
By Timnit Gebru for The Guardian on December 6, 2021
Timnit Gebru is launching the Distributed Artificial Intelligence Research Institute (DAIR) to document AI’s harms to marginalized groups.
By Nitasha Tiku for Washington Post on December 2, 2021
When you or I seek out evidence to back up our existing beliefs and ignore the evidence that shows we’re wrong, it’s called “confirmation bias.” It’s a well-understood phenomenon that none of us are immune to, and thoughtful people put a lot of effort into countering it in themselves.
By Cory Doctorow for Pluralistic on December 2, 2021
Today, 30 November 2021, European Digital Rights (EDRi) and 114 civil society organisations launched a collective statement to call for an Artificial Intelligence Act (AIA) which foregrounds fundamental rights.
From European Digital Rights (EDRi) on November 30, 2021
AI should be seen as a new system technology, according to The Netherlands Scientific Council for Government Policy, meaning that its impact is large, that it affects the whole of society, and that it is hard to predict. In their new Mission AI report, the Council lists five challenges for successfully embedding system technologies in society, leading to ten recommendations for governments.
Continue reading “Dutch Scientific Council knows: AI is neither neutral nor always rational”
In mid-October 2021, the Allen Institute for AI launched Delphi, an AI in the form of a research prototype that is designed “to model people’s moral judgments on a variety of everyday situations.” In simple words: they made a machine that tries to do ethics.
Continue reading “Racist Technology in Action: an AI for ethical advice turns out to be super racist”
Despite Biden’s announced commitment to advancing racial justice, not a single appointee to the task force has focused experience on civil rights and liberties in the development and use of AI. That has to change. Artificial intelligence, invisible but pervasive, affects vast swaths of American society and will affect many more. Biden must ensure that racial equity is prioritized in AI development.
By ReNika Moore for Washington Post on August 9, 2021
Voyager, which pitches its tech to police, has suggested that indicators such as Instagram usernames showing Arab pride can signal an inclination towards extremism.
By Johana Bhuiyan and Sam Levin for The Guardian on November 17, 2021
How big data and machine learning encode discrimination and create agitated clusters of comforting rage.
By Wendy Hui Kyong Chun for The MIT Press on November 1, 2021
Researchers at the Allen Institute for AI created Ask Delphi to make ethical judgments — but it turned out to be awfully bigoted and racist instead.
By Tony Tran for Futurism on October 22, 2021
Health secretary signs up to hi-tech schemes countering health disparities and reflecting minority ethnic groups’ data.
By Andrew Gregory for The Guardian on October 20, 2021
Creating welcoming and safe spaces for racialised people in policymaking is essential for addressing AI harms. Since the beginning of my career as an AI policy researcher, I’ve witnessed many important instances where people of color were almost totally absent from AI policy conversations. I remember very well the discomfort I felt when I was stopped at the entrance of a launch event for a report on algorithmic bias. The person tasked with ushering people into the meeting room was convinced that I was not “in the right place”. After that completely avoidable policing situation, I was in the room, but the room didn’t seem right to me. Although the topic was algorithmic bias and discrimination, I couldn’t spot a single racialised person there: the people most likely to experience algorithmic harm.
By Raziye Buse Çetin for Who Writes The Rules on March 11, 2019
Policy makers are starting to understand that many systems running on AI exhibit some form of racial bias. So they are happy when computer scientists tell them that ‘debiasing’ is a solution for these problems: testing the system for racial and other forms of bias, and making adjustments until these no longer show up in the results.
Continue reading “Why ‘debiasing’ will not solve racist AI”
Behind the promise of automation and the advances in machine learning and AI often paraded by tech companies like Amazon, Google, Facebook and Tesla lies a deeply exploitative industry of cheap human labour. In an excerpt published on Rest of the World from his forthcoming book, “Work Without the Worker: Labour in the Age of Platform Capitalism,” Phil Jones illustrates how the hidden labour of automation is outsourced to marginalised, racialised and disenfranchised populations within the Global North, as well as in the Global South.
Continue reading “Big Tech is propped up by a globally exploited workforce”
The development and deployment of artificial intelligence (AI) in all areas of public life have raised many concerns about the harmful consequences for society, in particular the impact on marginalised communities. EDRi’s latest report, “Beyond Debiasing: Regulating AI and its Inequalities”, authored by Agathe Balayn and Dr. Seda Gürses, argues that policymakers must tackle the root causes of the power imbalances caused by the pervasive use of AI systems. In promoting technical ‘debiasing’ as the main solution to AI-driven structural inequality, we risk vastly underestimating the scale of the social, economic and political problems AI systems can inflict.
By Agathe Balayn and Seda Gürses for European Digital Rights (EDRi) on September 21, 2021
Brands originally built on racist stereotypes have existed for more than a century. Now racial prejudice is also creeping into the design of tech products and algorithms.
From YouTube on September 15, 2021
The Biden administration must prioritize and address all the ways that AI and technology can exacerbate racial and other inequities.
By Olga Akselrod for American Civil Liberties Union (ACLU) on July 13, 2021
Facebook called it “an unacceptable error.” The company has struggled with other issues related to race.
By Ryan Mac for The New York Times on September 3, 2021
I carry my surname with pride: Ibrahim is the first name of my great-grandfather, which my father filled in when he came to the Netherlands from Egypt in the 1970s. My father has sadly passed away; he was the warmest and most kind-hearted man you could imagine. But something strange is going on with my surname: when a computer learns a language from everyday texts on the internet, it turns out that the computer rates non-Western names such as Ibrahim as less ‘pleasant’ than Western surnames (Caliskan et al., 2017).
By Meldrid Ibrahim for Mister Motley on August 24, 2021
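The bias Ibrahim describes comes from word-embedding association tests of the kind reported in Caliskan et al. (2017). As a rough, illustrative sketch only (it is not from the article; the embedding file, word lists and surnames below are hypothetical placeholders), such an association score can be computed with gensim along these lines:

```python
# Illustrative sketch only: a WEAT-style association score in the spirit of
# Caliskan et al. (2017). The embedding file, word lists and surnames are
# hypothetical placeholders, not data from the article.
import numpy as np
from gensim.models import KeyedVectors


def cosine(a, b):
    # Cosine similarity between two word vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def association(vec, pleasant, unpleasant):
    # Mean similarity to "pleasant" words minus mean similarity to
    # "unpleasant" words; a lower score means the embedding places the
    # name closer to unpleasant concepts.
    return (np.mean([cosine(vec, p) for p in pleasant])
            - np.mean([cosine(vec, u) for u in unpleasant]))


# Hypothetical embeddings trained on everyday web text.
vectors = KeyedVectors.load_word2vec_format("web_text_vectors.bin", binary=True)

pleasant = [vectors[w] for w in ["love", "peace", "joy"] if w in vectors]
unpleasant = [vectors[w] for w in ["hatred", "pain", "failure"] if w in vectors]

for name in ["ibrahim", "jansen"]:  # example non-Western vs. Western surname
    if name in vectors:
        print(name, round(association(vectors[name], pleasant, unpleasant), 3))
```

A systematic gap between the scores for non-Western and Western surnames is the kind of embedded association the cited study measured.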
A preprint study shows ride-hailing services like Uber, Lyft, and Via charge higher prices in certain neighborhoods based on racial and other biases.
By Kyle Wiggers for VentureBeat on June 12, 2020
Vox host Joss Fong wanted to know… “Why do we think tech is neutral? How do algorithms become biased? And how can we fix these algorithms before they cause harm?”
Continue reading “Are we automating racism?”
Many of us assume that tech is neutral, and we have turned to tech as a way to root out racism, sexism, or other “isms” plaguing human decision-making. But as data-driven systems become a bigger and bigger part of our lives, we also notice more and more when they fail, and, more importantly, that they don’t fail on everyone equally. Glad You Asked host Joss Fong wants to know: Why do we think tech is neutral? How do algorithms become biased? And how can we fix these algorithms before they cause harm?
From YouTube on March 31, 2021
Working to break down the barriers keeping young Black people from careers in AI.
By Abby Ohlheiser for MIT Technology Review on June 30, 2021
She employs AI to get to the roots of health disparities across race, gender, and class.
By Neel V. Patel for MIT Technology Review on June 30, 2021
In many discussions and policy proposals related to addressing and fixing the harms of AI and algorithmic decision-making, much attention and hope have been placed on human oversight as a solution. This article by Ben Green and Amba Kak urges us to question the limits of human oversight, rather than seeing it as a magic bullet. For example, calling for ‘meaningful’ oversight sounds better in theory than it works in practice. Humans can be prone to automation bias, struggle to evaluate and make decisions based on an algorithm’s results, or exhibit racial biases in response to algorithms. Consequently, these effects can lead to racist outcomes, as has already been shown in areas such as policing and housing.
Continue reading “Human-in-the-loop is not the magic bullet to fix AI harms”
Humans are being tasked with overseeing algorithms that were put in place with the promise of augmenting human deficiencies.
By Amba Kak and Ben Green for Slate Magazine on June 15, 2021
For years, Big Tech has set the global AI research agenda. Now, groups like Black in AI and Queer in AI are upending the field’s power dynamics to build AI that serves people.
By Karen Hao for MIT Technology Review on June 14, 2021
In a recent interview with The Guardian, Kate Crawford discusses her new book, Atlas of AI, which delves into the broader landscape of how AI systems work by canvassing the structures of production and material realities. One example is ImageNet, a massive training dataset created by researchers from Stanford that is used to test how well object recognition algorithms perform. It was made by scraping photos and images across the web and hiring crowd workers to label them according to an outdated lexical database created in the 1980s.
Continue reading “AI and its hidden costs”
In 2016, ProPublica investigated the fairness of COMPAS, a system used by the courts in the United States to assess the likelihood of a defendant committing another crime. COMPAS uses a risk assessment form to estimate how likely a defendant is to offend again. Judges are expected to take this risk prediction into account when they decide on sentencing.
Continue reading “Racist Technology in Action: Predicting future criminals with a bias against Black people”
Successful and ethical artificial intelligence programs take into account behind-the-scenes ‘repair work’ and ‘ghost workers.’
By Sara Brown for MIT Sloan on May 4, 2021
The AI researcher on how natural resources and human labour drive machine learning and the regressive stereotypes that are baked into its algorithms.
By Kate Crawford for The Guardian on June 6, 2021
Women and people of colour are underrepresented and depicted with stereotypes.
From The Economist on June 5, 2021
Computer programs used to predict recidivism and determine prison terms have a high error rate, a secret design, and a demonstrable racial bias.
By Jed S. Rakoff for The New York Review of Books on June 10, 2021
Automated systems from Apple and Google label characters with dark skin as “Animals”.
By Nicolas Kayser-Bril for AlgorithmWatch on May 14, 2021
“Far from a ‘human-centred’ approach, the draft law in its current form runs the risk of enabling Orwellian surveillance states,” writes @sarahchander from @edri.
By Sarah Chander for Euronews on April 22, 2021
Advances in artificial intelligence (AI) technology promise to revolutionize our approach to medicine, finance, business operations, media, and more.
From Federal Trade Commission on April 19, 2021
This week the EU announces new regulations on artificial intelligence. It needs to set clear limits on the most harmful uses of AI, including predictive policing, biometric mass surveillance, and applications that exacerbate historic patterns of racist policing.
By Fieke Jansen and Sarah Chander for EUobserver on April 19, 2021
The company is considering how its use of machine learning may reinforce existing biases.
By Anna Kramer for Protocol on April 14, 2021