For years, Big Tech has set the global AI research agenda. Now, groups like Black in AI and Queer in AI are upending the field’s power dynamics to build AI that serves people.
By Karen Hao for MIT Technology Review on June 14, 2021
A photo booth operated by Hamburg's Landesbetrieb Verkehr apparently does not recognise Black people. A Hamburg woman was therefore unable to apply for an international driving licence in December.
From NDR.de on July 25, 2020
Experience the world of face detection algorithms in this freaky test.
By Tijmen Schep for How Normal Am I
Twitter just made a change to the way it displays images that has visual artists on the social network celebrating.
By Taylor Hatmaker for TechCrunch on May 6, 2021
“Far from a ‘human-centred’ approach, the draft law in its current form runs the risk of enabling Orwellian surveillance states,” writes @sarahchander from @edri.
By Sarah Chander for Euronews on April 22, 2021
An already infamous example of racist technology is Amazon’s facial recognition system ‘Rekognition’, which had an enormous racial and gender bias. Researcher and founder of the Algorithmic Justice League Joy Buolamwini (the ‘poet of code’), together with Deborah Raji, meticulously reconstructed how accurate Rekognition was in identifying different types of faces. Buolamwini and Raji’s study has been extremely consequential in laying bare the racism and sexism in these facial recognition systems and was featured in the popular Coded Bias documentary.
Continue reading “Racist Technology in Action: Amazon’s racist facial ‘Rekognition’”
A student researcher has reverse-engineered the controversial exam software—and discovered a tool infamous for failing to recognize non-white faces.
By Todd Feathers for VICE on April 8, 2021
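A common thread in the Rekognition and Proctorio stories above is face detection that simply fails more often on darker-skinned faces. As a rough, hypothetical illustration (not the methodology of any study cited here), a disaggregated detection-rate check using OpenCV’s bundled Haar cascade might look like the sketch below; the image files and `labels.csv` are stand-ins, not data from any of these articles.

```python
# Sketch: compare face-detection rates across demographic groups.
# Assumes a labels.csv mapping filename -> group; both the file and the
# group labels are hypothetical stand-ins for illustration only.
import csv
from collections import defaultdict

import cv2  # pip install opencv-python

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def face_detected(path: str) -> bool:
    """Return True if the Haar cascade finds at least one face."""
    image = cv2.imread(path)
    if image is None:
        return False
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces) > 0

hits = defaultdict(int)
totals = defaultdict(int)
with open("labels.csv") as f:  # columns: filename,group (hypothetical)
    for row in csv.DictReader(f):
        totals[row["group"]] += 1
        hits[row["group"]] += face_detected(row["filename"])

# If the detector encoded no bias, these rates would be roughly equal.
for group in sorted(totals):
    rate = hits[group] / totals[group]
    print(f"{group}: detection rate {rate:.1%} ({totals[group]} images)")
```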
Facial recognition technology (FRT) has been widely studied and criticized for its racialising impacts and its role in the overpolicing of minoritised communities. However, a key aspect of facial recognition technologies is the dataset of faces used for training and testing. In this article, we situate FRT as an infrastructural assemblage and focus on the history of four facial recognition datasets: the original dataset created by W.W. Bledsoe and his team at the Panoramic Research Institute in 1963; the FERET dataset collected by the Army Research Laboratory in 1995; MEDS-I (2009) and MEDS-II (2011), the datasets containing dead arrestees, curated by the MITRE Corporation; and the Diversity in Faces dataset, created in 2019 by IBM. Through these four exemplary datasets, we suggest that the politics of race in facial recognition are about far more than simply representation, raising questions about the potential side-effects and limitations of efforts to simply ‘de-bias’ data.
By Nikki Stevens and Os Keyes for Taylor & Francis Online on March 26, 2021
As the European Commission prepares its legislative proposal on artificial intelligence, human rights groups are watching closely for clear rules to limit discriminatory AI. In practice, this means a ban on biometric mass surveillance practices and red lines (legal limits) to stop harmful uses of AI-powered technologies.
By Sarah Chander for European Digital Rights (EDRi) on March 16, 2021
The use of software to automatically detect cheating on online exams – online proctoring – has been the go-to solution for many schools and universities in response to the COVID-19 pandemic. In this article, Shea Swauger addresses some of the potential discriminatory, privacy and security harms that can impact groups of students across class, gender, race, and disability lines. Swauger critiques how these technologies encode “normal” bodies – cisgender, white, able-bodied, neurotypical, male – as the standard, and how students who do not (or cannot) conform are punished by them.
Continue reading “Online proctoring excludes and discriminates”
Upcoming rules on AI might make Europe’s race issues a tech problem too.
By Melissa Heikkilä for POLITICO on March 16, 2021
When a secretive start-up scraped the internet to build a facial-recognition tool, it tested a legal and ethical limit — and blew the future of privacy in America wide open.
By Kashmir Hill for The New York Times on March 18, 2021
A growing industry wants to scrutinize the algorithms that govern our lives—but it needs teeth.
By Alfred Ng for The Markup on February 23, 2021
Cheating is not a technological problem, but a social and pedagogical problem. Technology is often blamed for creating the conditions in which cheating proliferates and is then offered as the solution to the problem it created; both claims are false.
By Shea Swauger for Hybrid Pedagogy on April 2, 2020
The EU is preparing its ‘Action Plan’ to address structural racism in Europe. With digital high on the EU’s legislative agenda, it’s time we tackle racism perpetuated by technology, writes Sarah Chander.
By Sarah Chander for EURACTIV.com on September 3, 2020
Face recognition is a threat to not only our constitutional rights, but to people of color and other marginalized groups who are more likely to be misidentified — and bear the consequences.
By Kate Ruane for American Civil Liberties Union (ACLU) on February 17, 2021
What can universities learn from Antoine Griezmann, second striker at FC Barcelona? Pursue human rights in deed as well as in word, argue Joshua B. Cohen and Assamaual Saidi, and there is still a world to be won there.
By Assamaual Saidi and Joshua B. Cohen for Het Parool on February 11, 2021
Technology has never been colorblind. It’s time to abolish notions of “universal” users of software.
From The Internet Health Report 2020 on January 1, 2021
While many of us had our attention focused on the use of biometric surveillance technologies in managing the COVID-19 pandemic, in a new UN report Prof. E. Tendayi Achiume forcefully puts the spotlight on the racial and discriminatory dimension of biometric surveillance technology in border enforcement.
Continue reading “Racism and “Smart Borders””
A recent, yet already classic, example of racist technology is Twitter’s photo cropping machine learning algorithm. The algorithm was shown to consistently favour white faces in the cropped previews of pictures.
Continue reading “Racist technology in action: Cropping out the non-white”
A Google service that automatically labels images produced starkly different results depending on the skin tone shown in a given image. The company fixed the issue, but the problem is likely much broader.
By Nicolas Kayser-Bril for AlgorithmWatch on April 7, 2020
China using artificial intelligence to oppress the Uyghurs: does that sound far removed from your own life? The Netherlands, too, tracks (and persecutes) specific population groups with algorithms. As in Roermond, where cameras raise the alarm at cars with an Eastern European licence plate.
By Florentijn van Rootselaar for OneWorld on January 14, 2021
How many more Black men will be wrongfully arrested before this country puts a stop to the unregulated use of facial recognition software?
From Washington Post on December 31, 2020
Yup, still racist.
By Anthony Tordillos for Twitter on December 7, 2020
This episode is part of the GDC webinar series that took place in September 2020. How do digital technologies mediate racism? It is increasingly clear that digital technologies, including auto-complete functions, facial recognition, and profiling tools, are not neutral but racialized in specific ways. This webinar focuses on the different modes of programmed racism. We present historical and contemporary examples of racial bias in computational systems and learn about the potential of Civic AI. We discuss the need for a global perspective and postcolonial approaches to computation and discrimination. What research agenda is needed to address current problems and inequalities?
Chair: Lonneke van der Velden, University of Amsterdam.
Speakers: Sennay Ghebreab, Associate Professor of Informatics at the University of Amsterdam and Scientific Director of the Civic AI Lab, for civic-centered and community-minded design, development and deployment of AI; Linnet Taylor, Associate Professor at the Tilburg Institute for Law, Technology, and Society (TILT) and PI of the ERC-funded Global Data Justice Project; and Payal Arora, Professor and Chair in Technology, Values, and Global Media Cultures at the Erasmus School of Philosophy, Erasmus University Rotterdam, and author of ‘The Next Billion Users’ with Harvard University Press.
From Spotify on November 24, 2020
The people in this story may look familiar, like ones you’ve seen on Facebook or Twitter or Tinder. But they don’t exist. They were born from the mind of a computer, and the technology behind them is improving at a startling pace.
By Kashmir Hill for The New York Times on November 21, 2020
A conversation about how to break cages.
By Sarah T. Hamid for Logic on August 31, 2020
Special rapporteur on racism and xenophobia believes there is a misconception that biosurveillance technology is without bias.
By Katy Fallon for The Guardian on November 11, 2020
In this interview with Jair Schalkwijk and Naomi Appelman, we try to bring some transparency to the use of facial recognition technologies in law enforcement.
By Margarita Osipian for The Hmm on October 8, 2020
Just as the death of George Floyd led to worldwide protests, so did the biased image-processing technology PULSE in the scientific world. There were calls for a ban, but neuroinformatics researcher Sennay Ghebreab wonders whether a digital iconoclasm will solve the problem.
By Sennay Ghebreab for Vrij Nederland on October 5, 2020
Users highlight examples of feature automatically focusing on white faces over black ones.
By Alex Hern for The Guardian on September 21, 2020
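The test users ran here is easy to replicate in spirit: stack two portraits into one tall image, check which half the cropper centres on, then swap the order to rule out position bias. The sketch below is a hypothetical harness; Twitter’s actual cropping model is not public here, so OpenCV’s spectral-residual saliency stands in as a generic proxy, and the portrait filenames are placeholders.

```python
# Sketch of the "pairing test" run against saliency-based image cropping.
# Requires opencv-contrib-python for the cv2.saliency module.
import cv2
import numpy as np

saliency = cv2.saliency.StaticSaliencySpectralResidual_create()

def salient_centre_y(image: np.ndarray) -> float:
    """Vertical centre of mass of the saliency map (proxy for crop focus)."""
    ok, sal_map = saliency.computeSaliency(image)
    assert ok, "saliency computation failed"
    ys = np.arange(sal_map.shape[0], dtype=np.float64)
    row_weight = sal_map.sum(axis=1)
    return float((ys * row_weight).sum() / row_weight.sum())

def favours_top(top: np.ndarray, bottom: np.ndarray, gap: int = 600) -> bool:
    """Stack two portraits with white space between them; True if the
    saliency centre falls in the top half of the composite."""
    width = max(top.shape[1], bottom.shape[1])
    pad = lambda img: cv2.copyMakeBorder(
        img, 0, 0, 0, width - img.shape[1], cv2.BORDER_CONSTANT, value=(255, 255, 255)
    )
    spacer = np.full((gap, width, 3), 255, dtype=np.uint8)
    composite = np.vstack([pad(top), spacer, pad(bottom)])
    return salient_centre_y(composite) < composite.shape[0] / 2

# An unbiased cropper should favour each face roughly half the time over
# many pairs and both orderings, regardless of the skin tone pictured.
face_a = cv2.imread("face_a.jpg")  # hypothetical test portraits
face_b = cv2.imread("face_b.jpg")
print(favours_top(face_a, face_b), favours_top(face_b, face_a))
```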
NIST benchmarks suggest some facial recognition algorithms haven’t corrected historic bias — and are actually getting worse.
By Kyle Wiggers for VentureBeat on September 9, 2020
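Benchmarks like NIST’s, and audits like the Buolamwini–Raji Rekognition study above, rest on one simple idea: compute error rates per demographic group rather than one aggregate number, which can hide large gaps between groups. A minimal sketch of that disaggregation, over a hypothetical CSV of scored face-verification trials, might look like this:

```python
# Sketch: per-group false match rate, the kind of disaggregated metric
# reported in facial recognition audits. trials.csv and its columns
# (group, score, same_person) are hypothetical stand-ins.
import csv
from collections import defaultdict

THRESHOLD = 0.8  # hypothetical similarity threshold for declaring a "match"

false_matches = defaultdict(int)
impostor_trials = defaultdict(int)

with open("trials.csv") as f:
    for row in csv.DictReader(f):
        if row["same_person"] == "0":  # impostor pair: any match is an error
            impostor_trials[row["group"]] += 1
            if float(row["score"]) >= THRESHOLD:
                false_matches[row["group"]] += 1

for group in sorted(impostor_trials):
    fmr = false_matches[group] / impostor_trials[group]
    print(f"{group}: false match rate {fmr:.2%} "
          f"({impostor_trials[group]} impostor pairs)")
```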
Surveillance capitalism turns a profit by making people more comfortable with discrimination.
By Chris Gilliard for Real Life on October 15, 2018
On Wednesday, a Facebook employee in Nigeria shared footage of a minor inconvenience that he says speaks to tech’s larger diversity problem. In the video, a white man and a dark-skinned Black man both try to get soap from a soap dispenser. The soap dispenses for the white man, but not the darker-skinned man. After a bit of laughter, a person can be overheard chuckling, “too black!”
By Sidney Fussell for Gizmodo on August 17, 2017
The face tracking feature of the HP web cam will not recognize or track black faces.
From YouTube on December 10, 2009