IBM is failing to increase diversity while successfully producing racist information technologies
Charlton McIlwain, author of the book Black Software, takes a good hard look at IBM in a long read for Logic magazine.
What Happens When Our Faces Are Tracked Everywhere We Go?
When a secretive start-up scraped the internet to build a facial-recognition tool, it tested a legal and ethical limit — and blew the future of privacy in America wide open.
By Kashmir Hill for The New York Times on March 18, 2021
Race, tech, and medicine: Remarks from Dr. Dorothy Roberts and Dr. Ruha Benjamin
By Dorothy Roberts, Kim M Reynolds and Ruha Benjamin for Our Data Bodies Project on August 15, 2020
How a Discriminatory Algorithm Wrongly Accused Thousands of Families of Fraud
Dutch tax authorities used algorithms to automate an austere and punitive war on low-level fraud—the results were catastrophic.
By Gabriel Geiger for VICE on March 1, 2021
The Fort Rodman Experiment
In 1965, IBM launched the most ambitious attempt ever to diversify a tech company. The industry still needs to learn the lessons of that failure.
By Charlton McIlwain for Logic on December 20, 2021
Technology has codified structural racism – will the EU tackle racist tech?
The EU is preparing its ‘Action Plan’ to address structural racism in Europe. With digital high on the EU’s legislative agenda, it’s time we tackle racism perpetuated by technology, writes Sarah Chander.
By Sarah Chander for EURACTIV.com on September 3, 2020
The Dutch government’s love affair with ethnic profiling
In his article for One World, Florentijn van Rootselaar shows how the Dutch government uses automated systems to profile certain groups based on their ethnicity. He uses several examples to expose how, even though Western countries are often quick to denounce China’s use of technology to surveil, profile and oppress the Uighurs, the same states themselves use or contribute to the development of similar technologies.
LAPD Sought Ring Home Security Video Related to Black Lives Matter Protests
Emails show that the LAPD repeatedly asked camera owners for footage during the demonstrations, raising First Amendment concerns.
By Sam Biddle for The Intercept on February 16, 2021
Decode the Default
Technology has never been colorblind. It’s time to abolish notions of “universal” users of software.
From The Internet Health Report 2020 on January 1, 2021
How the LAPD and Palantir Use Data to Justify Racist Policing
In a new book, a sociologist who spent months embedded with the LAPD details how data-driven policing techwashes bias.
By Mara Hvistendahl for The Intercept on January 30, 2021
Racism and “Smart Borders”
As many of us had our attention focused on the use of biometric surveillance technologies in managing the COVID-19 pandemic, in a new UN report prof. E. Tendayi Achiume forcefully puts the spotlight on the racial and discriminatory dimension of biometric surveillance technology in border enforcement.
How the Netherlands uses A.I. for ethnic profiling
China using artificial intelligence to oppress the Uighurs: sounds far removed from you? The Netherlands, too, tracks and pursues specific population groups with algorithms. Such as in Roermond, where cameras sound the alarm for cars with Eastern European licence plates.
By Florentijn van Rootselaar for OneWorld on January 14, 2021
How our data encodes systematic racism
Technologists must take responsibility for the toxic ideologies that our data sets and algorithms reflect.
By Deborah Raji for MIT Technology Review on December 10, 2020
Cory Doctorow on Reclaiming Technologies of Oppression
Who holds the power in tech?
By Cory Doctorow for Slate Magazine on October 26, 2019
Community Defense: Sarah T. Hamid on Abolishing Carceral Technologies
A conversation about how to break cages.
By Sarah T. Hamid for Logic on August 31, 2020
Technological Testing Grounds
By Antonella Napolitano, Chris Jones, Kostantinos Kakavoulis and Sarah Chander for European Digital Rights (EDRi) on November 1, 2020
Dataminr Targets Communities of Color for Police
Insiders say Dataminr’s “algorithmic” Twitter search involves human staffers perpetuating confirmation biases.
By Sam Biddle for The Intercept on October 21, 2020
The Netherlands needs an algorithm watchdog
Privacy: Despite the childcare benefits scandal, the government continues to use dubious algorithms, observes Dagmar Oudshoorn. Time for a regulator.
By Dagmar Oudshoorn for NRC on October 14, 2020
Asymmetrical Power: The intransparency of the Dutch Police
In this interview with Jair Schalkwijk and Naomi Appelman, we try to bring some transparency to the use of facial recognition technologies in law enforcement.
By Margarita Osipian for The Hmm on October 8, 2020
Structural Racism, Digital Rights and Technology
European Digital Rights (EDRi) recommendations to inform the European Commission Action Plan on Structural Racism.
By Petra Molnar and Sarah Chander for European Digital Rights (EDRi) on July 1, 2020
Technology Can’t Predict Crime, It Can Only Weaponize Proximity to Policing
In June 2020, Santa Cruz, California became the first city in the United States to ban municipal use of predictive policing, a method of deploying law enforcement resources according to data-driven analytics that supposedly predict the perpetrators, victims, or locations of future crimes. Notably, Santa Cruz was also among the first cities in the country to experiment with the technology, piloting and then adopting a predictive policing program in 2011. That program used historical and current crime data to divide parts of the city into 500-foot-by-500-foot blocks in order to pinpoint locations likely to be the scene of future crimes. After nine years, however, the city council voted unanimously to ban it over fears of how it perpetuated racial inequality.
By Matthew Guariglia for Electronic Frontier Foundation (EFF) on September 3, 2020
Data-Informed Predictive Policing Was Heralded As Less Biased. Is It?
Critics say it merely techwashes injustice.
By Annie Gilbertson for The Markup on August 20, 2020
Center for Critical Race and Digital Studies
The Center for Critical Race and Digital Studies produces cutting-edge research that illuminates the ways that race, ethnicity, and identity shape and are shaped by digital technologies.
From Center for Critical Race and Digital Studies