In 1965, IBM launched the most ambitious attempt ever to diversify a tech company. The industry still needs to learn the lessons of that failure.
By Charlton McIlwain for Logic on December 20, 2021
The EU is preparing its ‘Action Plan’ to address structural racism in Europe. With digital high on the EU’s legislative agenda, it’s time we tackle racism perpetuated by technology, writes Sarah Chander.
By Sarah Chander for EURACTIV.com on September 3, 2020
In his article for OneWorld, Florentijn van Rootselaar shows how the Dutch government uses automated systems to profile certain groups based on their ethnicity. Through several examples he exposes how Western countries, though quick to denounce China's use of technology to surveil, profile and oppress the Uighurs, themselves use or contribute to the development of similar technologies.
Emails show that the LAPD repeatedly asked camera owners for footage during the demonstrations, raising First Amendment concerns.
By Sam Biddle for The Intercept on February 16, 2021
Technology has never been colorblind. It’s time to abolish notions of “universal” users of software.
From The Internet Health Report 2020 on January 1, 2021
In a new book, a sociologist who spent months embedded with the LAPD details how data-driven policing techwashes bias.
By Mara Hvistendahl for The Intercept on January 30, 2021
As many of us had our attention focused on the use of biometric surveillance technologies in managing the COVID-19 pandemic, in a new UN report prof. E. Tendayi Achiume forcefully puts the spotlight on the racial and discriminatory dimension of biometric surveillance technology in border enforcement.
China using artificial intelligence to oppress the Uighurs: sounds like a faraway problem? The Netherlands, too, tracks and targets specific population groups with algorithms. As in Roermond, where cameras raise the alarm for cars with Eastern European license plates.
By Florentijn van Rootselaar for OneWorld on January 14, 2021
Technologists must take responsibility for the toxic ideologies that our data sets and algorithms reflect.
By Deborah Raji for MIT Technology Review on December 10, 2020
Who holds the power in tech?
By Cory Doctorow for Slate Magazine on October 26, 2019
A conversation about how to break cages.
By Sarah T. Hamid for Logic on August 31, 2020
By Antonella Napolitano, Chris Jones, Kostantinos Kakavoulis and Sarah Chander for European Digital Rights (EDRi) on November 1, 2020
Insiders say Dataminr’s “algorithmic” Twitter search involves human staffers perpetuating confirmation biases.
By Sam Biddle for The Intercept on October 21, 2020
Privacy: Despite the childcare benefits scandal, the Dutch government continues to use dubious algorithms, Dagmar Oudshoorn observes. It is time for a regulator.
By Dagmar Oudshoorn for NRC on October 14, 2020
In this interview with Jair Schalkwijk and Naomi Appelman, we try to bring some transparency to the use of facial recognition technologies in law enforcement.
By Margarita Osipian for The Hmm on October 8, 2020
European Digital Rights (EDRi) recommendations to inform the European Commission Action Plan on Structural Racism.
By Petra Molnar and Sarah Chander for European Digital Rights (EDRi) on July 1, 2020
In June 2020, Santa Cruz, California became the first city in the United States to ban municipal use of predictive policing, a method of deploying law enforcement resources according to data-driven analytics that supposedly predict perpetrators, victims, or locations of future crimes. Especially interesting is that Santa Cruz was also one of the first cities in the country to experiment with the technology, piloting and then adopting a predictive policing program in 2011. That program used historic and current crime data to break some areas of the city into 500-foot-by-500-foot blocks in order to pinpoint locations likely to be the scene of future crimes. After nine years, however, the city council voted unanimously to ban the program over fears that it perpetuated racial inequality.
By Matthew Guariglia for Electronic Frontier Foundation (EFF) on September 3, 2020
Critics say it merely techwashes injustice.
By Annie Gilbertson for The Markup on August 20, 2020
The Center for Critical Race and Digital Studies produces cutting edge research that illuminates the ways that race, ethnicity and identity shape and are shaped by digital technologies.
From Center for Critical Race and Digital Studies