Voyager, which pitches its tech to police, has suggested indicators such as Instagram usernames that show Arab pride can signal inclination towards extremism.
By Johana Bhuiyan and Sam Levin for The Guardian on November 17, 2021
In its report of 25 October, Amnesty slams the Dutch government's use of discriminatory algorithms in the child benefits scandal (toeslagenaffaire) and warns that the likelihood of such a scandal occurring again is very high. The report is aptly titled 'Xenophobic machines: Discrimination through unregulated use of algorithms in the Dutch childcare benefits scandal' and conducts a human rights analysis of a specific sub-element of the scandal: the use of algorithms and risk models. The report builds on the findings of the Dutch data protection authority and several other government reports.
Continue reading "Amnesty's grim warning against another 'Toeslagenaffaire'"

Technology rarely invents new societal problems. Instead, it digitizes them, supersizes them, and allows them to balloon and duplicate at the speed of light. That's exactly the problem we've seen with location-based, crowd-sourced "public safety" apps like Citizen.
By Matthew Guariglia for Electronic Frontier Foundation (EFF) on October 21, 2021
Chris Gilliard grew up with racist policing in Detroit. He sees a new form of oppression in the tech we use every day.
By Chris Gilliard and Will Oremus for Washington Post on September 17, 2021
Brands originally built on racist stereotypes have existed for more than a century. Now racial prejudice is also creeping into the design of tech products and algorithms.
From YouTube on September 15, 2021
In 2016, ProPublica investigated the fairness of COMPAS, a system used by courts in the United States to assess the likelihood of a defendant committing another crime. COMPAS uses a risk assessment form to estimate this risk of reoffending, and judges are expected to take the resulting prediction into account when they decide on sentencing.
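ProPublica's central finding was statistical: Black defendants who did not reoffend were roughly twice as likely to be labelled high risk as white defendants who did not reoffend. A minimal sketch of that kind of false-positive-rate comparison might look as follows; all data below is invented for illustration, since COMPAS itself is proprietary.

```python
# Hypothetical sketch of a ProPublica-style fairness check:
# compare false positive rates of a risk label across racial groups.
# All records are invented; this is not COMPAS data.

records = [
    # (group, labelled_high_risk, actually_reoffended)
    ("Black", True, False), ("Black", True, True), ("Black", False, False),
    ("Black", True, False), ("White", False, False), ("White", True, True),
    ("White", False, False), ("White", False, True), ("White", True, False),
]

def false_positive_rate(group):
    """Share of non-reoffenders in `group` that were labelled high risk."""
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)

for group in ("Black", "White"):
    print(f"{group}: false positive rate = {false_positive_rate(group):.2f}")
```

A gap between those two numbers is exactly the disparity ProPublica documented: the score is "wrong" far more often, in the costly direction, for one group.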
Continue reading "Racist Technology in Action: Predicting future criminals with a bias against Black people"

Chicago's predictive policing program told a man he would be involved with a shooting, but it couldn't determine which side of the gun he would be on. Instead, it made him the victim of a violent crime.
By Matt Stroud for The Verge on May 24, 2021
The enduring idea that technology will be able to solve many of the existing problems in society continues to permeate across governments. For the EUObserver, Fieke Jansen and Sarah Chander illustrate some of the problematic and harmful uses of ‘predictive’ algorithmic systems by states and public authorities across the UK and Europe.
Continue reading "Racist and classist predictive policing exists in Europe too"

"Far from a 'human-centred' approach, the draft law in its current form runs the risk of enabling Orwellian surveillance states," writes @sarahchander from @edri.
By Sarah Chander for Euronews on April 22, 2021
This week the EU announces new regulations on artificial intelligence. It needs to set clear limits on the most harmful uses of AI, including predictive policing, biometric mass surveillance, and applications that exacerbate historic patterns of racist policing.
By Fieke Jansen and Sarah Chander for EUobserver on April 19, 2021
The Rekenkamer Rotterdam (a court of audit) looked at how the city of Rotterdam is using predictive algorithms and whether that use could lead to ethical problems. In its report, it describes how the city lacks a proper overview of the algorithms it is using, and how there is no coordination, so no one takes responsibility when things go wrong. It also found that while one particular fraud detection algorithm did not use sensitive data (like nationality) directly, so-called proxy variables for ethnicity, such as low literacy, which might correlate with ethnicity, were still part of its calculations. According to the Rekenkamer this could lead to unfair treatment, or as we would call it: ethnic profiling.
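The mechanism behind proxy variables is easy to demonstrate. In the hypothetical sketch below (all numbers are synthetic; this is not the Rotterdam model), a "neutral" risk model that never sees group membership still scores one group systematically higher, because one of its inputs correlates with that group.

```python
# Hypothetical sketch of how a proxy variable smuggles ethnicity
# back into a model. All data is synthetic and invented.
import random

random.seed(0)

population = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    # Assumed correlation for illustration: low literacy is more
    # common in group B (e.g. many recent migrants).
    low_literacy = random.random() < (0.4 if group == "B" else 0.1)
    population.append((group, low_literacy))

# The model never sees group membership, only the proxy variable.
def risk_score(low_literacy):
    return 0.8 if low_literacy else 0.2

for g in ("A", "B"):
    scores = [risk_score(lit) for grp, lit in population if grp == g]
    print(f"group {g}: mean risk score = {sum(scores) / len(scores):.2f}")
# Group B gets a higher average score even though ethnicity never
# appears as an input: de facto ethnic profiling.
```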
Continue reading "Rotterdam's use of algorithms could lead to ethnic profiling"

An already infamous example of racist technology is Amazon's facial recognition system 'Rekognition', which had an enormous racial and gender bias. Researcher and founder of the Algorithmic Justice League Joy Buolamwini (the 'poet of code'), together with Deborah Raji, meticulously reconstructed how accurate Rekognition was in identifying different types of faces. Buolamwini and Raji's study has been extremely consequential in laying bare the racism and sexism in these facial recognition systems and was featured in the popular Coded Bias documentary.
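The method behind such a study is a disaggregated audit: instead of reporting one aggregate accuracy figure, run the classifier over a benchmark labelled by gender and skin type and report the error rate per intersectional subgroup. A minimal sketch, with invented predictions rather than Rekognition's actual output, could look like this:

```python
# Hypothetical sketch of a disaggregated accuracy audit in the
# style of Buolamwini and Raji. All predictions are invented.
from collections import defaultdict

# (skin_type, true_gender, predicted_gender) from a labelled benchmark
results = [
    ("darker", "female", "male"), ("darker", "female", "female"),
    ("darker", "male", "male"), ("darker", "female", "male"),
    ("lighter", "male", "male"), ("lighter", "female", "female"),
    ("lighter", "male", "male"), ("lighter", "female", "female"),
]

errors = defaultdict(lambda: [0, 0])  # subgroup -> [wrong, total]
for skin, truth, pred in results:
    errors[(skin, truth)][0] += truth != pred
    errors[(skin, truth)][1] += 1

for (skin, gender), (wrong, total) in sorted(errors.items()):
    print(f"{skin}-skinned {gender}: error rate {wrong / total:.0%}")
```

Aggregate accuracy can look respectable while one subgroup, darker-skinned women in the original study, carries nearly all the errors; that is exactly what the disaggregation exposes.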
Continue reading "Racist Technology in Action: Amazon's racist facial 'Rekognition'"

As the European Commission prepares its legislative proposal on artificial intelligence, human rights groups are watching closely for clear rules to limit discriminatory AI. In practice, this means a ban on biometric mass surveillance practices and red lines (legal limits) to stop harmful uses of AI-powered technologies.
By Sarah Chander for European Digital Rights (EDRi) on March 16, 2021
Charlton McIlwain, author of the book Black Software, takes a good hard look at IBM in a longread for Logic magazine.
Continue reading "IBM is failing to increase diversity while successfully producing racist information technologies"

When a secretive start-up scraped the internet to build a facial-recognition tool, it tested a legal and ethical limit — and blew the future of privacy in America wide open.
By Kashmir Hill for The New York Times on March 18, 2021
Race, tech, and medicine: Remarks from Dr. Dorothy Roberts and Dr. Ruha Benjamin.
By Dorothy Roberts, Kim M Reynolds and Ruha Benjamin for Our Data Bodies Project on August 15, 2020
Dutch tax authorities used algorithms to automate an austere and punitive war on low-level fraud—the results were catastrophic.
By Gabriel Geiger for VICE on March 1, 2021
In 1965, IBM launched the most ambitious attempt ever to diversify a tech company. The industry still needs to learn the lessons of that failure.
By Charlton McIlwain for Logic on December 20, 2021
The EU is preparing its ‘Action Plan’ to address structural racism in Europe. With digital high on the EU’s legislative agenda, it’s time we tackle racism perpetuated by technology, writes Sarah Chander.
By Sarah Chander for EURACTIV.com on September 3, 2020
In his article for One World, Florentijn van Rootselaar shows how the Dutch government uses automated systems to profile certain groups based on their ethnicity. He uses several examples to expose how, even though Western countries are often quick to denounce China’s use of technology to surveil, profile and oppress the Uighurs, the same states themselves use or contribute to the development of similar technologies.
Continue reading "The Dutch government's love affair with ethnic profiling"

Emails show that the LAPD repeatedly asked camera owners for footage during the demonstrations, raising First Amendment concerns.
By Sam Biddle for The Intercept on February 16, 2021
Technology has never been colorblind. It’s time to abolish notions of “universal” users of software.
From The Internet Health Report 2020 on January 1, 2021
In a new book, a sociologist who spent months embedded with the LAPD details how data-driven policing techwashes bias.
By Mara Hvistendahl for The Intercept on January 30, 2021
While many of us had our attention focused on the use of biometric surveillance technologies in managing the COVID-19 pandemic, a new UN report by Prof. E. Tendayi Achiume forcefully puts the spotlight on the racial and discriminatory dimension of biometric surveillance technology in border enforcement.
Continue reading "Racism and 'Smart Borders'"

China using artificial intelligence to oppress the Uyghurs: sounds like a distant problem? The Netherlands, too, tracks (and persecutes) specific population groups with algorithms. Like in Roermond, where cameras raise the alarm at cars with Eastern European licence plates.
By Florentijn van Rootselaar for OneWorld on January 14, 2021
Technologists must take responsibility for the toxic ideologies that our data sets and algorithms reflect.
By Deborah Raji for MIT Technology Review on December 10, 2020
Who holds the power in tech?
By Cory Doctorow for Slate Magazine on October 26, 2019
A conversation about how to break cages.
By Sarah T. Hamid for Logic on August 31, 2020
By Antonella Napolitano, Chris Jones, Kostantinos Kakavoulis and Sarah Chander for European Digital Rights (EDRi) on November 1, 2020
Insiders say Dataminr’s “algorithmic” Twitter search involves human staffers perpetuating confirmation biases.
By Sam Biddle for The Intercept on October 21, 2020
Privacy: Despite the childcare benefits scandal, the government continues to use dubious algorithms, observes Dagmar Oudshoorn. Time for a regulator.
By Dagmar Oudshoorn for NRC on October 14, 2020
In this interview with Jair Schalkwijk and Naomi Appelman, we try to bring some transparency to the use of facial recognition technologies in law enforcement.
By Margarita Osipian for The Hmm on October 8, 2020
European Digital Rights (EDRi) recommendations to inform the European Commission Action Plan on Structural Racism.
By Petra Molnar and Sarah Chander for European Digital Rights (EDRi) on July 1, 2020
In June 2020, Santa Cruz, California became the first city in the United States to ban municipal use of predictive policing, a method of deploying law enforcement resources according to data-driven analytics that supposedly predict perpetrators, victims, or locations of future crimes. Especially interesting is that Santa Cruz was also one of the first cities in the country to experiment with the technology, piloting and then adopting a predictive policing program in 2011. That program used historic and current crime data to divide parts of the city into 500-foot by 500-foot blocks in order to pinpoint locations that were likely to be the scene of future crimes. After nine years, however, the city council voted unanimously to ban it over fears of how it perpetuated racial inequality.
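Mechanically, that kind of hotspot prediction is little more than binning: snap each historical incident to a 500-foot grid cell and rank the cells with the most past incidents as likely future crime scenes. The toy sketch below (invented coordinates, and not the vendor's actual model, which also weights recent events more heavily) shows both the method and its feedback loop.

```python
# Toy sketch of grid-based hotspot prediction of the kind piloted
# in Santa Cruz. Coordinates are invented for illustration.
from collections import Counter

CELL_FT = 500  # 500-foot by 500-foot blocks, as in the pilot

# Historical incidents as (x, y) positions in feet.
incidents = [(120, 480), (180, 430), (90, 460), (2100, 3050),
             (2140, 3010), (130, 445), (2090, 3099), (5000, 100)]

def cell(x, y):
    """Snap a position to its 500-foot grid cell."""
    return (x // CELL_FT, y // CELL_FT)

counts = Counter(cell(x, y) for x, y in incidents)

# "Predict" tomorrow's hotspots: the most-recorded cells so far.
for c, n in counts.most_common(3):
    print(f"patrol cell {c}: {n} past incidents")
# The feedback loop: extra patrols in flagged cells record more
# incidents there, which makes the same cells rank higher next
# time, entrenching whatever bias the historical data carried.
```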
By Matthew Guariglia for Electronic Frontier Foundation (EFF) on September 3, 2020
Critics say it merely techwashes injustice.
By Annie Gilbertson for The Markup on August 20, 2020
The Center for Critical Race and Digital Studies produces cutting edge research that illuminates the ways that race, ethnicity and identity shape and are shaped by digital technologies.
From Center for Critical Race and Digital Studies