Amsterdam spent years trying to build an unbiased welfare fraud algorithm. Here’s what we found when we analyzed it.
By Amanda Silverman, Eileen Guo, Eva Constantaras, Gabriel Geiger, and Justin-Casimir Braun for Lighthouse Reports on June 11, 2025
More than half of students are now using generative AI, casting a shadow over campuses as tutors and students turn on each other and hardworking learners are caught in the flak. Will Coldwell reports on a broken system.
By Will Coldwell for The Guardian on December 15, 2024
In the broadcast of May 27, 2024, Radar looks at banks that screen customers for their origin or non-Dutch surnames. We know from the toeslagenaffaire (the Dutch childcare benefits scandal) that the Belastingdienst profiled people in this way. But what about financial institutions, such as banks? And what if, purely because of your surname or place of birth, you can no longer use your payment account, or your account is even closed?
From Radar on May 27, 2024
If Rabobank thinks you pose a fraud risk, the bank subjects you to an investigation, in which it asks for private information and even about what you discuss with your lawyer. If you fail to sufficiently prove your innocence, the bank shows you the door, even without a substantiated suspicion of fraud. One mother of two young children is at risk of losing her home this way.
By Jan-Hein Strop for Follow the Money on October 19, 2024
Rabobank “proudly” parts ways with roughly ten thousand customers a month who pose a money-laundering risk, Rabo executive Philippe Vollot said last month at a closed meeting. The bank also shows people the door when it is not certain they have done anything wrong, Vollot acknowledged.
By Jan-Hein Strop for Follow the Money on October 18, 2024
Common Sense, an education platform that advocates for and advises on equitable and safe school environments, published a report last month on the adoption of generative AI at home and at school. Parents, teachers, and children were surveyed to better understand the technology’s adoption and effects.
Continue reading “Falsely Flagged: The AI-Driven Discrimination Black Students Face”
Artificial intelligence was supposed to ensure that fewer civilians die in wars. In reality, more do. Because where people are reduced to data points, opening fire quickly feels objective and correct.
By Lauren Gould, Linde Arentze, and Marijn Hoijtink for De Groene Amsterdammer on July 24, 2024
The Detroit Police Department arrested three people after bad facial recognition matches, a national record. But it’s adopting new policies that even the A.C.L.U. endorses.
By Kashmir Hill for The New York Times on June 29, 2024
Three men falsely arrested based on face recognition technology have joined the fight against a California bill that aims to place guardrails around police use of the technology. They say it will still allow abuses and misguided arrests.
By Khari Johnson for The Markup on June 12, 2024
Live facial recognition is becoming increasingly common on UK high streets. Should we be worried?
By James Clayton for BBC on May 25, 2024
The ubiquitous availability of AI has made plagiarism detection software utterly useless, argues our Hans de Zwart in the Volkskrant.
Continue reading “AI detection has no place in education”
A conversation with Dr. Joy Buolamwini.
By Joy Buolamwini and Nabiha Syed for The Markup on November 18, 2023
Porcha Woodruff thought the police who showed up at her door to arrest her for carjacking were joking. She is the first woman known to be wrongfully accused as a result of facial recognition technology.
By Kashmir Hill for The New York Times on August 6, 2023
The rapid adoption of generative language models has brought about substantial advancements in digital communication, while simultaneously raising concerns regarding the potential misuse of AI-generated content. Although numerous detection methods have been proposed to differentiate between AI and human-generated content, the fairness and robustness of these detectors remain underexplored. In this study, we evaluate the performance of several widely-used GPT detectors using writing samples from native and non-native English writers. Our findings reveal that these detectors consistently misclassify non-native English writing samples as AI-generated, whereas native writing samples are accurately identified. Furthermore, we demonstrate that simple prompting strategies can not only mitigate this bias but also effectively bypass GPT detectors, suggesting that GPT detectors may unintentionally penalize writers with constrained linguistic expressions. Our results call for a broader conversation about the ethical implications of deploying ChatGPT content detectors and caution against their use in evaluative or educational settings, particularly when they may inadvertently penalize or exclude non-native English speakers from the global discourse.
By Eric Wu, James Zou, Mert Yuksekgonul, Weixin Liang and Yining Mao for arXiv.org on April 18, 2023
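The paper’s central measurement lends itself to a small worked example: run a detector over texts known to be human-written and compare how often each group is wrongly flagged. The Python sketch below is entirely hypothetical; `detect_ai_probability` is a toy heuristic standing in for the commercial detectors the paper tested, and the sample texts are invented placeholders, not the study’s corpora.

```python
# A minimal sketch of a per-group false-positive-rate check, the kind of
# fairness evaluation the abstract describes. Everything here is a stand-in:
# the detector is a toy heuristic, the texts are invented examples.

def detect_ai_probability(text: str) -> float:
    """Toy detector: maps low vocabulary variety to a high 'AI-written' score.
    A real evaluation would call an off-the-shelf GPT detector here."""
    words = text.lower().split()
    type_token_ratio = len(set(words)) / max(len(words), 1)
    return 1.0 - type_token_ratio

def false_positive_rate(human_texts: list[str], threshold: float = 0.5) -> float:
    """Share of genuinely human-written texts flagged as AI-generated."""
    flagged = sum(detect_ai_probability(t) >= threshold for t in human_texts)
    return flagged / len(human_texts)

# Invented placeholders standing in for the two human-written corpora the
# study compares (essays by native and non-native English writers).
native_samples = ["The committee deliberated at length before reaching a verdict."]
non_native_samples = ["the food is good and the people is good and the place is good"]

print("FPR, native writers:    ", false_positive_rate(native_samples))
print("FPR, non-native writers:", false_positive_rate(non_native_samples))
```

Even this toy illustrates the mechanism the authors point to: a detector that treats constrained linguistic expression as a machine signature will flag the non-native-sounding sample while passing the native one, producing unequal false-positive rates between groups of human writers.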
Because of a bad facial recognition match and other hidden technology, Randal Reid spent nearly a week in jail, falsely accused of stealing purses in a state he said he had never even visited.
By Kashmir Hill and Ryan Mac for The New York Times on March 31, 2023
The Markup found the state’s decade-old dropout prediction algorithms don’t work and may be negatively influencing how educators perceive students of color.
By Todd Feathers for The Markup on April 27, 2023
Cities and counties across the country have banned government use of face surveillance technology, and many more are weighing proposals to do so. From Boston to San Francisco, Jackson, Mississippi to Minneapolis, elected officials and activists know that face surveillance gives police the power to track us wherever we go. It also disproportionately impacts people of color, turns us all into perpetual suspects, increases the likelihood of being falsely arrested, and chills people’s willingness to participate in First Amendment-protected activities. Even Amazon, known for operating one of the largest video surveillance networks in the history of the world, extended its moratorium on selling face recognition to police.
By Matthew Guariglia for Electronic Frontier Foundation (EFF) on April 4, 2023
Ignoring earlier Dutch failures in automated decision-making, and ignoring advice from its own experts, the Dutch Ministry of Foreign Affairs has decided to cut costs and cut corners by implementing a discriminatory profiling system to process visa applications.
Continue reading “Racist Technology in Action: You look similar to someone we didn’t like → Dutch visa denied”
Writing in WIRED about predictive crime software in policing, Chris Gilliard reiterated that data-driven policing systems and programs are fundamentally premised on the assumption that historical crime data determines the future.
Continue reading “Predictive policing constrains our possibilities for better futures”
So long as algorithms are trained on racist historical data and outdated values, there will be no opportunities for change.
By Chris Gilliard for WIRED on January 2, 2022
Activists say the biometric tools, developed principally around white datasets, risk reinforcing racist practices.
By Charlotte Peet for Rest of World on October 22, 2021
A New Jersey man was accused of shoplifting and trying to hit an officer with a car. He is the third known Black man to be wrongfully arrested based on face recognition.
By Kashmir Hill for The New York Times on December 29, 2020
Enabling Apple’s “Limit Adult Websites” filter in the iOS Screen Time setting will block users from seeing any Google search results for “Asian” in any browser on their iPhone. That’s not great, folks.
By Victoria Song for Gizmodo on February 4, 2021
In a new book, a sociologist who spent months embedded with the LAPD details how data-driven policing techwashes bias.
By Mara Hvistendahl for The Intercept on January 30, 2021
China using artificial intelligence to oppress the Uyghurs: does that sound like a distant problem? The Netherlands, too, tracks (and prosecutes) specific population groups with algorithms. Take Roermond, where cameras sound the alarm at cars with Eastern European license plates.
By Florentijn van Rootselaar for OneWorld on January 14, 2021
How many more Black men will be wrongfully arrested before this country puts a stop to the unregulated use of facial recognition software?
From Washington Post on December 31, 2020
Just as the death of George Floyd led to worldwide protests, the biased image-processing technology PULSE did the same in the scientific world. There were calls for a ban, but neuroinformatician Sennay Ghebreab wonders whether a digital iconoclasm solves the problem.
By Sennay Ghebreab for Vrij Nederland on October 5, 2020
Algorithmic Copyright Management: Background Audio, False Positives and De facto Censorship
By Adam Holland and Nick Simmons for Lumen on July 21, 2020