Let us explain. With cats
By Aaron Sankin and Natasha Uzcátegui-Liggett for The Markup on July 18, 2024
Let us explain. With cats
By Aaron Sankin and Natasha Uzcátegui-Liggett for The Markup on July 18, 2024
Ondanks de toeslagenaffaire gingen vorig jaar overheidsorganisaties door met het gebruik van ‘ondoordachte algoritmes’, schrijft de Autoriteit Persoonsgegevens.
By Jeroen Piersma for Het Financieele Dagblad on July 2, 2024
Er werd beloofd dat deze systemen eerlijker zouden worden, maar uit het nieuwe jaarverslag van de Autoriteit Persoonsgegevens blijkt dat er sindsdien weinig is verbeterd. Algoritmes categoriseren mensen met bepaalde kenmerken nog steeds onterecht als risico. Noëlle Cecilia, medeoprichter van Brush AI (@ai.brush) was zondag te gast bij Mandy. Zij maakt algoritmes voor bedrijven en deed een jaar lang onderzoek naar de eerlijkheid en discriminatie ervan. Zij legt ons uit waarom de mindset moet veranderen bij het ontwikkelen van AI-systemen.
By Noëlle Cecilia for Instagram on July 9, 2024
Automated decision-making systems contain hidden discriminatory prejudices. We’ll explain the causes, possible consequences, and the reasons why existing laws do not provide sufficient protection against algorithmic discrimination.
By Pie Sombetzki for AlgorithmWatch on June 26, 2024
In 2018, Lauren Rhue showed that two leading emotion detection software products had a racial bias against Black Men: Face++ thought they were more angry, and Microsoft AI thought they were more contemptuous.
Continue reading “Racist Technology in Action: AI detection of emotion rates Black basketball players as ‘angrier’ than their White counterparts”AI that purports to read our feelings may enhance user experience but concerns over misuse and bias mean the field is fraught with potential dangers.
By Ned Carter Miles for The Guardian on June 23, 2024
The New York Times published a fascinating overview of the American census forms since the late 18th century. It shows how the form keeps trying to ‘capture’ the country’s demographics, “creating and reshaping the ever-changing views of racial and ethnic identity.”
Continue reading “The datafication of race and ethnicity”Our own Hans de Zwart was a guest in the ‘Met Nerds om Tafel’ podcast. With Karen Palmer (creator of Consensus Gentium, a film about surveillance that watches you back), they discussed the role of art and storytelling in getting us ready for the future.
Continue reading “Podcast: Art as a prophetic activity for the future of AI”MyLife.com is one of those immoral American companies that collect personal information to sell onwards as profiles on the one hand, while at the same suggesting to the people that are being profiled that incriminating information about them exists online that they can get removed by buying a subscription (that then does nothing and auto-renews in perpetuity).
Continue reading “Racist Technology in Action: MyLife.com and discriminatory predation”In the run-up to the EU elections, AlgorithmWatch has investigated which election-related images can be generated by popular AI systems. Two of the largest providers don’t adhere to security measures they have announced themselves recently.
By Nicolas Kayser-Bril for AlgorithmWatch on May 29, 2024
Last year, Investico revealed how DUO, the Dutch organization for administering student grants, was using a racist algorithm to decide which students would get a home visit to check for fraudulent behaviour. The Minister of Education immediately stopped the use of the algorithm.
Continue reading “Students with a non-European migration background had a 3.0 times higher chance of receiving an unfounded home visit from the Dutch student grants fraud department”For more than a year now, the Dutch Ministry of Foreign Affairs has ignored advice from its experts and continued its use of discriminatory risk profiling of visa applicants.
Continue reading “Dutch Ministry of Foreign Affairs dislikes the conclusions of a solid report that marks their visa process as discriminatory so buys a shoddy report saying the opposite”The Dutch Institute for Human Rights has commissioned research exploring the possible risks for discrimination and exclusion relating to the use of algorithms in education in the Netherlands.
Continue reading “Dutch Institute of Human Rights tells the government: “Test educational tools for possible discriminatory effects””The “I am not a typo” campaign is asking the tech giants to update their name dictionaries and stop autocorrecting the 41% of names given to babies in England and Wales.
Continue reading “Racist Technology in Action: Autocorrect is Western- and White-focused”DUO heeft de onafhankelijke stichting Algorithm Audit vervolgonderzoek laten doen naar de manier waarop DUO tussen 2012 en 2023 controleerde of een student terecht studiefinanciering ontving voor uitwonende studenten of niet. De conclusies van het vervolgonderzoek bevestigen dat studenten met een migratieachtergrond hierbij indirect zijn gediscrimineerd.
From Dienst Uitvoering Onderwijs (DUO) on May 21, 2024
Het was te mooi om waar te zijn: een algoritme om fraude in de bijstand op te sporen. Ondanks waarschuwingen bleef de gemeente Rotterdam er bijna vier jaar lang in geloven. Een handjevol ambtenaren, zich onvoldoende bewust van ethische risico’s, kon jarenlang ongestoord experimenteren met de data van kwetsbare mensen.
By Romy van Dijk and Saskia Klaassen for Vers Beton on October 23, 2023
Discriminerend algoritme: Volgens een onderzoek discrimineerde het algoritme dat Buitenlandse Zaken gebruikt om visumaanvragen te beoordelen. Uit onvrede met die conclusie vroeg het ministerie om een second opinion.
By Carola Houtekamer and Merijn Rengers for NRC on May 1, 2024
Bloomberg did a clever experiment: they had OpenAI’s GPT rank resumes and found that it shows a gender and racial bias just on the basis of the name of the candidate.
Continue reading “OpenAI’s GPT sorts resumes with a racial bias”In 2020, two NGOs finally forced the UK Home Office’s hand, compelling it to abandon its secretive and racist algorithm for sorting visitor visa applications. Foxglove and The Joint Council for the Welfare of Immigrants (JCWI) had been battling the algorithm for years, arguing that it is a form of institutionalized racism and calling it “speedy boarding for white people.”
Continue reading “Racist Technology in Action: The UK Home Office’s Sorting Algorithm and the Racist Violence of Borders”Recruiters are eager to use generative AI, but a Bloomberg experiment found bias against job candidates based on their names alone.
By Davey Alba, Leon Yin, and Leonardo Nicoletti for Bloomberg on March 8, 2024
Wisconsin took down its dropout predictions after a Markup investigation. Here’s what two students we featured have to say.
By Maurice Newton and Mia Townsend for The Markup on December 21, 2023
So-called “smart” borders are just more sophisticated sites of racialized surveillance and violence. We need abolitionist tools to counter them.
By Ruha Benjamin for Inquest on February 13, 2024
A report on the Allegheny Family Screening Tool (pilot for predictive risk modeling in family policing) and its overestimation of utility and risk.
By Aaron Horowitz, Ana Gutierrez, Anjana Samant, Kath Xu, Marissa Gerchick, Noam Shemtov, Sophie Beiers, Tarak Shah, and Tobi Jegede for Logic on December 13, 2023
“It could happen again tomorrow” is one of the main devastating conclusions of the parlementary inquiry following the child benefits scandal.
Continue reading “The child benefits scandal: no lessons learned”Students are using ChatGPT for writing their essays. Antiplagiarism tools are trying to detect whether a text was written by AI. It turns out that these type of detectors consistently misclassify the text of non-native speakers as AI-generated.
Continue reading “Racist Technology in Action: ChatGPT detectors are biased against non-native English writers”Investigative reporting by The Markup showed how U.S. internet providers offer wildly different internet speeds for the same monthly fee. The neighbourhoods with the worst deals had lower median incomes and were very often the least White.
Continue reading “Racist Technology in Action: Slower internet service for the same price in U.S. lower income areas with fewer White residents”Even though the Dutch tax office (the Belastingdienst) was advised to immediately stop the use of three risk profiling algorithms, the office decided to continue their use, according to this reporting by Follow the Money.
Continue reading “Dutch Tax Office keeps breaking the law with their risk profiling algorithms”Na het toeslagenschandaal kreeg de Belastingdienst het advies om drie mogelijk discriminerende fraude-algoritmen onmiddellijk stop te zetten. Toch besloot de fiscus ermee door te gaan: het organisatiebelang woog zwaarder dan naleving van de wet en bescherming van grondrechten. Dat blijkt uit documenten die twee jaar nadat om openbaring was verzocht aan Follow the Money zijn vrijgegeven. ‘Onbegrijpelijk en verbijsterend.’
By David Davidson and Sebastiaan Brommersma for Follow the Money on December 14, 2023
Meta has deployed a new AI system on Facebook and Instagram to fix its algorithmic bias problem for housing ads in the US. But it’s probably more band-aid than AI fairness solution. Gaps in Meta’s compliance report make it difficult to verify if the system is working as intended, which may preview what’s to come from Big Tech compliance reporting in the EU.
By John Albert for AlgorithmWatch on November 17, 2023
Tumoren ontdekken, nieuwe medicijnen ontwikkelen – beloftes genoeg over wat kunstmatige intelligentie kan betekenen voor de medische wereld. Maar voordat je zulk belangrijk werk kunt overlaten aan technologie, moet je precies snappen hoe die werkt. En zover zijn we nog lang niet.
By Maurits Martijn for De Correspondent on November 6, 2023
Parent company Meta says bug caused ‘inappropriate’ auto-translations and was now fixed while employee says it pushed ‘a lot of people over the edge’.
By Josh Taylor for The Guardian on October 20, 2023
A report commission by Meta — Facebook and Instagram’s parent company — found bias against Palestinians during an Israeli assault last May.
By Sam Biddle for The Intercept on September 21, 2022
In a world where swiping left or right is the main route to love, whose profiles dating apps show you can change the course of your life.
Continue reading “Equal love: Dating App Breeze seeks to address Algorithmic Discrimination”In its online series of digital dilemmas, Al Jazeera takes a look at AI in relation to social inequities. Loyal readers of this newsletter will recognise many of the examples they touch on, like how Stable Diffusion exacerbates and amplifies racial and gender disparities or the Dutch childcare benefits scandal.
Continue reading “Al Jazeera asks: Can AI eliminate human bias or does it perpetuate it?”This collaborative investigative effort by Spotlight Bureau, Lighthouse Reports and Follow the Money, dives into the story of a Moroccan-Dutch family in Veenendaal which was targeted for fraud by the Dutch government.
Continue reading “Racist Technology in Action: Flagged as risky simply for requesting social assistance in Veenendaal, The Netherlands”Two new papers from Sony and Meta describe novel methods to make bias detection fairer.
By Melissa Heikkilä for MIT Technology Review on September 25, 2023
Bij het gebruik van technologie worden onze maatschappelijke problemen gereflecteerd en soms verergerd. Die maatschappelijke problemen kennen een lange geschiedenis van oneerlijke machtsstructuren, racisme, seksisme en andere vormen van discriminatie. Wij zien het als onze taak om die oneerlijke structuren te herkennen en ons daartegen te verzetten.
By Evely Austin, Ilja Schurink and Nadia Benaissa for Bits of Freedom on September 12, 2023
De politie stopt ‘per direct’ met het algoritme waarmee ze voorspelt of iemand in de toekomst geweld gaat gebruiken. Eerder deze week onthulde Follow the Money dat het zogeheten Risicotaxatie Instrument Geweld op ethisch en statistisch gebied ondermaats is.
By David Davidson for Follow the Money on August 25, 2023
De politie voorspelt al sinds 2015 met een algoritme wie er in de toekomst geweld zal plegen. Van Marokkaanse en Antilliaanse Nederlanders werd die kans vanwege hun achtergrond groter geschat. Dat gebeurt nu volgens de politie niet meer, maar daarmee zijn de gevaren van het model niet opgelost. ‘Aan dit algoritme zitten enorme risico’s.’
By David Davidson and Marc Schuilenburg for Follow the Money on August 23, 2023
For many years the Dutch police has used a risk modeling algorithm to predict the chance that an individual suspect will commit a violent crime. Follow the Money exposed the total lack of a moral, legal, and statistical justification for its use, and now the police has stopped using the system.
Continue reading “Dutch police used algorithm to predict violent behaviour without any safeguards”Proudly powered by WordPress | Theme: Baskerville 2 by Anders Noren.