We believe that software used for monitoring students during online tests (so-called proctoring software) should be abolished because it discriminates against students with a darker skin colour.
Continue reading “How our world is designed for the ‘reference man’ and why proctoring should be abolished”

Two new technology initiatives focused on (racial) justice
We are happy to see that more and more attention is being paid to how technology intersects with problems around (racial) justice. Recently two new initiatives have launched that we would like to highlight.
Continue reading “Two new technology initiatives focused on (racial) justice”

Dutch Scientific Council knows: AI is neither neutral nor always rational
AI should be seen as a new system technology, according to The Netherlands Scientific Council for Government Policy, meaning that its impact is large, affects the whole of society, and is hard to predict. In their new Mission AI report, the Council lists five challenges for successfully embedding system technologies in society, leading to ten recommendations for governments.
Continue reading “Dutch Scientific Council knows: AI is neither neutral nor always rational”

Racist Technology in Action: an AI for ethical advice turns out to be super racist
In mid-October 2021, the Allen Institute for AI launched Delphi, an AI in the form of a research prototype that is designed “to model people’s moral judgments on a variety of everyday situations.” In simple terms: they made a machine that tries to do ethics.
Continue reading “Racist Technology in Action: an AI for ethical advice turns out to be super racist”

Regulating big tech to make sure nobody is excluded
Our very own Naomi Appelman was interviewed for Atlas, a Dutch television show about science and current affairs. She talked about her research into what laws and regulations democracies should develop to ensure that large technology companies don’t unnecessarily exclude people.
Continue reading “Regulating big tech to make sure nobody is excluded”

Why ‘debiasing’ will not solve racist AI
Policy makers are starting to understand that many systems running on AI exhibit some form of racial bias. So they are happy when computer scientists tell them that ‘debiasing’ is a solution for these problems: testing the system for racial and other forms of bias, and making adjustments until these no longer show up in the results.
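To make concrete what this ‘debiasing’ amounts to in practice, here is a minimal sketch of the test-and-adjust loop described above. All data, names and thresholds are invented for illustration; no real system works exactly like this.

```python
# Hypothetical sketch of 'debiasing': measure a bias metric, then
# adjust per-group decision thresholds until the disparity no longer
# shows up in the results. Synthetic data, illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic risk scores for two groups; group B is scored higher on average.
scores_a = rng.normal(0.45, 0.15, 1000)
scores_b = rng.normal(0.55, 0.15, 1000)

def positive_rate(scores, threshold):
    """Fraction of people flagged as 'high risk' at a given threshold."""
    return float(np.mean(scores > threshold))

# Start with one shared threshold and nudge group B's threshold
# until the flagged rates are (nearly) equal: 'demographic parity'.
thr_a = thr_b = 0.5
while positive_rate(scores_b, thr_b) - positive_rate(scores_a, thr_a) > 0.01:
    thr_b += 0.005

print(f"group A flagged: {positive_rate(scores_a, thr_a):.2f}")
print(f"group B flagged: {positive_rate(scores_b, thr_b):.2f}")
```

Note what the loop does and does not do: the disparity disappears from the measured output, but nothing about the underlying scores, the data they were trained on, or the decision to score people at all has changed, which is exactly why the post argues debiasing is not a solution.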
Continue reading “Why ‘debiasing’ will not solve racist AI”

Proof for Twitter’s bias toward lighter faces
We have written about the racist cropping algorithm that Twitter uses, and have shared how Twitter tried to fix the symptoms. Twitter also instituted an ‘algorithmic bug bounty’, asking researchers to prove bias in its algorithms.
Continue reading “Proof for Twitter’s bias toward lighter faces”

Racist Technology in Action: Racist search engine ads
Back in 2013, Harvard professor Latanya Sweeney was one of the first people to demonstrate racism (she called it ‘discrimination’) in online algorithms. She did this with her research on the ad delivery practices of Google.
Continue reading “Racist Technology in Action: Racist search engine ads”

Are we automating racism?
Vox host Joss Fong wanted to know: “Why do we think tech is neutral? How do algorithms become biased? And how can we fix these algorithms before they cause harm?”
Continue reading “Are we automating racism?”

Tech companies poured 3.8 billion USD into racial justice, but to what avail?
The Plug and Fast Company looked at what happened to the 3.8 billion dollars that US-based tech companies committed to diversity, equity, and inclusion as their response to the Black Lives Matter protests.
Continue reading “Tech companies poured 3.8 billion USD into racial justice, but to what avail?”

Long overdue: Google has improved its camera app to work better for Black people
The following short video by Vox shows how white skin has always been the norm in photography. Black people didn’t start to look good on film until the 1970s, when furniture makers complained to Kodak that its film didn’t render the difference between dark- and light-grained wood, and chocolate companies were upset that you couldn’t see the difference between dark and light chocolate.
Continue reading “Long overdue: Google has improved its camera app to work better for Black people”

Racist Technology in Action: Predicting future criminals with a bias against Black people
In 2016, ProPublica investigated the fairness of COMPAS, a system used by courts in the United States to assess the likelihood that a defendant will commit another crime. COMPAS derives this risk score from a risk assessment form filled out about the defendant. Judges are expected to take the prediction into account when they decide on sentencing.
Continue reading “Racist Technology in Action: Predicting future criminals with a bias against Black people”

The right to repair our devices is also a social justice issue
Over the past couple of years, devices like our phones have become much harder to repair, and unauthorized repair often leads to a loss of warranty. This is partially driven by our manufactured need for devices that are slimmer and slicker, but is mostly an explicit strategy to make us throw away our old devices and have us buy new ones.
Continue reading “The right to repair our devices is also a social justice issue”

Rotterdam’s use of algorithms could lead to ethnic profiling
The Rekenkamer Rotterdam (a Court of Audit) looked at how the city of Rotterdam is using predictive algorithms and whether that use could lead to ethical problems. In their report, they describe how the city lacks a proper overview of the algorithms it is using, and how there is no coordination, so no one takes responsibility when things go wrong. They also found that while sensitive data (like nationality) were not used by one particular fraud detection algorithm, so-called proxy variables for ethnicity – like low literacy, which might correlate with ethnicity – were still part of the calculations. According to the Rekenkamer this could lead to unfair treatment, or as we would call it: ethnic profiling.
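The proxy-variable problem is easy to demonstrate with a toy example. The sketch below uses entirely invented numbers (it is not Rotterdam’s actual model or data): a score that never sees ethnicity, but does use a correlated proxy like low literacy, still flags one group far more often than the other.

```python
# Illustrative sketch of a proxy variable, with invented data:
# a fraud score that never uses group membership directly can still
# skew against a group when it weights a correlated proxy.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Suppose group membership correlates with the proxy:
# 40% of group 1 vs 10% of group 0 are labelled 'low literacy'.
group = rng.integers(0, 2, n)
low_literacy = rng.random(n) < np.where(group == 1, 0.40, 0.10)

# A score that ignores 'group' entirely but weights the proxy.
score = 0.3 * low_literacy + 0.1 * rng.random(n)
flagged = score > 0.2

# The model is 'blind' to ethnicity, yet the flag rates differ sharply.
print(f"flag rate, group 0: {flagged[group == 0].mean():.2f}")
print(f"flag rate, group 1: {flagged[group == 1].mean():.2f}")
```

This is why simply removing the sensitive attribute from a model’s inputs does not prevent unfair treatment: the correlation re-enters through whatever proxies remain.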
Continue reading “Rotterdam’s use of algorithms could lead to ethnic profiling”

IBM is failing to increase diversity while successfully producing racist information technologies
Charlton McIlwain, author of the book Black Software, takes a good hard look at IBM in a long read for Logic magazine.
Continue reading “IBM is failing to increase diversity while successfully producing racist information technologies”

Racist technology in action: White only soap dispensers
In 2015, when T.J. Fitzpatrick attended a conference in Atlanta, he wasn’t able to use any of the soap dispensers in the bathroom.
Continue reading “Racist technology in action: White only soap dispensers”

The internet doesn’t have ‘universal’ users
Since 2017, Mozilla – the makers of the Firefox browser – have written a yearly report on the health of the internet. This year’s report focuses on labor rights, transparency and racial justice. The piece about racial justice makes an interesting argument about how the sites we see on the first page of a search engine are a reflection of the general popularity of these sites or their ability to pay for a top result. This leads to a ‘mainstream’ bias.
Continue reading “The internet doesn’t have ‘universal’ users”

Google fires AI researcher Timnit Gebru
Google has fired AI researcher and ethicist Timnit Gebru after she wrote an email criticising Google’s policies around diversity while struggling with Google’s leadership to get a critical paper on AI published. This angered thousands of her former colleagues and academics. They pointed at the unequal treatment that Gebru received as a Black woman and were worried about the integrity of Google’s research.
Continue reading “Google fires AI researcher Timnit Gebru”

A year of algorithms behaving badly
The Markup has published an overview of the ways in which algorithms have been given decisional powers in 2020 and have taken a wrong turn.
Continue reading “A year of algorithms behaving badly”