With the rapid development of AI systems, more and more people are trying to grapple with the potential impact of these systems on our societies and daily lives. One common way to make sense of AI is through metaphors, which can either clarify matters or horribly muddy the waters.
The devastating consequences of risk-based profiling by the Dutch police
Diana Sardjoe writes for Fair Trials about how her sons were profiled by the Amsterdam police on the basis of risk models (a form of predictive policing) called ‘Top600’ (for adults) and ‘Top400’ (for people aged 12 to 23). Because of this profiling, her sons were “continually monitored and harassed by police.”
High time for an investigation into institutional racism in municipalities
There must be a moratorium on the use of algorithms for risk profiling, argues Samira Rafaela, Member of the European Parliament for D66.
By Samira Rafaela for Binnenlands Bestuur on October 10, 2022
72 civil society organisations to the EU: “Abolish tracking-based online advertising”
The Racism and Technology Center co-signed an open letter asking the EU member states to make sure that the upcoming Digital Services Act will abolish so-called ‘dark patterns’ and advertising that is based on tracking and harvesting personal data.
Centering social injustice, de-centering tech
The Racism and Technology Center organised a panel titled Centering social injustice, de-centering tech: The case of the Dutch child benefits scandal and beyond at Privacy Camp 2022, a conference that brings together digital rights advocates, activists, academics and policymakers. Together with Merel Koning (Amnesty International), Nadia Benaissa (Bits of Freedom) and Sanne Stevens (Justice, Equity and Technology Table), the discussion used the Dutch child benefits scandal as an example to highlight issues of deeply rooted racism and discrimination in the public sector. The fixation on algorithms and automated decision-making systems tends to obscure these fundamental problems. Often, the use of technology by governments functions to normalise and rationalise existing racist and classist practices.
Civil society calls on the EU to put fundamental rights first in the AI Act
Today, 30 November 2021, European Digital Rights (EDRi) and 114 civil society organisations launched a collective statement to call for an Artificial Intelligence Act (AIA) which foregrounds fundamental rights.
From European Digital Rights (EDRi) on November 30, 2021
Raziye Buse Çetin: ‘The absence of marginalised people in AI policymaking’
Creating welcoming and safe spaces for racialised people in policymaking is essential for addressing AI harms. Since the beginning of my career as an AI policy researcher, I’ve witnessed many important instances where people of color were almost totally absent from AI policy conversations. I remember very well the feeling of discomfort I experienced when I was stopped at the entrance of a launch event for a report on algorithmic bias. The person tasked with ushering people into the meeting room was convinced that I was not “in the right place”. Following a completely avoidable policing situation, I was in the room, but the room didn’t seem right to me. Although the topic was algorithmic bias and discrimination, I couldn’t spot a single racialised person there, the very people most likely to experience algorithmic harm.
By Raziye Buse Çetin for Who Writes The Rules on March 11, 2019
Dr Nakeema Stefflbauer: ‘#defundbias in online hiring and listen to the people in Europe whom AI algorithms harm’
The first time I applied to work at a European company, my interviewer verbally grilled me about my ethnic origin. “Is your family from Egypt? Morocco? Are you Muslim?” asked a white Belgian man looking for a project manager. He was the CEO. My CV at the time was US-style, without a photograph, but with descriptions of research I had conducted at various Middle East and North African universities. I’d listed my nationality and my BA, MA, and PhD degrees, which confirmed my Ivy League graduate status several times over. “Are either of your parents Middle Eastern?” the CEO persisted.
By Nakeema Stefflbauer for Who Writes The Rules on August 23, 2021
Asha Allen: ‘The Brussels bubble: Advocating for the rights of marginalised women and girls in EU tech policy’
Since 2017, the issue of online violence against women and girls has increasingly crept up the EU political agenda. Thanks to the collective work of the inspirational activists I have the honour to work side-by-side with, the reality of the persistent harms racialised and marginalised women face is now being recognised, a marked win. This has not been without its challenges, particularly speaking as a young Black woman advocate in the Brussels political bubble.
By Asha Allen for Who Writes The Rules on August 23, 2021
Europe wants to champion human rights. So why doesn’t it police biased AI in recruiting?
European jobseekers are being disadvantaged by AI bias in recruiting. How can a region that wants to champion human rights allow this?
By Nakeema Stefflbauer for Sifted on October 8, 2021
Nothing About Us, Without Us: Introducing Digital Rights for All
It is exciting, and it is just a beginning: on 6 October 2021, the very first workshop of the Digital Rights for All programme will take place. It aims to promote meaningful racial, social and economic justice initiatives that challenge the discriminatory design, development, and use of technologies through policy, advocacy, and strategic litigation efforts.
By Laurence Meyer for Digital Freedom Fund on October 4, 2021
Why ‘debiasing’ will not solve racist AI
Policy makers are starting to understand that many systems running on AI exhibit some form of racial bias. So they are happy when computer scientists tell them that ‘debiasing’ is a solution for these problems: testing the system for racial and other forms of bias, and making adjustments until these no longer show up in the results.
If AI is the problem, is debiasing the solution?
The development and deployment of artificial intelligence (AI) in all areas of public life have raised many concerns about harmful consequences for society, in particular the impact on marginalised communities. EDRi’s latest report, “Beyond Debiasing: Regulating AI and its Inequalities”, authored by Agathe Balayn and Dr Seda Gürses, argues that policymakers must tackle the root causes of the power imbalances caused by the pervasive use of AI systems. In promoting technical ‘debiasing’ as the main solution to AI-driven structural inequality, we risk vastly underestimating the scale of the social, economic and political problems AI systems can inflict.
By Agathe Balayn and Seda Gürses for European Digital Rights (EDRi) on September 21, 2021
The False Comfort of Human Oversight as an Antidote to A.I. Harm
Humans are being tasked with overseeing algorithms that were put in place with the promise of augmenting human deficiencies.
By Amba Kak and Ben Green for Slate Magazine on June 15, 2021
Reinforce rights, not racism: Why we must fight biometric mass surveillance in Europe
Gwendoline Delbos-Corfield MEP in conversation with Laurence Meyer, from the Digital Freedom Fund, about the dangers of the increasing use of biometric mass surveillance – both within the EU and outside it – and the impact it can have on the lives of people who are already being discriminated against.
By Gwendoline Delbos-Corfield and Laurence Meyer for Greens/EFA on June 24, 2021
Why Europe needs a new vocabulary to talk about race
In this article for Algorithm Watch, Nicolas Kayser-Bril highlights an important issue facing Europe in the fight against racist technologies: we lack the words to talk about racism. He shows why Europeans need a new vocabulary and discourse to understand and discuss racist AI systems. For example, concepts such as ‘Racial Justice’ have no part in the EU’s anti-discrimination agenda and ‘ethnicity’ is not recognised as a proxy for race in a digital context. The lack of this vocabulary greatly harms our current ability to challenge and dismantle these systems and, crucially, the root of racism.
Racist and classist predictive policing exists in Europe too
The enduring idea that technology will be able to solve many of the existing problems in society continues to permeate governments. For the EUobserver, Fieke Jansen and Sarah Chander illustrate some of the problematic and harmful uses of ‘predictive’ algorithmic systems by states and public authorities across the UK and Europe.
The right to repair our devices is also a social justice issue
Over the past couple of years, devices like our phones have become much harder to repair, and unauthorized repair often leads to a loss of warranty. This is partially driven by our manufactured need for devices that are slimmer and slicker, but is mostly an explicit strategy to make us throw away our old devices and have us buy new ones.
Algorithmic discrimination in Europe: Challenges and opportunities for gender equality and non-discrimination law
This report investigates how algorithmic discrimination challenges the set of legal guarantees put in place in Europe to combat discrimination and ensure equal treatment. More specifically, it examines whether and how the current gender equality and non-discrimination legislative framework in place in the EU can adequately capture and redress algorithmic discrimination. It explores the gaps and weaknesses that emerge at both the EU and national levels from the interaction between, on the one hand, the specific types of discrimination that arise when algorithms are used in decision-making systems and, on the other, the particular material and personal scope of the existing legislative framework. This report also maps out the existing legal solutions, accompanying policy measures and good practice to address and redress algorithmic discrimination both at EU and national levels. Moreover, this report proposes its own integrated set of legal, knowledge-based and technological solutions to the problem of algorithmic discrimination.
By Janneke Gerards and Raphaële Xenidis for the Publications Office of the European Union on March 10, 2021
EU’s new AI law risks enabling Orwellian surveillance states
“Far from a ‘human-centred’ approach, the draft law in its current form runs the risk of enabling Orwellian surveillance states,” writes Sarah Chander of European Digital Rights (EDRi).
By Sarah Chander for Euronews on April 22, 2021
Why EU needs to be wary that AI will increase racial profiling
This week the EU announces new regulations on artificial intelligence. It needs to set clear limits on the most harmful uses of AI, including predictive policing, biometric mass surveillance, and applications that exacerbate historic patterns of racist policing.
By Fieke Jansen and Sarah Chander for EUobserver on April 19, 2021
This is the EU’s chance to stop racism in artificial intelligence
As the European Commission prepares its legislative proposal on artificial intelligence, human rights groups are watching closely for clear rules to limit discriminatory AI. In practice, this means a ban on biometric mass surveillance practices and red lines (legal limits) to stop harmful uses of AI-powered technologies.
By Sarah Chander for European Digital Rights (EDRi) on March 16, 2021
Europe’s artificial intelligence blindspot: Race
Upcoming rules on AI might make Europe’s race issues a tech problem too.
By Melissa Heikkilä for POLITICO on March 16, 2021
Automated racism: How tech can entrench bias
Dutch benefits scandal highlights need for EU scrutiny.
By Nani Jansen Reventlow for POLITICO on March 2, 2021
Technology has codified structural racism – will the EU tackle racist tech?
The EU is preparing its ‘Action Plan’ to address structural racism in Europe. With digital high on the EU’s legislative agenda, it’s time we tackle racism perpetuated by technology, writes Sarah Chander.
By Sarah Chander for EURACTIV.com on September 3, 2020
Down with (discriminating) systems
As the EU formulates its response in its upcoming ‘Action Plan on Racism’, EDRi outlines why it must address structural racism in technology as part of upcoming legislation.
By Sarah Chander for European Digital Rights (EDRi) on September 2, 2020