Surveilling Europe’s edges: when research legitimises border violence

In May 2024, Access Now’s Caterina Rodelli travelled across Greece to meet with local civil society organisations supporting migrant people and monitoring human rights violations, and to see first-hand how and where surveillance technologies are deployed at Europe’s borders. In the second instalment of a three-part blog series, she explains how EU-funded research projects on border surveillance are legitimising violent migration policies. Catch up on part one here.

By Caterina Rodelli for Access Now on September 25, 2024

Surveilling Europe’s edges: detention centres as a blueprint for mass surveillance

In May 2024, Access Now’s Caterina Rodelli travelled across Greece to meet with local civil society organisations supporting migrant people and monitoring human rights violations, and to see first-hand how and where surveillance technologies are deployed at Europe’s borders. In the third and final instalment of a three-part blog series, she explains how new migrant detention centres on the Greek island of Samos are shaping the blueprint for EU-wide mass surveillance.

By Caterina Rodelli for Access Now on October 2, 2024

Surveilling Europe’s edges: when digitalisation means dehumanisation

In May 2024, Access Now’s Caterina Rodelli travelled across Greece to meet with local civil society organisations supporting migrant people and monitoring human rights violations, and to see first-hand how and where surveillance technologies are deployed at Europe’s borders. In the first of a three-part blog series reflecting on what she saw, Caterina explains how, all too often, digitalising borders dehumanises the people trying to cross them.

By Caterina Rodelli for Access Now on September 18, 2024

Not a solution: Meta’s new AI system to contain discriminatory ads

Meta has deployed a new AI system on Facebook and Instagram to fix its algorithmic bias problem for housing ads in the US. But it’s probably more band-aid than AI fairness solution. Gaps in Meta’s compliance report make it difficult to verify if the system is working as intended, which may preview what’s to come from Big Tech compliance reporting in the EU.

By John Albert for AlgorithmWatch on November 17, 2023
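
As an illustration of the kind of outcome check that the gaps in Meta’s compliance report make impossible for outsiders, here is a minimal, hypothetical sketch in Python. The groups, data, and metric are invented for illustration; this is not Meta’s actual system or any figure from the report.

```python
# Hypothetical audit sketch: measure skew in housing-ad delivery across
# demographic groups. Invented data; not Meta's system or its metrics.
from collections import defaultdict

# Each record: (demographic_group, ad_was_delivered)
impressions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

delivered = defaultdict(int)
total = defaultdict(int)
for group, got_ad in impressions:
    total[group] += 1
    delivered[group] += got_ad

rates = {g: delivered[g] / total[g] for g in total}
# Disparate-impact style ratio: worst-served group vs best-served group.
parity_ratio = min(rates.values()) / max(rates.values())
print(rates)                                # per-group delivery rates
print(f"parity ratio: {parity_ratio:.2f}")  # 1.0 would mean equal delivery
# Without per-group delivery numbers in the compliance report, nobody
# outside the company can compute even this simple ratio.
```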

It is mainly women of colour who are calling out AI’s biases

Whatever you put into self-learning AI systems, you get back. Technology, largely developed by white men, thereby amplifies and conceals those biases. It is women (of colour), above all, who are sounding the alarm.

By Marieke Rotman, Nani Jansen Reventlow, Oumaima Hajri and Tanya O’Carroll for De Groene Amsterdammer on July 12, 2023

The devastating consequences of risk-based profiling by the Dutch police

Diana Sardjoe writes for Fair Trials about how her sons were profiled by the Amsterdam police on the basis of risk models (a form of predictive policing) called ‘Top600’ (for adults) and ‘Top400’ (for people aged 12 to 23). Because of this profiling, her sons were “continually monitored and harassed by police.”

Continue reading “The devastating consequences of risk-based profiling by the Dutch police”
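
To make concrete what a risk model of this kind amounts to mechanically, here is a minimal, hypothetical sketch: individuals are scored on weighted features, and everyone above a cut-off is flagged for monitoring. The features, weights, and threshold are invented; the actual Top600/Top400 methodology is not public in this form. The sketch illustrates how a feature like postcode, acting as a proxy for ethnicity and class, makes the resulting list inherit that bias.

```python
# Hypothetical sketch of a risk-scoring model of the kind used in
# predictive policing. Invented features and weights; not the actual
# Top600/Top400 methodology.

people = [
    {"name": "A", "police_contacts": 3, "flagged_postcode": 1, "school_dropout": 0},
    {"name": "B", "police_contacts": 1, "flagged_postcode": 0, "school_dropout": 1},
]

# "flagged_postcode" stands in for the kind of neighbourhood variable
# that acts as a proxy for ethnicity and class.
WEIGHTS = {"police_contacts": 2.0, "flagged_postcode": 3.0, "school_dropout": 1.5}
THRESHOLD = 5.0

for person in people:
    score = sum(weight * person[feature] for feature, weight in WEIGHTS.items())
    if score >= THRESHOLD:
        # Being listed triggers continual monitoring, regardless of conduct.
        print(f"{person['name']} flagged (score {score})")
```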

Centering social injustice, de-centering tech

The Racism and Technology Center organised a panel titled “Centering social injustice, de-centering tech: the case of the Dutch child benefits scandal and beyond” at Privacy Camp 2022, a conference that brings together digital rights advocates, activists, academics and policymakers. Together with Merel Koning (Amnesty International), Nadia Benaissa (Bits of Freedom) and Sanne Stevens (Justice, Equity and Technology Table), the panel used the Dutch child benefits scandal as an example to highlight issues of deeply rooted racism and discrimination in the public sector. The fixation on algorithms and automated decision-making systems tends to obscure these fundamental problems. Often, the use of technology by governments functions to normalise and rationalise existing racist and classist practices.

Continue reading “Centering social injustice, de-centering tech”

Raziye Buse Çetin: ‘The absence of marginalised people in AI policymaking’

Creating welcoming and safe spaces for racialised people in policymaking is essential for addressing AI harms. Since the beginning of my career as an AI policy researcher, I’ve witnessed many important instances where people of color were almost totally absent from AI policy conversations. I remember very well the discomfort I felt when I was stopped at the entrance of a launch event for a report on algorithmic bias. The person tasked with ushering people into the meeting room was convinced that I was not “in the right place”. After that completely avoidable policing situation, I was in the room, but the room didn’t seem right to me. Although the topic was algorithmic bias and discrimination, I couldn’t spot a single racialised person there, the very people most likely to experience algorithmic harm.

By Raziye Buse Çetin for Who Writes The Rules on March 11, 2019

Dr Nakeema Stefflbauer: ‘#defundbias in online hiring and listen to the people in Europe whom AI algorithms harm’

The first time I applied to work at a European company, my interviewer verbally grilled me about my ethnic origin. “Is your family from Egypt? Morocco? Are you Muslim?” asked a white Belgian man looking for a project manager. He was the CEO. My CV at the time was US-style, without a photograph, but with descriptions of research I had conducted at various Middle East and North African universities. I’d listed my nationality and my BA, MA, and PhD degrees, which confirmed my Ivy League graduate status several times over. “Are either of your parents Middle Eastern?” the CEO persisted.

By Nakeema Stefflbauer for Who Writes The Rules on August 23, 2021

Asha Allen: ‘The Brussels bubble: Advocating for the rights of marginalised women and girls in EU tech policy’

Since 2017, the issue of online violence against women and girls has increasingly crept up the EU political agenda. Thanks to the collective work of the inspirational activists I have the honour of working side-by-side with, the persistent harms that racialised and marginalised women face are now being recognised, a marked win. This has not been without its challenges, particularly speaking as a young Black woman advocate in the Brussels political bubble.

By Asha Allen for Who Writes The Rules on August 23, 2021

Nothing About Us, Without Us: Introducing Digital Rights for All

It is exciting, and it is just a beginning: on 6 October 2021, the very first workshop of the Digital Rights for All programme will take place. It aims to promote meaningful racial, social and economic justice initiatives that challenge the discriminatory design, development, and use of technologies, through policy, advocacy, and strategic litigation efforts.

By Laurence Meyer for Digital Freedom Fund on October 4, 2021

If AI is the problem, is debiasing the solution?

The development and deployment of artificial intelligence (AI) in all areas of public life have raised many concerns about harmful consequences for society, in particular the impact on marginalised communities. EDRi’s latest report, “Beyond Debiasing: Regulating AI and its Inequalities”, authored by Agathe Balayn and Dr. Seda Gürses, argues that policymakers must tackle the root causes of the power imbalances created by the pervasive use of AI systems. By promoting technical ‘debiasing’ as the main solution to AI-driven structural inequality, we risk vastly underestimating the scale of the social, economic and political problems AI systems can inflict.

By Agathe Balayn and Seda Gürses for European Digital Rights (EDRi) on September 21, 2021
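
For readers unfamiliar with what technical ‘debiasing’ usually means in practice, here is a minimal, hypothetical sketch of one common variant: choosing per-group score thresholds so that selection rates match (demographic parity). It illustrates the general technique, not an example from the report, and the invented scores show exactly what such a fix leaves untouched: the scores themselves and the system producing them.

```python
# Minimal sketch of one common "debiasing" technique: per-group score
# thresholds chosen so that selection rates match (demographic parity).
# Invented data, for illustration only.

scores = {
    "group_a": [0.9, 0.8, 0.7, 0.4],
    "group_b": [0.6, 0.5, 0.3, 0.2],
}

def threshold_for_rate(group_scores, rate):
    """Pick the threshold that selects the top `rate` fraction of a group."""
    k = max(1, round(rate * len(group_scores)))
    return sorted(group_scores, reverse=True)[k - 1]

TARGET_RATE = 0.5  # select the top half of each group
thresholds = {g: threshold_for_rate(s, TARGET_RATE) for g, s in scores.items()}
print(thresholds)  # {'group_a': 0.8, 'group_b': 0.5}
# Selection rates are now equal across groups, yet the underlying scores,
# and whatever structural inequality produced them, are unchanged.
```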

Why Europe needs a new vocabulary to talk about race

In this article for AlgorithmWatch, Nicolas Kayser-Bril highlights an important issue facing Europe in the fight against racist technologies: we lack the words to talk about racism. He shows why Europeans need a new vocabulary and discourse to understand and discuss racist AI systems. For example, concepts such as ‘racial justice’ have no part in the EU’s anti-discrimination agenda, and ‘ethnicity’ is not recognised as a proxy for race in a digital context. The lack of this vocabulary greatly harms our ability to challenge and dismantle these systems and, crucially, the racism at their root.

Continue reading “Why Europe needs a new vocabulary to talk about race”

Racist and classist predictive policing exists in Europe too

The enduring idea that technology will be able to solve many of the existing problems in society continues to permeate across governments. For the EUObserver, Fieke Jansen and Sarah Chander illustrate some of the problematic and harmful uses of ‘predictive’ algorithmic systems by states and public authorities across the UK and Europe.

Continue reading “Racist and classist predictive policing exists in Europe too”

The right to repair our devices is also a social justice issue

Over the past couple of years, devices like our phones have become much harder to repair, and unauthorized repair often leads to a loss of warranty. This is partially driven by our manufactured need for devices that are slimmer and slicker, but is mostly an explicit strategy to make us throw away our old devices and have us buy new ones.

Continue reading “The right to repair our devices is also a social justice issue”

Algorithmic discrimination in Europe: challenges and opportunities for gender equality and non-discrimination law

This report investigates how algorithmic discrimination challenges the set of legal guarantees put in place in Europe to combat discrimination and ensure equal treatment. More specifically, it examines whether and how the current EU gender equality and non-discrimination legislative framework can adequately capture and redress algorithmic discrimination. It explores the gaps and weaknesses that emerge at both the EU and national levels from the interaction between, on the one hand, the specific types of discrimination that arise when algorithms are used in decision-making systems and, on the other, the particular material and personal scope of the existing legislative framework. The report also maps out existing legal solutions, accompanying policy measures and good practice for addressing and redressing algorithmic discrimination at both EU and national levels, and proposes its own integrated set of legal, knowledge-based and technological solutions.

By Janneke Gerards and Raphaële Xenidis for the Publications Office of the European Union on March 10, 2021

Why EU needs to be wary that AI will increase racial profiling

This week the EU announces new regulations on artificial intelligence. It needs to set clear limits on the most harmful uses of AI, including predictive policing, biometric mass surveillance, and applications that exacerbate historic patterns of racist policing.

By Fieke Jansen and Sarah Chander for EUobserver on April 19, 2021

This is the EU’s chance to stop racism in artificial intelligence

As the European Commission prepares its legislative proposal on artificial intelligence, human rights groups are watching closely for clear rules to limit discriminatory AI. In practice, this means a ban on biometric mass surveillance practices and red lines (legal limits) to stop harmful uses of AI-powered technologies.

By Sarah Chander for European Digital Rights (EDRi) on March 16, 2021

Down with (discriminating) systems

As the EU formulates its response in its upcoming ‘Action Plan on Racism’, EDRi outlines why the EU must address structural racism in technology as part of upcoming legislation.

By Sarah Chander for European Digital Rights (EDRi) on September 2, 2020
