Raziye Buse Çetin: ‘The absence of marginalised people in AI policymaking’

Creating welcoming and safe spaces for racialised people in policymaking is essential for addressing AI harms. Since the beginning of my career as an AI policy researcher, I have witnessed many important instances where people of colour were almost entirely absent from AI policy conversations. I remember very well the discomfort I felt when I was stopped at the entrance of a launch event for a report on algorithmic bias. The person tasked with ushering people into the meeting room was convinced that I was not “in the right place”. Following a completely avoidable policing situation, I was in the room, but the room didn’t seem right to me. Although the topic was algorithmic bias and discrimination, I couldn’t spot a single racialised person there — the people most likely to experience algorithmic harm.

By Raziye Buse Çetin for Who Writes The Rules on March 11, 2019

Dr Nakeema Stefflbauer: ‘#defundbias in online hiring and listen to the people in Europe whom AI algorithms harm’

The first time I applied to work at a European company, my interviewer verbally grilled me about my ethnic origin. “Is your family from Egypt? Morocco? Are you Muslim?” asked a white Belgian man looking for a project manager. He was the CEO. My CV at the time was US-style, without a photograph, but with descriptions of research I had conducted at various Middle East and North African universities. I’d listed my nationality and my BA, MA, and PhD degrees, which confirmed my Ivy League graduate status several times over. “Are either of your parents Middle Eastern?” the CEO persisted.

By Nakeema Stefflbauer for Who Writes The Rules on August 23, 2021

Asha Allen: ‘The Brussels bubble: Advocating for the rights of marginalised women and girls in EU tech policy’

Since 2017, the issue of online violence against women and girls has increasingly crept up the EU political agenda. Thanks to the collective work of inspirational activists, whom I have the honour of working side-by-side with, the reality of the persistent harms that racialised and marginalised women face is now being recognised, which is a marked win. This has not been without its challenges, particularly speaking as a young Black woman advocate in the Brussels political bubble.

By Asha Allen for Who Writes The Rules on August 23, 2021

Nothing About Us, Without Us: Introducing Digital Rights for All

It is exciting, and it is just a beginning: on 6 October 2021, the very first workshop of the Digital Rights for All programme will take place. The programme aims to promote meaningful racial, social, and economic justice initiatives that challenge the discriminatory design, development, and use of technologies through policy, advocacy, and strategic litigation efforts.

By Laurence Meyer for Digital Freedom Fund on October 4, 2021

If AI is the problem, is debiasing the solution?

The development and deployment of artificial intelligence (AI) in all areas of public life have raised many concerns about its harmful consequences for society, in particular its impact on marginalised communities. EDRi’s latest report, “Beyond Debiasing: Regulating AI and its Inequalities”, authored by Agathe Balayn and Dr Seda Gürses, argues that policymakers must tackle the root causes of the power imbalances created by the pervasive use of AI systems. By promoting technical ‘debiasing’ as the main solution to AI-driven structural inequality, we risk vastly underestimating the scale of the social, economic and political problems AI systems can inflict.

By Agathe Balayn and Seda Gürses for European Digital Rights (EDRi) on September 21, 2021

Why Europe needs a new vocabulary to talk about race

In this article for AlgorithmWatch, Nicolas Kayser-Bril highlights an important issue facing Europe in the fight against racist technologies: we lack the words to talk about racism. He shows why Europeans need a new vocabulary and discourse to understand and discuss racist AI systems. For example, concepts such as ‘racial justice’ have no part in the EU’s anti-discrimination agenda, and ‘ethnicity’ is not recognised as a proxy for race in a digital context. This missing vocabulary greatly harms our ability to challenge and dismantle these systems and, crucially, the racism at their root.

Racist and classist predictive policing exists in Europe too

The enduring idea that technology can solve many of society’s existing problems continues to permeate governments. For EUobserver, Fieke Jansen and Sarah Chander illustrate some of the problematic and harmful uses of ‘predictive’ algorithmic systems by states and public authorities across the UK and Europe.

The right to repair our devices is also a social justice issue

Over the past couple of years, devices like our phones have become much harder to repair, and unauthorized repair often leads to a loss of warranty. This is partially driven by our manufactured need for devices that are slimmer and slicker, but is mostly an explicit strategy to make us throw away our old devices and have us buy new ones.

Algorithmic discrimination in Europe: challenges and opportunities for gender equality and non-discrimination law

This report investigates how algorithmic discrimination challenges the set of legal guarantees put in place in Europe to combat discrimination and ensure equal treatment. More specifically, it examines whether and how the current gender equality and non-discrimination legislative framework in place in the EU can adequately capture and redress algorithmic discrimination. It explores the gaps and weaknesses that emerge at both the EU and national levels from the interaction between, on the one hand, the specific types of discrimination that arise when algorithms are used in decision-making systems and, on the other, the particular material and personal scope of the existing legislative framework. This report also maps out the existing legal solutions, accompanying policy measures and good practice to address and redress algorithmic discrimination both at EU and national levels. Moreover, this report proposes its own integrated set of legal, knowledge-based and technological solutions to the problem of algorithmic discrimination.

By Janneke Gerards and Raphaële Xenidis for the Publications Office of the European Union on March 10, 2021

Why EU needs to be wary that AI will increase racial profiling

This week the EU announces new regulations on artificial intelligence. It needs to set clear limits on the most harmful uses of AI, including predictive policing, biometric mass surveillance, and applications that exacerbate historic patterns of racist policing.

By Fieke Jansen and Sarah Chander for EUobserver on April 19, 2021

This is the EU’s chance to stop racism in artificial intelligence

As the European Commission prepares its legislative proposal on artificial intelligence, human rights groups are watching closely for clear rules to limit discriminatory AI. In practice, this means a ban on biometric mass surveillance practices and red lines (legal limits) to stop harmful uses of AI-powered technologies.

By Sarah Chander for European Digital Rights (EDRi) on March 16, 2021

Down with (discriminating) systems

As the EU formulates its response in its upcoming ‘Action Plan on Racism’, EDRi outlines why it must address structural racism in technology as part of upcoming legislation.

By Sarah Chander for European Digital Rights (EDRi) on September 2, 2020
