With the development of artificial intelligence racing forward at warp speed, some of the richest men in the world may be deciding the fate of humanity right now.
By Garrison Lovely for Jacobin on January 22, 2025
This paper examines ‘open’ artificial intelligence (AI). Claims about ‘open’ AI often lack precision, frequently eliding scrutiny of substantial industry concentration in large-scale AI development and deployment, and often incorrectly applying understandings of ‘open’ imported from free and open-source software to AI systems. At present, powerful actors are seeking to shape policy using claims that ‘open’ AI is either beneficial to innovation and democracy, on the one hand, or detrimental to safety, on the other. When policy is being shaped, definitions matter. To add clarity to this debate, we examine the basis for claims of openness in AI, and offer a material analysis of what AI is and what ‘openness’ in AI can and cannot provide: examining models, data, labour, frameworks, and computational power. We highlight three main affordances of ‘open’ AI, namely transparency, reusability, and extensibility, and we observe that maximally ‘open’ AI allows some forms of oversight and experimentation on top of existing models. However, we find that openness alone does not perturb the concentration of power in AI. Just as many traditional open-source software projects were co-opted in various ways by large technology companies, we show how rhetoric around ‘open’ AI is frequently wielded in ways that exacerbate rather than reduce concentration of power in the AI sector.
By David Gray Widder, Meredith Whittaker, and Sarah Myers West for Nature on November 27, 2024
In May 2024, Access Now’s Caterina Rodelli travelled across Greece to meet with local civil society organisations supporting migrant people and monitoring human rights violations, and to see first-hand how and where surveillance technologies are deployed at Europe’s borders. In the second instalment of a three-part blog series, she explains how EU-funded research projects on border surveillance are legitimising violent migration policies.
By Caterina Rodelli for Access Now on September 25, 2024
In May 2024, Access Now’s Caterina Rodelli travelled across Greece to meet with local civil society organisations supporting migrant people and monitoring human rights violations, and to see first-hand how and where surveillance technologies are deployed at Europe’s borders. In the third and final instalment of a three-part blog series, she explains how new migrant detention centres on the Greek island of Samos are shaping the blueprint for EU-wide mass surveillance.
By Caterina Rodelli for Access Now on October 2, 2024
In May 2024, Access Now’s Caterina Rodelli travelled across Greece to meet with local civil society organisations supporting migrant people and monitoring human rights violations, and to see first-hand how and where surveillance technologies are deployed at Europe’s borders. In the first of a three-part blog series reflecting on what she saw, Caterina explains how, all too often, digitalising borders dehumanises the people trying to cross them.
By Caterina Rodelli for Access Now on September 18, 2024
AI that purports to read our feelings may enhance the user experience, but concerns over misuse and bias mean the field is fraught with potential dangers.
By Ned Carter Miles for The Guardian on June 23, 2024
AlgorithmWatch experimented with three major generative AI tools, generating 8,700 images of politicians. They found that all these tools make an active effort to lessen bias, but that the way they attempt to do this is problematic.
In the run-up to the EU elections, AlgorithmWatch has investigated which election-related images can be generated by popular AI systems. Two of the largest providers don’t adhere to the security measures they themselves recently announced.
By Nicolas Kayser-Bril for AlgorithmWatch on May 29, 2024
A conversation with Dr. Joy Buolamwini.
By Joy Buolamwini and Nabiha Syed for The Markup on November 18, 2023
Meta has deployed a new AI system on Facebook and Instagram to fix its algorithmic bias problem for housing ads in the US. But it’s probably more band-aid than AI fairness solution. Gaps in Meta’s compliance report make it difficult to verify if the system is working as intended, which may preview what’s to come from Big Tech compliance reporting in the EU.
By John Albert for AlgorithmWatch on November 17, 2023
What you put into self-learning AI systems is what you get back. Technology, largely developed by white men, thereby amplifies and conceals existing prejudices. It is mostly women (of colour) who are sounding the alarm.
By Marieke Rotman, Nani Jansen Reventlow, Oumaima Hajri and Tanya O’Carroll for De Groene Amsterdammer on July 12, 2023
As EU institutions start decisive meetings on the Artificial Intelligence (AI) Act, a broad civil society coalition is urging them to prioritise people and fundamental rights.
From European Digital Rights (EDRi) on July 12, 2023
Text-to-image models amplify stereotypes about race and gender — here’s why that matters.
By Dina Bass and Leonardo Nicoletti for Bloomberg on June 1, 2023
In this session, we explored how the EU Charter right to non-discrimination can be (and has been) used to fight back against discriminatory e-proctoring systems.
By Naomi Appelman and Robin Pocornie for Digital Freedom Fund on May 31, 2023
With the rapid development of AI systems, more and more people are trying to grapple with the potential impact of these systems on our societies and daily lives. One frequently used way to make sense of AI is through metaphors, which either help to clarify or horribly muddy the waters.
Diana Sardjoe writes for Fair Trials about how her sons were profiled by the Amsterdam police on the basis of risk models (a form of predictive policing) called ‘Top600’ (for adults) and ‘Top400’ (for people aged 12 to 23). Because of this profiling, her sons were “continually monitored and harassed by police.”
There must be a moratorium on the use of algorithms in risk profiling, argues Samira Rafaela, Member of the European Parliament for D66.
By Samira Rafaela for Binnenlands Bestuur on October 10, 2022
The Racism and Technology Center co-signed an open letter asking the EU member states to make sure that the upcoming Digital Services Act will abolish so-called ‘dark patterns’ and advertising that is based on tracking and harvesting personal data.
The Racism and Technology Center organised a panel titled Centering social injustice, de-centering tech: The case of the Dutch child benefits scandal and beyond at Privacy Camp 2022, a conference that brings together digital rights advocates, activists, academics and policymakers. Together with Merel Koning (Amnesty International), Nadia Benaissa (Bits of Freedom) and Sanne Stevens (Justice, Equity and Technology Table), the discussion used the Dutch child benefits scandal as an example to highlight issues of deeply rooted racism and discrimination in the public sector. The fixation on algorithms and automated decision-making systems tends to obscure these fundamental problems. Often, the use of technology by governments functions to normalise and rationalise existing racist and classist practices.
Today, 30 November 2021, European Digital Rights (EDRi) and 114 civil society organisations launched a collective statement to call for an Artificial Intelligence Act (AIA) which foregrounds fundamental rights.
From European Digital Rights (EDRi) on November 30, 2021
Creating welcoming and safe spaces for racialised people in policymaking is essential for addressing AI harms. Since the beginning of my career as an AI policy researcher, I’ve witnessed many important instances where people of color were almost totally absent from AI policy conversations. I remember very well the discomfort I experienced when I was stopped at the entrance of a launch event for a report on algorithmic bias. The person tasked with ushering people into the meeting room was convinced that I was not “in the right place”. After that completely avoidable policing situation, I was in the room, but the room didn’t seem right to me. Although the topic was algorithmic bias and discrimination, I couldn’t spot a single racialised person there, the very people who are most likely to experience algorithmic harm.
By Raziye Buse Çetin for Who Writes The Rules on March 11, 2019
The first time I applied to work at a European company, my interviewer verbally grilled me about my ethnic origin. “Is your family from Egypt? Morocco? Are you Muslim?” asked a white Belgian man looking for a project manager. He was the CEO. My CV at the time was US-style, without a photograph, but with descriptions of research I had conducted at various Middle East and North African universities. I’d listed my nationality and my BA, MA, and PhD degrees, which confirmed my Ivy League graduate status several times over. “Are either of your parents Middle Eastern?” the CEO persisted.
By Nakeema Stefflbauer for Who Writes The Rules on August 23, 2021
Since 2017, the issue of online violence against women and girls has increasingly crept up the EU political agenda. Thanks to the collective work of inspirational activists, whom I have the honour of working side-by-side with, the persistent harms that racialised and marginalised women face are now recognised as a reality, a marked win. This has not been without its challenges, particularly speaking as a young Black woman advocate in the Brussels political bubble.
By Asha Allen for Who Writes The Rules on August 23, 2021
European jobseekers are being disadvantaged by AI bias in recruiting. How can a region that wants to champion human rights allow this?
By Nakeema Stefflbauer for Sifted on October 8, 2021
It is exciting, and it is just a beginning: on 6 October 2021, the very first workshop of the Digital Rights for All programme will take place. It aims to promote meaningful racial, social, and economic justice initiatives that challenge discriminatory design, development, and use of technologies through policy, advocacy, and strategic litigation efforts.
By Laurence Meyer for Digital Freedom Fund on October 4, 2021
Policy makers are starting to understand that many systems running on AI exhibit some form of racial bias. So they are happy when computer scientists tell them that ‘debiasing’ is a solution for these problems: testing the system for racial and other forms of bias, and making adjustments until these no longer show up in the results.
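To make concrete what that ‘debiasing’ recipe amounts to, here is a minimal sketch, in Python, of the loop the teaser describes: measure outcomes per group, then nudge per-group decision thresholds until the disparity stops showing up in the results. Everything in it, the scores, group names, and parameters, is hypothetical and purely illustrative of how surface-level such patching is.

```python
# A minimal, illustrative sketch of the 'debiasing' loop described above:
# measure a disparity metric across groups, then adjust per-group decision
# thresholds until the disparity no longer shows up in the outcomes.
# All data and names here are hypothetical; real systems are far messier.

def selection_rate(scores, threshold):
    """Fraction of candidates whose score clears the threshold."""
    return sum(s >= threshold for s in scores) / len(scores)

def debias_thresholds(scores_by_group, start=0.5, tolerance=0.02, step=0.01):
    """Nudge per-group thresholds until selection rates roughly match."""
    thresholds = {g: start for g in scores_by_group}
    for _ in range(200):  # cap iterations to guarantee termination
        rates = {g: selection_rate(s, thresholds[g])
                 for g, s in scores_by_group.items()}
        lo, hi = min(rates, key=rates.get), max(rates, key=rates.get)
        if rates[hi] - rates[lo] <= tolerance:
            break  # disparity no longer shows up in the results
        thresholds[lo] -= step  # select more from the under-selected group
        thresholds[hi] += step  # select fewer from the over-selected group
    return thresholds

# Hypothetical model scores for two demographic groups:
scores = {"group_a": [0.2, 0.4, 0.55, 0.7, 0.9],
          "group_b": [0.1, 0.3, 0.45, 0.6, 0.8]}
print(debias_thresholds(scores))
```

The loop will happily make the numbers match while leaving untouched why the scores differ between groups in the first place, which is precisely the objection raised here and in the report below.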
The development and deployment of artificial intelligence (AI) in all areas of public life have raised many concerns about the harmful consequences on society, in particular the impact on marginalised communities. EDRi’s latest report “Beyond Debiasing: Regulating AI and its Inequalities”, authored by Agathe Balayn and Dr. Seda Gürses, argues that policymakers must tackle the root causes of the power imbalances caused by the pervasive use of AI systems. In promoting technical ‘debiasing’ as the main solution to AI-driven structural inequality, we risk vastly underestimating the scale of the social, economic and political problems AI systems can inflict.
By Agathe Balayn and Seda Gürses for European Digital Rights (EDRi) on September 21, 2021
Humans are being tasked with overseeing algorithms that were put in place with the promise of augmenting human deficiencies.
By Amba Kak and Ben Green for Slate Magazine on June 15, 2021
Gwendoline Delbos-Corfield MEP in conversation with Laurence Meyer from the Digital Freedom Fund about the dangers of the increasing use of biometric mass surveillance, both within the EU and outside it, and the impact it can have on the lives of people who are already being discriminated against.
By Gwendoline Delbos-Corfield and Laurence Meyer for Greens/EFA on June 24, 2021
In this article for Algorithm Watch, Nicolas Kayser-Bril highlights an important issue facing Europe in the fight against racist technologies: we lack the words to talk about racism. He shows why Europeans need a new vocabulary and discourse to understand and discuss racist AI systems. For example, concepts such as ‘Racial Justice’ have no part in the EU’s anti-discrimination agenda and ‘ethnicity’ is not recognised as a proxy for race in a digital context. The lack of this vocabulary greatly harms our current ability to challenge and dismantle these systems and, crucially, the root of racism.
The enduring idea that technology will be able to solve many of the existing problems in society continues to permeate across governments. For the EUObserver, Fieke Jansen and Sarah Chander illustrate some of the problematic and harmful uses of ‘predictive’ algorithmic systems by states and public authorities across the UK and Europe.
Over the past couple of years, devices like our phones have become much harder to repair, and unauthorized repair often leads to a loss of warranty. This is partially driven by our manufactured need for devices that are slimmer and slicker, but is mostly an explicit strategy to make us throw away our old devices and have us buy new ones.
This report investigates how algorithmic discrimination challenges the set of legal guarantees put in place in Europe to combat discrimination and ensure equal treatment. More specifically, it examines whether and how the current gender equality and non-discrimination legislative framework in place in the EU can adequately capture and redress algorithmic discrimination. It explores the gaps and weaknesses that emerge at both the EU and national levels from the interaction between, on the one hand, the specific types of discrimination that arise when algorithms are used in decision-making systems and, on the other, the particular material and personal scope of the existing legislative framework. This report also maps out the existing legal solutions, accompanying policy measures and good practice to address and redress algorithmic discrimination both at EU and national levels. Moreover, this report proposes its own integrated set of legal, knowledge-based and technological solutions to the problem of algorithmic discrimination.
By Janneke Gerards and Raphaële Xenidis for the Publications Office of the European Union on March 10, 2021
“Far from a ‘human-centred’ approach, the draft law in its current form runs the risk of enabling Orwellian surveillance states,” writes @sarahchander from @edri.
By Sarah Chander for Euronews on April 22, 2021
This week the EU announces new regulations on artificial intelligence. It needs to set clear limits on the most harmful uses of AI, including predictive policing, biometric mass surveillance, and applications that exacerbate historic patterns of racist policing.
By Fieke Jansen and Sarah Chander for EUobserver on April 19, 2021
As the European Commission prepares its legislative proposal on artificial intelligence, human rights groups are watching closely for clear rules to limit discriminatory AI. In practice, this means a ban on biometric mass surveillance practices and red lines (legal limits) to stop harmful uses of AI-powered technologies.
By Sarah Chander for European Digital Rights (EDRi) on March 16, 2021
Upcoming rules on AI might make Europe’s race issues a tech problem too.
By Melissa Heikkilä for POLITICO on March 16, 2021
Dutch benefits scandal highlights need for EU scrutiny.
By Nani Jansen Reventlow for POLITICO on March 2, 2021
The EU is preparing its ‘Action Plan’ to address structural racism in Europe. With digital high on the EU’s legislative agenda, it’s time we tackle racism perpetuated by technology, writes Sarah Chander.
By Sarah Chander for EURACTIV.com on September 3, 2020
As the EU formulates its response in its upcoming ‘Action Plan on Racism’, EDRi outlines why it must address structural racism in technology as part of upcoming legislation.
By Sarah Chander for European Digital Rights (EDRi) on September 2, 2020