Racist Technology in Action: Beauty is in the eye of the AI

Where people’s notion of beauty is often steeped in cultural preferences or plain prejudice, the objectivity of an AI system would surely allow it to access a more universal conception of beauty – or so thought the developers of Beauty.AI. Alex Zhavoronkov, who consulted on the development of the Beauty.AI system, described the dystopian motivation behind it clearly: “Humans are generally biased and there needs to be a robot to provide an impartial opinion. Beauty.AI is the first step in a much larger story, in which a mobile app trained to evaluate perception of human appearance will evolve into a caring personal assistant to help users look their best and retain their youthful looks.”

Continue reading “Racist Technology in Action: Beauty is in the eye of the AI”

The Dutch government wants to continue to spy on activists’ social media

Investigative journalism by NRC brought to light that the Dutch NCTV (the National Coordinator for Counterterrorism and Security) uses fake social media accounts to track Dutch activists. The agency also targets activists working in the social justice or anti-discrimination space, tracking their work, sentiments and movements through their social media accounts. This is a clear example of how digital communication allows governments to intensify their surveillance and criminalisation of political opinions outside the mainstream.

Continue reading “The Dutch government wants to continue to spy on activists’ social media”

Silencing Black women in tech journalism

In this op-ed, Sydette Harry unpacks how the tech sector, and tech journalism in particular, has largely failed to meaningfully listen to and account for the experiences of Black women, the group that most often bears the brunt of the harmful and racist effects of technological “innovations”. While the role of tech journalism is supposedly to hold the tech industry accountable through access and insight, it has repeatedly failed to include Black people in its reporting, neither hiring Black writers nor addressing Black readers seriously as an audience. Instead, their experiences and culture are often co-opted, silenced, left unreported, and pushed out of newsrooms.

Continue reading “Silencing Black women in tech journalism”

Don’t miss this 4-part journalism series on ‘AI Colonialism’

The MIT Technology Review has written a four-part series on how the impact of AI is “repeating the patterns of colonial history.” The Review is careful not to directly equate the current situation with the colonialist capture of land, extraction of resources, and exploitation of people. Yet the series clearly shows that AI further enriches the wealthy at the tremendous expense of the poor.

Continue reading “Don’t miss this 4-part journalism series on ‘AI Colonialism’”

Exploitative labour is central to the infrastructure of AI

In this piece, Julian Posada writes about a family of five in Venezuela who synchronise their routines so that there are always two people at the computer, working for a crowdsourcing platform to make a living. They earn a few cents per task, paid in cryptocurrency, and are only allowed to cash out once they have made at least the equivalent of USD 10. On average they earn about USD 20 per week, but their earnings can be erratic, resulting in extreme stress and precarity.

Continue reading “Exploitative labour is central to the infrastructure of AI”

Inventing language to avoid algorithmic censorship

Platforms like TikTok, Twitch and Instagram use algorithmic filters to automatically block certain posts on the basis of the language they use. The Washington Post shows how this has created ‘algospeak’, a whole new vocabulary: instead of ‘dead’ users write ‘unalive’, they use ‘SA’ instead of ‘sexual assault’, and write ‘spicy eggplant’ rather than ‘vibrator’.

Continue reading “Inventing language to avoid algorithmic censorship”

Racism and technology in the Dutch municipal elections

Last week, all eyes in the Netherlands were on the municipal elections. On Wednesday, voters chose the city councils that will govern for the next four years. This year’s elections were mainly characterised by a historically low turnout and the traditional overall wins for local parties. The focus of the Racism and Technology Center is, of course, on whether the new municipal councils and governments will put issues at the intersection of social justice and technology on the agenda.

Continue reading “Racism and technology in the Dutch municipal elections”

Disinformation and anti-Blackness

In this issue of Logic, issue editor J. Khadijah Abdurahman and André Brock Jr., associate professor of Black Digital Studies at Georgia Institute of Technology and author of Distributed Blackness: African American Cybercultures, converse about the history of disinformation from Reconstruction to the present, and discuss “the unholy trinity of whiteness, modernity, and capitalism”.

Continue reading “Disinformation and anti-Blackness”

Centering social injustice, de-centering tech

The Racism and Technology Center organised a panel titled Centering social injustice, de-centering tech: The case of the Dutch child benefits scandal and beyond at Privacy Camp 2022, a conference that brings together digital rights advocates, activists, academics and policymakers. The panellists – Merel Koning (Amnesty International), Nadia Benaissa (Bits of Freedom) and Sanne Stevens (Justice, Equity and Technology Table) – used the Dutch child benefits scandal to highlight issues of deeply rooted racism and discrimination in the public sector. The fixation on algorithms and automated decision-making systems tends to obscure these fundamental problems. Often, the use of technology by governments functions to normalise and rationalise existing racist and classist practices.

Continue reading “Centering social injustice, de-centering tech”

Bits of Freedom speaks to the Dutch Senate on discriminatory algorithms

Through an official parliamentary investigative committee, the Dutch Senate is investigating how new regulation or law-making processes can help combat discrimination in the Netherlands. The committee focuses on four broad domains: the labour market, education, social security and policing. As part of these wide-ranging efforts, the Senate is hearing from a range of experts and civil society organisations. One contribution stands out from the perspective of racist technology: Nadia Benaissa from Bits of Freedom highlighted the dangers of predictive policing and other uses of automated systems in law enforcement.

Continue reading “Bits of Freedom speaks to the Dutch Senate on discriminatory algorithms”

Racist Technology in Action: “Race-neutral” traffic cameras have a racially disparate impact

Traffic cameras used to automatically hand out speeding tickets don’t look at the colour of the person driving the car. Yet ProPublica has convincingly shown how cameras without any racial bias of their own can still have a racially disparate impact.

Continue reading “Racist Technology in Action: “Race-neutral” traffic cameras have a racially disparate impact”

Nani Jansen Reventlow receives Dutch prize for championing privacy and digital rights

The Dutch digital rights NGO Bits of Freedom has awarded Nani Jansen Reventlow the “Felipe Rodriguez Award” for her outstanding work championing digital rights and her crucial efforts in decolonising the field. In this (Dutch-language) podcast, she is interviewed by Bits of Freedom’s Inge Wannet about her strategic litigation work and her ongoing fight to decolonise the digital rights field.

Continue reading “Nani Jansen Reventlow receives Dutch prize for championing privacy and digital rights”

Racist Technology in Action: U.S. universities using race in their risk algorithms as a predictor for student success

An investigation by The Markup in March 2021 revealed that some universities in the U.S. are using software with a risk algorithm that takes a student’s race as one of the factors to predict and evaluate how successful that student may be. Several universities have described race as a “high impact predictor”. The investigation found large disparities in how the software treated students of different races, with Black students deemed to be at four times higher risk than their White peers.

Continue reading “Racist Technology in Action: U.S. universities using race in their risk algorithms as a predictor for student success”

Predictive policing reinforces and accelerates racial bias

In a recent investigative piece, The Markup and Gizmodo analysed 5.9 million crime predictions by PredPol, crime prediction software used by law enforcement agencies in the U.S. The results confirm the racist logic of predictive policing and its impact on individuals and neighbourhoods. Compared to Whiter, middle- and upper-income neighbourhoods, Black, Latino and poor neighbourhoods were relentlessly targeted by the software, which recommended increased police presence. The fewer White residents who lived in an area – and the more Black and Latino residents who lived there – the more likely PredPol was to predict a crime there. Some neighbourhoods in the dataset were the subject of more than 11,000 predictions.

Continue reading “Predictive policing reinforces and accelerates racial bias”

Racist Technology in Action: Uber’s racially discriminatory facial recognition system firing workers

This case combines racist facial recognition systems with exploitative working conditions and algorithmic management, a perfect example of how technology can exacerbate both economic precarity and racial discrimination.

Continue reading “Racist Technology in Action: Uber’s racially discriminatory facial recognition system firing workers”

‘Race-blind’ content moderation disadvantages Black users

Over the past months, a slew of leaks from the Facebook whistleblower Frances Haugen has exposed how the company was aware of the disparate and harmful impact of its content moderation practices. Most damning is that, in the majority of instances, Facebook failed to address these harms. This Washington Post piece discusses one of the latest of these revelations in detail: even though Facebook knew it would come at the expense of Black users, its algorithm to detect and remove hate speech was programmed to be ‘race-blind’.

Continue reading “‘Race-blind’ content moderation disadvantages Black users”

Dutch Scientific Council knows: AI is neither neutral nor always rational

AI should be seen as a new system technology, according to the Netherlands Scientific Council for Government Policy, meaning that its impact is large, affects the whole of society, and is hard to predict. In its new Mission AI report, the Council lists five challenges for successfully embedding system technologies in society, leading to ten recommendations for governments.

Continue reading “Dutch Scientific Council knows: AI is neither neutral nor always rational”

Amnesty’s grim warning against another ‘Toeslagenaffaire’

In its report of 25 October, Amnesty slams the Dutch government’s use of discriminatory algorithms in the child benefits scandal (toeslagenaffaire) and warns that the likelihood of such a scandal occurring again is very high. The report, aptly titled ‘Xenophobic machines – Discrimination through unregulated use of algorithms in the Dutch childcare benefits scandal’, conducts a human rights analysis of a specific sub-element of the scandal: the use of algorithms and risk models. It builds on the findings of the Dutch data protection authority and several other government reports.

Continue reading “Amnesty’s grim warning against another ‘Toeslagenaffaire’”

Digital Rights for All: harmed communities should be front and centre

Earlier this month, the Digital Freedom Fund kicked off a series of online workshops as part of its ‘Digital Rights for All’ programme. In this post, Laurence Meyer details the reasons for the initiative, whose fundamental aim is to address why the individuals and communities most affected by the harms of technologies are not centred in advocacy, policy, and strategic litigation work on digital rights in Europe, and how to tackle challenges around funding, sustainable collaborations and language barriers.

Continue reading “Digital Rights for All: harmed communities should be front and centre”

Racist Technology in Action: Facebook labels black men as ‘primates’

Amid the reckoning of the Black Lives Matter movement in the summer of 2020, the Daily Mail, a British tabloid, posted a video featuring Black men in altercations with police officers and white civilians. In the New York Times, Ryan Mac reports how Facebook users who watched that video saw an automated prompt asking whether they would like to “keep seeing videos about Primates”, despite the video having no relation to primates or monkeys.

Continue reading “Racist Technology in Action: Facebook labels black men as ‘primates’”

Big Tech is propped up by a globally exploited workforce

Behind the promise of automation and the advances in machine learning and AI often paraded by tech companies like Amazon, Google, Facebook and Tesla lies a deeply exploitative industry of cheap human labour. In an excerpt published on Rest of World from his forthcoming book, Work Without the Worker: Labour in the Age of Platform Capitalism, Phil Jones illustrates how the hidden labour of automation is outsourced to marginalised, racialised and disenfranchised populations within the Global North, as well as in the Global South.

Continue reading “Big Tech is propped up by a globally exploited workforce”

Photo filters are keeping colorism alive

Many people use filters on social media to ‘beautify’ their pictures. In this article, Tate Ryan-Mosley discusses how these beauty filters can perpetuate colorism. Colorism has a long and complicated history, but can be summarised as a preference for lighter skin over darker skin. Ryan-Mosley explains that “though related to racism, it’s distinct in that it can affect people regardless of their race, and can have different effects on people of the same background.” The harmful effects of colorism, ranging from discrimination to mental health issues and the use of toxic skin-lightening products, are found across races and cultures.

Continue reading “Photo filters are keeping colorism alive”

Racist Technology in Action: White preference in mortgage-approval algorithms

A very clear example of racist technology was exposed by Emmanuel Martinez and Lauren Kirchner in an article for The Markup. Algorithms used by a variety of American banks and lenders to automatically assess or advise on mortgages display clear racial disparities. In national data from the United States from 2019, they found that “loan applicants of color were 40%–80% more likely to be denied than their White counterparts. In certain metro areas, the disparity was greater than 250%.”

Continue reading “Racist Technology in Action: White preference in mortgage-approval algorithms”

Government: Stop using discriminatory algorithms

In her Volkskrant opinion piece, Nani Jansen Reventlow makes a forceful argument for the government to stop using algorithms that lead to discrimination and exclusion. Reventlow, director of the Digital Freedom Fund, employs a myriad of examples to show how disregarding the social nature of technological systems can reproduce existing social injustices such as racism and discrimination. She discusses SyRI, the automated fraud detection system that was ruled to be in violation of fundamental rights (and its dangerous successor, ‘Super SyRI’), as well as the racist proctoring software we wrote about earlier.

Continue reading “Government: Stop using discriminatory algorithms”
