Racist and classist predictive policing exists in Europe too

The enduring idea that technology can solve many of society’s existing problems continues to permeate governments. For the EUObserver, Fieke Jansen and Sarah Chander illustrate some of the problematic and harmful uses of ‘predictive’ algorithmic systems by states and public authorities across the UK and Europe.

Continue reading “Racist and classist predictive policing exists in Europe too”

The right to repair our devices is also a social justice issue

Over the past couple of years, devices like our phones have become much harder to repair, and unauthorised repair often leads to a loss of warranty. This is partially driven by our manufactured need for ever slimmer and slicker devices, but is mostly an explicit strategy to make us throw away our old devices and buy new ones.

Continue reading “The right to repair our devices is also a social justice issue”

At the mercy of the TikTok algorithm?

In this article for The Markup, Dara Kerr offers an interesting insight into the plight of TikTokers who try to earn a living on the platform. TikTok’s algorithm – how it decides which content gets a lot of exposure – is notoriously vague. With ever-changing policies and metrics, Kerr recounts how difficult it is to build up and retain a following on the platform. This vagueness not only creates difficulty for creators trying to monetise their content, but also leaves more room for TikTok to suppress or spread content at will.

Continue reading “At the mercy of the TikTok algorithm?”

Rotterdam’s use of algorithms could lead to ethnic profiling

The Rekenkamer Rotterdam (a court of audit) looked at how the city of Rotterdam is using predictive algorithms and whether that use could lead to ethical problems. In their report, they describe how the city lacks a proper overview of the algorithms it is using, and how there is no coordination, so no one takes responsibility when things go wrong. They also found that while sensitive data (like nationality) were not used by one particular fraud detection algorithm, so-called proxy variables for ethnicity – like low literacy, which might correlate with ethnicity – were still part of the calculations. According to the Rekenkamer this could lead to unfair treatment, or as we would call it: ethnic profiling.

Continue reading “Rotterdam’s use of algorithms could lead to ethnic profiling”

Google blocks advertisers from targeting Black Lives Matter

In this piece for The Markup, Leon Yin and Aaron Sankin expose how Google bans advertisers from targeting terms such as “Black lives matter”, “antifascist” or “Muslim fashion”. At the same time, keywords such as “White lives matter” or “Christian fashion” are not banned. When they raised this striking discrepancy with Google, its response was to fix the discrepancies between religions and races by blocking all such terms, as well as by blocking even more social justice related keywords such as “I can’t breathe” or “LGBTQ”. Blocking these terms for ad placement can reduce the revenue of YouTubers fighting for these causes. Yin and Sankin place this policy in stark contrast to Google’s support for the Black Lives Matter movement.

Continue reading “Google blocks advertisers from targeting Black Lives Matter”

Online hate and harassment continue to proliferate

A recent report by ADL, an anti-hate organisation in the US, shows that social media platforms have consistently failed to prevent online hate and harassment. Despite the self-regulatory efforts made by social media companies, results from ADL’s annual survey show that the level of online hate and harassment has barely shifted in the past three years. These online experiences disproportionately harm marginalised groups, with LGBTQI+, Asian-American, Jewish and African-American respondents reporting higher rates of various forms of harassment. Many of these problems are intrinsic to the ways in which the business models of social media platforms are optimised for maximum engagement, further exacerbating existing issues in society.

Continue reading “Online hate and harassment continue to proliferate”

Racist Technology in Action: Amazon’s racist facial ‘Rekognition’

An already infamous example of racist technology is Amazon’s facial recognition system ‘Rekognition’, which had an enormous racial and gender bias. Researcher and founder of the Algorithmic Justice League Joy Buolamwini (the ‘poet of code‘), together with Deborah Raji, meticulously reconstructed how accurate Rekognition was in identifying different types of faces. Buolamwini and Raji’s study has been extremely consequential in laying bare the racism and sexism in these facial recognition systems and was featured in the popular Coded Bias documentary.

Continue reading “Racist Technology in Action: Amazon’s racist facial ‘Rekognition’”

Online proctoring excludes and discriminates

The use of software to automatically detect cheating on online exams – online proctoring – has been the go-to solution for many schools and universities in response to the COVID-19 pandemic. In this article, Shea Swauger addresses some of the discriminatory, privacy and security harms that can impact groups of students across class, gender, race, and disability lines. Swauger provides a critique of how these technologies encode the “normal” body – cisgender, white, able-bodied, neurotypical, male – as the standard, and how students who do not (or cannot) conform are punished by them.

Continue reading “Online proctoring excludes and discriminates”

Filtering out the “Asians”

The article’s title speaks for itself: “Your iPhone’s Adult Content Filter Blocks Anything ‘Asian’”. Victoria Song tested the claims made by The Independent: if you enable the “Limit Adult Websites” function in your iPhone’s Screen Time settings, you are blocked from seeing any Google search results for “Asian”. Related searches such as “Asian recipes” or “Southeast Asian” are also blocked by the adult content filter. There is no clarity or transparency about how search terms come to be classified as adult content, or whether the process is automated or done manually. Regardless of intention, the outcome and the lack of action by Google or Apple is unsurprising but disconcerting. It is not a mistake but a feature of their commercial practices and their disregard for the social harms of their business model.

Continue reading “Filtering out the “Asians””

The Dutch government’s love affair with ethnic profiling

In his article for One World, Florentijn van Rootselaar shows how the Dutch government uses automated systems to profile certain groups based on their ethnicity. He uses several examples to expose how, even though Western countries are often quick to denounce China’s use of technology to surveil, profile and oppress the Uighurs, the same states themselves use or contribute to the development of similar technologies.

Continue reading “The Dutch government’s love affair with ethnic profiling”

The internet doesn’t have ‘universal’ users

Since 2017, Mozilla – the makers of the Firefox browser – have written a yearly report on the health of the internet. This year’s report focuses on labor rights, transparency and racial justice. The piece about racial justice makes an interesting argument about how the sites we see on the first page of a search engine are a reflection of the general popularity of these sites or their ability to pay for a top result. This leads to a ‘mainstream’ bias.

Continue reading “The internet doesn’t have ‘universal’ users”

Racist technology in action: Gun, or electronic device?

The answer to that question depends on your skin colour, apparently. Nicolas Kayser-Bril, a reporter for AlgorithmWatch, conducted an experiment that went viral on Twitter: Google Vision Cloud – a service based on a subset of AI known as “computer vision” that focuses on automated image labelling – labelled an image of a dark-skinned individual holding a thermometer with the word “gun”, while a lighter-skinned individual holding one was labelled with “electronic device”.

Continue reading “Racist technology in action: Gun, or electronic device?”

Corporatespeak and racial injustice

In light of the Black Lives Matter protests in the US and protests against police brutality in Europe, technology companies have been quick to release corporate statements, commitments, campaigns and initiatives to tackle discrimination and racial injustice. Amber Hamilton evaluated 63 public-facing documents from major technology companies such as Facebook, Instagram, Twitter, YouTube, Airbnb and TikTok.

Continue reading “Corporatespeak and racial injustice”

Google fires AI researcher Timnit Gebru

Google has fired AI researcher and ethicist Timnit Gebru after she wrote an email criticising Google’s diversity policies while she was struggling with her leadership to get a critical paper on AI published. This angered thousands of her former colleagues and academics, who pointed to the unequal treatment that Gebru received as a Black woman and worried about the integrity of Google’s research.

Continue reading “Google fires AI researcher Timnit Gebru”
