Jalon Hall was featured on Google’s corporate social media accounts “for making #LifeAtGoogle more inclusive!” She says the company discriminated against her on the basis of her disability and race.
By Paresh Dave for WIRED on March 7, 2024
Researchers found that certain prejudices also worsened as models grew larger.
By James O’Donnell for MIT Technology Review on March 11, 2024
You have probably heard that Google had to suspend its Gemini image feature after it depicted people as Black Nazis and female popes. Well, I have a simple explanation for what happened here. Namely, the folks at Google wanted to avoid an embarrassment they'd been involved with multiple times, and seen others get involved with: the "pale male" dataset problem. It happens especially at tech companies dominated by white men, and ironically, especially at tech companies dominated by white men who are careful about privacy, because then they only collect pictures of people who give consent, which is typically the people who work there! See for example this webpage, or Safiya Noble's entire book.
By Cathy O’Neil for mathbabe on March 12, 2024
An explanation of how the issues with Gemini’s image generation of people happened, and what we’re doing to fix it.
By Prabhakar Raghavan for The Keyword on February 23, 2024
It’s hard to keep a stereotyping machine out of trouble.
By Russell Brandom for Rest of World on February 29, 2024
In a shallow attempt to do representation for representation’s sake, Google has managed to draw the ire of the right-wing internet by generating historically inaccurate and overly inclusive portraits of historical figures.
Continue reading “Google does performative identity politics, nonpologises, pauses their efforts, and will invariably move on to its next shitty moneymaking move”

Google wants to “help computers ‘see’ our world”, and one of their ways of battling how current AI and machine learning systems perpetuate biases is to introduce a more inclusive scale of skin tone, the ‘Monk Skin Tone Scale’.
Continue reading “Representing skin tone, or Google’s hubris versus the simplicity of Crayola”

If this title feels like déjà vu, it is because you most likely have, in fact, seen this before (perhaps even in our newsletter). It was back in 2015 that the controversy first arose, when Google released image recognition software that kept mislabelling Black people as gorillas (read here and here).
Continue reading “Racist Technology in Action: Image recognition is still not capable of differentiating gorillas from Black people”

Eight years after a controversy over Black people being mislabeled as gorillas by image analysis software — and despite big advances in computer vision — tech giants still fear repeating the mistake.
By Kashmir Hill and Nico Grant for The New York Times on May 22, 2023
The Ethiopian-born computer scientist lost her job after pointing out the inequalities built into AI. But after decades working with technology companies, she knows all too much about discrimination.
By John Harris for The Guardian on May 22, 2023
Introducing the Monk Skin Tone (MST) Scale, one of the ways we are moving AI forward with more inclusive computer vision tools.
From Skin Tone at Google
Skin tone is an observable characteristic that is subjective, perceived differently by individuals (e.g., depending on their location or culture) and thus is complicated to annotate. That said, the ability to reliably and accurately annotate skin tone is highly important in computer vision. This became apparent in 2018, when the Gender Shades study highlighted that computer vision systems struggled to detect people with darker skin tones, and performed particularly poorly for women with darker skin tones. The study highlights the importance for computer researchers and practitioners to evaluate their technologies across the full range of skin tones and at intersections of identities. Beyond evaluating model performance on skin tone, skin tone annotations enable researchers to measure diversity and representation in image retrieval systems, dataset collection, and image generation. For all of these applications, a collection of meaningful and inclusive skin tone annotations is key.
By Candice Schumann and Gbolahan O. Olanubi for Google AI Blog on May 15, 2023
The former Googler and current Signal president on why she thinks Geoffrey Hinton’s alarmism is a distraction from more pressing threats.
By Meredith Whittaker and Wilfred Chan for Fast Company on May 5, 2023
Gunpowder was a wonderfully clever invention with both good and bad applications. Will we one day look at artificial intelligence the same way?
By Claes de Vreese, Hind Dekker-Abdulaziz, Ilyaz Nasrullah, Martijn Bertisen, Nienke Schipper and Oumaima Hajri for Trouw on May 2, 2023
You are not a parrot. And a chatbot is not a human. And a linguist named Emily M. Bender is very worried what will happen when we forget this.
By Elizabeth Weil and Emily M. Bender for New York Magazine on March 1, 2023
Graduates from the Indian Institutes of Technology are highly sought after by employers. They can also bring problems from home.
By Saritha Rai for Bloomberg on March 11, 2021
Predictive language technologies – such as Google Search’s Autocomplete – constitute forms of algorithmic power that reflect and compound global power imbalances between Western technology companies and multilingual Internet users in the global South. Increasing attention is being paid to predictive language technologies and their impacts on individual users and public discourse. However, there is a lack of scholarship on how such technologies interact with African languages. Addressing this gap, the article presents data from experimentation with autocomplete predictions/suggestions for gendered or politicised keywords in Amharic, Kiswahili and Somali. It demonstrates that autocomplete functions for these languages and how users may be exposed to harmful content due to an apparent lack of filtering of problematic ‘predictions’. Drawing on debates on algorithmic power and digital colonialism, the article demonstrates that global power imbalances manifest here not through a lack of online African indigenous language content, but rather in regard to the moderation of content across diverse cultural and linguistic contexts. This raises dilemmas for actors invested in the multilingual Internet between risks of digital surveillance and effective platform oversight, which could prevent algorithmic harms to users engaging with platforms in a myriad of languages and diverse socio-cultural and political environments.
By Peter Chonka, Stephanie Diepeveen and Yidnekachew Haile for SAGE Journals on June 22, 2022
The fuss about a bot’s ‘consciousness’ obscures far more troubling concerns.
By Kenan Malik for The Guardian on June 19, 2022
We are happy to see that more and more attention is being paid to how technology intersects with problems around (racial) justice. Recently two new initiatives have launched that we would like to highlight.
Continue reading “Two new technology initiatives focused on (racial) justice”

We must curb the power of Silicon Valley and protect those who speak up about the harms of AI.
By Timnit Gebru for The Guardian on December 6, 2021
Timnit Gebru is launching Distributed Artificial Intelligence Research Institute (DAIR) to document AI’s harms on marginalized groups.
By Nitasha Tiku for Washington Post on December 2, 2021
Big tech relies on the victims of economic collapse.
By Phil Jones for Rest of World on September 22, 2021
Twitter outrage over image search results of black and white teens is misdirected. We must address the prejudice that feeds such negative portrayals.
By Antoine Allen for The Guardian on June 10, 2016
‘Daddy, can I have that skin colour?’ Surprised, I looked up from the colouring page I was filling in to see my daughter pointing at a marker with a peach-like colour. Or maybe it was more the colour of an apricot. In any case, the marker certainly didn’t have hér skin colour. My daughter may be two shades lighter than I am, but she is unmistakably brown.
By Ilyaz Nasrulla for Trouw on September 23, 2021
Back in 2013, Harvard professor Latanya Sweeney was one of the first people to demonstrate racism (she called it ‘discrimination’) in online algorithms. She did this with her research on the ad delivery practices of Google.
Continue reading “Racist Technology in Action: Racist search engine ads”

The Plug and Fast Company looked at what happened to the 3.8 billion dollars that US-based tech companies committed to diversity, equity, and inclusion as their response to the Black Lives Matter protests.
Continue reading “Tech companies poured 3.8 billion USD into racial justice, but to what avail?”

For years, Big Tech has set the global AI research agenda. Now, groups like Black in AI and Queer in AI are upending the field’s power dynamics to build AI that serves people.
By Karen Hao for MIT Technology Review on June 14, 2021
The following short video by Vox shows how white skin has always been the norm in photography. Black people didn’t start to look good on film until the 1970s, when furniture makers complained to Kodak that its film didn’t render the difference between dark and light grained wood, and chocolate companies were upset that you couldn’t see the difference between dark and light chocolate.
Continue reading “Long overdue: Google has improved its camera app to work better for Black people”

Pictures are deeply personal and play an important role in shaping how people see you and how you see yourself. But historical biases in the medium of photography have carried through to some of today’s camera technologies, leading to tools that haven’t seen people of color as they want and ought to be seen.
From YouTube on May 18, 2021
Attempt to tackle racial bias long overdue say practitioners, but it’s not just about the equipment.
By Aamna Mohdin for The Guardian on May 28, 2021
Automated systems from Apple and Google label characters with dark skins “Animals”.
By Nicolas Kayser-Bril for AlgorithmWatch on May 14, 2021
From Siri, to Alexa, to Google Now, voice-based virtual assistants have increasingly become ubiquitous in our daily lives. So, it is unsurprising that yet another AI technology – speech recognition systems – has been reported to be biased against black people.
Continue reading “Racist Technology in Action: Speech recognition systems by major tech companies are biased”

Color of Change petition calls Google’s block on advertisers searching for social justice content “unacceptable”.
By Leon Yin for The Markup on May 4, 2021
They said Google’s decision to block advertisers from seeing “Black Lives Matter” and other social justice YouTube videos was the last straw.
By Aaron Sankin for The Markup on April 20, 2021
The company is considering how its use of machine learning may reinforce existing biases.
By Anna Kramer for Protocol on April 14, 2021
In this piece for The Markup, Leon Yin and Aaron Sankin expose how Google bans advertisers from targeting terms such as “Black lives matter”, “antifascist” or “Muslim fashion”. At the same time, keywords such as “White lives matter” or “Christian fashion” are not banned. When they raised this striking discrepancy with Google, its response was to fix the discrepancies between religions and races by blocking all such terms, as well as by blocking even more social justice related keywords such as “I can’t breathe” or “LGBTQ”. Blocking these terms for ad placement can reduce the revenue of YouTubers fighting for these causes. Yin and Sankin place this policy in stark contrast to Google’s professed support for the Black Lives Matter movement.
Continue reading “Google blocks advertisers from targeting Black Lives Matter”

For a Markup feature, Leon Yin and Aaron Sankin compiled a list of “social and racial justice terms” with help from Color of Change, Media Justice, Mijente and Muslim Advocates, then checked if YouTube would let them target those terms for ads.
By Cory Doctorow for Pluralistic on April 10, 2021
“Black power” and “Black Lives Matter” can’t be used to find videos for ads, but “White power” and “White lives matter” were just fine.
By Aaron Sankin and Leon Yin for The Markup on April 9, 2021
In 1965, IBM launched the most ambitious attempt ever to diversify a tech company. The industry still needs to learn the lessons of that failure.
By Charlton McIlwain for Logic on December 20, 2021
The article’s title speaks for itself: “Your iPhone’s Adult Content Filter Blocks Anything ‘Asian’”. Victoria Song has tested the claims made by The Independent: if you enable the “Limit Adult Websites” function in your iPhone’s Screen Time settings, you are blocked from seeing any Google search results for “Asian”. Related searches such as “Asian recipes” or “Southeast Asian” are also blocked by the adult content filter. There is no clarity or transparency about how search terms come to be considered adult content, or whether the process is automated or done manually. Regardless of intention, the outcome and the lack of action by Google or Apple is unsurprising but disconcerting. It is far from a mistake, but rather a feature of their commercial practices and their disregard for the social harms of their business model.
Continue reading “Filtering out the “Asians””