Meta, fresh off announcement to end factchecking, follows McDonald’s and Walmart in rolling back diversity initiatives.
By Adria R Walker for The Guardian on January 10, 2025
This paper examines ‘open’ artificial intelligence (AI). Claims about ‘open’ AI often lack precision, frequently eliding scrutiny of substantial industry concentration in large-scale AI development and deployment, and often incorrectly applying understandings of ‘open’ imported from free and open-source software to AI systems. At present, powerful actors are seeking to shape policy using claims that ‘open’ AI is either beneficial to innovation and democracy, on the one hand, or detrimental to safety, on the other. When policy is being shaped, definitions matter. To add clarity to this debate, we examine the basis for claims of openness in AI, and offer a material analysis of what AI is and what ‘openness’ in AI can and cannot provide: examining models, data, labour, frameworks, and computational power. We highlight three main affordances of ‘open’ AI, namely transparency, reusability, and extensibility, and we observe that maximally ‘open’ AI allows some forms of oversight and experimentation on top of existing models. However, we find that openness alone does not perturb the concentration of power in AI. Just as many traditional open-source software projects were co-opted in various ways by large technology companies, we show how rhetoric around ‘open’ AI is frequently wielded in ways that exacerbate rather than reduce concentration of power in the AI sector.
By David Gray Widder, Meredith Whittaker, and Sarah Myers West for Nature on November 27, 2024
As we wrote earlier, tech companies are deeply complicit in the current genocide in Gaza as well as the broader oppression in the occupied Palestinian territories.
Continue reading “Tech workers face retaliation for Palestine solidarity”

The Arab Center for the Advancement of Social Media has released a new report titled “Delete the Issue: Tech Worker Testimonies on Palestinian Advocacy and Workplace Suppression.” The report, the first of its kind, shares testimonies gathered from current and former employees of major technology companies, including Meta, Google, PayPal, Microsoft, LinkedIn, and Cisco. It highlights their experiences supporting Palestinian rights in the workplace and the companies’ efforts to restrict freedom of expression on the matter.
From 7amleh on November 11, 2024
As I write this piece, an Israeli airstrike has hit makeshift tents near Al-Aqsa Martyrs Hospital in Deir al Balah, burning tents and people alive. The Israeli military bombed an aid distribution point in Jabalia, wounding 50 people who were waiting for flour. The entire north of Gaza has been besieged by the Israeli Occupying Forces for the past 10 days, trapping 400,000 Palestinians without food, water, or medical supplies. Every day since last October, Israel, with the help of its western allies, has intensified its assault on Palestine, each time pushing the boundaries of what is comprehensible. There are no moral or legal boundaries Israel, and its allies, will not cross. The systematic ethnic cleansing of Palestine, which has been the basis of the settler-colonial Zionist project since its inception, has accelerated since 7 October 2023. From Palestine to Lebanon, Syria and Yemen, Israel and its allies continue their violence with impunity. Meanwhile, mainstream western news media are either silent in their reporting or complicit in abetting the ongoing destruction of the Palestinian people and the resistance.
Continue reading “Tech companies’ complicity in the ongoing genocide in Gaza and Palestine”

Government involvement in content moderation raises serious human rights concerns in every context. Since October 7, social media platforms have been challenged for the unjustified takedowns of pro-Palestinian content—sometimes at the request of the Israeli government—and a simultaneous failure to remove hate speech towards Palestinians. More specifically, social media platforms have worked with the Israeli Cyber Unit—a government office set up to issue takedown requests to platforms—to remove content considered as incitement to violence and terrorism, as well as any promotion of groups widely designated as terrorists.
By Jillian C. York and Paige Collings for Electronic Frontier Foundation (EFF) on July 26, 2024
Half of all languages are currently threatened with extinction. The Sateré-Mawé in Brazil want to prevent this by digitising their language. But can this be done without Big Tech? And who does a language actually belong to?
By Sanne Bloemink for De Groene Amsterdammer on August 21, 2024
Large infrastructure gaps are creating a new digital divide.
From The Economist on July 25, 2024
A little-discussed detail in the Lavender AI article is that Israel is killing people based on being in the same WhatsApp group as a suspected militant. Where are they getting this data? Is WhatsApp sharing it?
By Paul Biggar for Paul Biggar on April 16, 2024
Automated decision-making systems contain hidden discriminatory prejudices. We’ll explain the causes, possible consequences, and the reasons why existing laws do not provide sufficient protection against algorithmic discrimination.
By Pie Sombetzki for AlgorithmWatch on June 26, 2024
Ferras Hamad claims in lawsuit that Meta fired him for trying to fix bugs causing the suppression of Palestinians’ Instagram posts.
From The Guardian on June 5, 2024
Board describes the two videos as important for ‘informing the world about human suffering on both sides’.
By Blake Montgomery for The Guardian on December 19, 2023
The censorship and silencing of Palestinian voices, and the voices of those who support Palestine, is not new. However, since the escalation of Israel’s violence on the Gaza Strip since 7 October 2023, the scale of censorship has significantly heightened, particularly on social media platforms such as Instagram and Facebook. In December 2023, Human Rights Watch (HRW) released a 51-page report stating that Meta has engaged in systematic and global censorship of content related to Palestine since October 7th.
Continue reading “Racist Technology in Action: Meta systemically censors and silences Palestinian content globally”

By now we know that generative image AI reproduces and amplifies sexism, racism, and other social systems of oppression. The latest example is of AI-generated stickers in WhatsApp that systematically depict Palestinian men and boys with rifles and guns.
Continue reading “Racist Technology in Action: Generative/ing AI Bias”

Meta has deployed a new AI system on Facebook and Instagram to fix its algorithmic bias problem for housing ads in the US. But it’s probably more band-aid than AI fairness solution. Gaps in Meta’s compliance report make it difficult to verify if the system is working as intended, which may preview what’s to come from Big Tech compliance reporting in the EU.
By John Albert for AlgorithmWatch on November 17, 2023
By contrast, prompts for ‘Israeli’ do not generate images of people wielding guns, even in response to a prompt for ‘Israel army’.
By Johana Bhuiyan for The Guardian on November 3, 2023
For years, Meta has censored the communications of Palestinians and communication about the Palestinian cause. Yet this is not (only) a “Big Tech problem”. Meta’s policy came about under pressure from governments, among others. Those same governments are now choosing not to question Meta about its role in the possible genocide of the Palestinians.
By Evely Austin and Nadia Benaissa for Bits of Freedom on November 3, 2023
Parent company Meta says bug caused ‘inappropriate’ auto-translations and was now fixed while employee says it pushed ‘a lot of people over the edge’.
By Josh Taylor for The Guardian on October 20, 2023
A report commissioned by Meta — Facebook and Instagram’s parent company — found bias against Palestinians during an Israeli assault last May.
By Sam Biddle for The Intercept on September 21, 2022
Two new papers from Sony and Meta describe novel methods to make bias detection fairer.
By Melissa Heikkilä for MIT Technology Review on September 25, 2023
What you put into self-learning AI systems is what you get back. Technology, largely developed by white men, thereby amplifies and conceals existing prejudices. Women (of colour) in particular are sounding the alarm.
By Marieke Rotman, Nani Jansen Reventlow, Oumaima Hajri and Tanya O’Carroll for De Groene Amsterdammer on July 12, 2023
OpenAI’s contractor workforce helps power ChatGPT through simple interactions. They don’t get benefits, but some say the work is rewarding.
By David Ingram for NBC News on May 6, 2023
A report validated Palestinian experiences of social media censorship in May 2021, but missed how those policies are biased by design.
By Marwa Fatafta for +972 Magazine on October 9, 2022
This is the third time a case has been filed against Meta and sheds light on the harsh reality of content moderation.
By Odanga Madung for Nation on March 20, 2023
Revised online safety bill proposes fines of 10% of revenue but drops harmful communications offence.
By Dan Milmo for The Guardian on November 28, 2022
Galactica language model generated convincing text about fact and nonsense alike.
By Benj Edwards for Ars Technica on November 18, 2022
I asked it to write about linguistic prejudice.
By Rikker Dockum for Twitter on November 16, 2022
Galactica was supposed to help scientists. Instead, it mindlessly spat out biased and incorrect nonsense.
By Will Douglas Heaven for MIT Technology Review on November 18, 2022
In his New York Times article, Mike Isaac describes how Meta is implementing a new system to automatically check whether the housing, employment and credit ads it hosts are shown to people equally. This move follows a US$111,054 fine issued to Meta by the US Justice Department after its ad systems were shown to discriminate against users by, amongst other things, excluding Black people from seeing certain housing ads in predominantly white neighbourhoods. This is the outcome of a long process, which we have written about previously.
Continue reading “Meta forced to change its advertisement algorithm to address algorithmic discrimination”

In 2019, a Facebook content moderator in Nairobi, Daniel Motaung, who was paid USD 2.20 per hour, was fired. He was working for one of Meta’s largest outsourcing partners in Africa, Sama, which brands itself as an “ethical AI” outsourcing company, and is headquartered in California. Motaung led a unionisation attempt with more than 100 colleagues, fighting for better wages and working conditions.
Continue reading “Exploited and silenced: Meta’s Black whistleblower in Nairobi”

A Facebook lawyer called on a judge to “crack the whip” against a whistleblower who accuses the company of forced labor and human trafficking.
By Billy Perrigo for Time on July 1, 2022
The Justice Department had accused Meta’s housing advertising system of discriminating against Facebook users based on their race, gender, religion and other characteristics.
By Mike Isaac for The New York Times on June 21, 2022
Around 2016, Facebook was still proud of its ability to target “Black affinity” and “White affinity” audiences for its customers’ ads. I then wrote an op-ed decrying this form of racial profiling, which was enabled by Facebook’s data lust.
Continue reading “Facebook has finally stopped enabling racial profiling for targeted advertising”

Online ad-targeting practices often reflect and replicate existing disparities, effectively locking out marginalized groups from housing, job, and credit opportunities.
By Linda Morris and Olga Akselrod for American Civil Liberties Union (ACLU) on January 27, 2022
Hiring sociocultural workers to correct bias overlooks the limitations of these underappreciated fields.
By Elena Maris for WIRED on January 12, 2022
We must curb the power of Silicon Valley and protect those who speak up about the harms of AI.
By Timnit Gebru for The Guardian on December 6, 2021
Over the past months a slew of leaks from the Facebook whistleblower, Frances Haugen, has exposed how the company was aware of the disparate and harmful impact of its content moderation practices. Most damning is that in the majority of instances, Facebook failed to address these harms. In this Washington Post piece, one of the latest of such revelations is discussed in detail: Even though Facebook knew it would come at the expense of Black users, its algorithm to detect and remove hate speech was programmed to be ‘race-blind’.
Continue reading “‘Race-blind’ content moderation disadvantages Black users”

Despite Biden’s announced commitment to advancing racial justice, not a single appointee to the task force has experience focused on civil rights and liberties in the development and use of AI. That has to change. Artificial intelligence, invisible but pervasive, affects vast swaths of American society and will affect many more. Biden must ensure that racial equity is prioritized in AI development.
By ReNika Moore for Washington Post on August 9, 2021
Researchers proposed a fix to the biased algorithm, but one internal document predicted pushback from ‘conservative partners’.
By Craig Timberg, Elizabeth Dwoskin and Nitasha Tiku for Washington Post on November 21, 2021
Facebook Inc said on Tuesday it plans to remove detailed ad-targeting options that refer to “sensitive” topics, such as ads based on interactions with content around race, health, religious practices, political beliefs or sexual orientation.
By Elizabeth Culliford for Reuters on November 9, 2021