The Guardian reports that OpenAI’s new AI video generator, Sora 2, launched with a social feed feature that lets users easily share their generated videos on social media platforms. Predictably, within hours, violent and racist videos generated through Sora flooded these platforms. Despite OpenAI’s claims of safeguards and mitigation measures, the app produced videos depicting mass shootings, bomb scares, and fabricated war footage from Gaza and Myanmar featuring AI-generated children.
The feed also became saturated with copyrighted characters in compromising situations, including SpongeBob SquarePants promoting cryptocurrency scams and even dressed as Adolf Hitler. George Washington University professor David Karpf bluntly assessed the situation: “The guardrails are not real.” He continued:
In 2022, [the tech companies] would have made a big deal about how they were hiring content moderators … In 2025, this is the year that tech companies have decided they don’t give a shit.
Behind the damaging content created by these failing safeguards lies another type of harm: the labour exploitation of data workers in the Global Majority who make AI “safety” possible through content moderation and data labelling. Researchers like Milagros Miceli have documented how this works: often swept under the rug in tech companies’ promotion of “Ethical AI”, the system relies on global extraction and exploitation. These workers, many earning less than $2 per hour in countries like Kenya compared to over $20 in the U.S., face not only poverty wages but also severe psychological trauma from reviewing the disturbing content that Sora’s “safeguards” are supposed to prevent.
Despite being invite-only, the app shot to #1 on Apple’s App Store in just three days, surpassing OpenAI’s own ChatGPT. CEO Sam Altman acknowledged “some trepidation” about social media’s addictive nature and potential for bullying, saying the team had implemented safeguards in response. However, problematic videos have already spread beyond the platform.
Misinformation researchers warn that these lifelike videos “could obfuscate the truth” and enable fraud, manipulation, bullying, and intimidation. Emily Bender, co-author of the book The AI Con, called synthetic media machines “a scourge on our information ecosystem,” comparing their impact to “an oil spill … weakening and breaking relationships of trust.”
See: “OpenAI launch of video app Sora plagued by violent and racist images: ‘The guardrails are not real’” at The Guardian.
