According to informal government estimates, more than two million people in the Philippines perform crowdwork, such as data annotation.
In internet cafes, small rooms in malls, or their homes, many of these workers annotate the masses of data that tech companies need to train their AI models. They label images, edit chunks of text, and differentiate pedestrians from palm trees in videos, so that all the “automated” systems can function and churn out gibberish. The practice of enlisting people in economically struggling or politically unstable countries as freelance workers under exploitative labour conditions in the embrace of AI is increasingly being documented.
For this Washington Post article, reporters interviewed workers and accessed internal company messages, payment records, and financial statements of Remotasks, owned by the $7 billion San Francisco start-up Scale AI. Scale AI is yet another cliché story of a start-up founded by young college dropouts in the U.S., backed by venture capital, that casts itself as part of U.S. efforts in the geopolitical race in AI. It works with large technology companies such as Meta, Microsoft, and OpenAI, as well as the U.S. Department of Defense. The vast majority of its workforce is in Asia, Latin America, and Africa.
The workers’ experiences in the Philippines echo the familiar narrative of how tech companies expand and exploit labour in the platform economy. First, workers are promised decent wages and start out earning a significant amount. These wages then plunged as Remotasks expanded to other countries such as India and Venezuela, creating a race to the bottom. Second, labour processes are obfuscated and there are often no redress mechanisms. Once a project is completed, it goes through several reviews before it is evaluated by the Remotasks teams in the U.S., but workers are unclear why their work is approved or rejected, and whether they will be paid in full, in part, or sometimes nothing at all. Workers are sometimes deactivated and locked out of their accounts for voicing their concerns, with no explanation and no complaint mechanism. Lastly, through its ever-shifting terms and conditions, Scale AI decides the rules, constantly avoiding and offloading its legal responsibilities. The lack of labour protections in informal economies allows many companies and governments to actively exploit workers in the global system of racial capitalism.
People in parts of the country struggle to find work due to political unrest that has left economic opportunities lacking; these are conditions ripe for companies to exploit. This story is merely a continuation of a racist, colonial, imperial ideology of the West. The U.S. has a history of violence and of colonising the Philippines, imposing economic, political, and social systems for the benefit of U.S. economic interests. Existing ethical and regulatory AI debates in the West largely remain in the spheres of bias and discrimination, not labour exploitation in global value chains. Labour exploitation in AI development is a feature, not a bug. Genuinely reckoning with exploitative capitalist social relations will require dismantling racial capitalism, but many in power are determined to maintain it.
See: Behind the AI boom, an army of overseas workers in ‘digital sweatshops’ at the Washington Post.
Photo by Martin San Diego from the original Washington Post Article.