We have written about the racist cropping algorithm that Twitter uses, and have shared how Twitter tried to fix the symptoms. Twitter also instituted an ‘algorithmic bug bounty’, asking researchers to prove bias in its algorithms.
That is exactly what the researchers did, as you can see in this YouTube video from the DEF CON hacking conference.
The winner of the prize was Bogdan Kulynych, who demonstrated Twitter’s bias by generating artificial faces with minor differences and then comparing the ‘saliency’ scores that Twitter’s algorithm assigned to each image. One of his conclusions was that the algorithm considered light skin tones more interesting than dark ones.
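The probing idea is simple enough to sketch. The hypothetical Python below illustrates it under stated assumptions: take one face image, create near-identical variants that differ along a single attribute, and compare the scores the model assigns. The `toy_saliency` function is a made-up stand-in (it just measures mean brightness), and brightness adjustment is a crude proxy for skin tone; in Kulynych’s actual experiment the scores came from Twitter’s released saliency model and the variants from a face generator.

```python
from PIL import Image, ImageEnhance
import numpy as np


def toy_saliency(image: Image.Image) -> float:
    """Stand-in for Twitter's saliency model (an assumption for this
    sketch): here it simply returns mean brightness in [0, 1]."""
    return float(np.asarray(image, dtype=np.float32).mean()) / 255.0


def probe_attribute(face_path: str, factors=(0.7, 0.85, 1.0, 1.15, 1.3)):
    """Create near-identical variants of one face that differ only in
    brightness (a rough proxy for skin tone in this illustration) and
    record the saliency score each variant receives."""
    base = Image.open(face_path).convert("RGB")
    scores = []
    for factor in factors:
        variant = ImageEnhance.Brightness(base).enhance(factor)
        scores.append((factor, toy_saliency(variant)))
    return scores


if __name__ == "__main__":
    # "face.png" is a placeholder input. If lighter variants consistently
    # score higher, the model prefers lighter skin tones -- the pattern
    # Kulynych found with the real model.
    for factor, score in probe_attribute("face.png"):
        print(f"brightness x{factor:.2f} -> saliency {score:.3f}")
```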
Kulynych was appreciative of the award, but he was also critical:
Algorithmic harms are not only ‘bugs’. Crucially, a lot of harmful tech is harmful not because of accidents, unintended mistakes, but rather by design. This comes from maximisation of engagement and, in general, profit externalising the costs to others. As an example, amplifying gentrification, driving down wages, spreading clickbait and misinformation are not necessarily due to ‘biased’ algorithms.
AlgorithmWatch calls Twitter’s bug bounty program “an unprecedented experiment in openness”, but laments that Twitter invests far less money in algorithmic bug bounties than in security bug bounties. That gap exists because companies are heavily regulated on security, whereas the regulators battling discrimination are underfunded. A bug bounty like Twitter’s makes it clear that governments need to invest more in their algorithmic auditing capabilities.
See: PhD Student proves Twitter algorithm ‘bias’ toward lighter, slimmer, younger faces at the Guardian.