In previous newsletters, we have discussed how facial recognition algorithms affect racialised people. Another community heavily affected by the large-scale use of facial recognition tools, however, is people with facial differences.
When Autumn Gardiner tried to update her driver’s license, her photo kept being rejected while everyone at the DMV watched. Autumn has a facial difference caused by Freeman-Sheldon syndrome, also known as whistling face syndrome. She recalls: “It was humiliating and weird. Here’s this machine telling me that I don’t have a human face.” People with facial differences already face discrimination and stigma, and these technologies heavily exacerbate that exclusion. For the more than 100 million people worldwide living with some form of facial difference, from birthmarks to craniofacial conditions, participating in ‘modern’ society becomes increasingly difficult as algorithms and automated systems are built on training data that reflects white, male, and conventional facial norms. On top of being excluded by the algorithm, they also have no clear entity to ask for help or to hold accountable for this discrimination.
Kathleen Bogart, a psychology professor at Oregon State University who specialises in disability research and lives with a facial difference herself, argues that the solution to this type of discrimination is to include more people with facial differences and disabilities in the software’s design and development. Autumn Gardiner also calls for proper protocols with clear lines of responsibility wherever facial recognition is deployed: “What do humans do when the AI doesn’t work?” And she is right. At the end of the day, no matter who is disproportionately discriminated against, racialised, or targeted by automated tools, we have to keep pushing back and hold the entities behind them accountable for damaging people’s lives.
See: When Face Recognition Doesn’t Know Your Face Is a Face at Wired.
Image from the original Wired article.
