New York City uses a secret Child Welfare Algorithm

New York City’s Administration for Children’s Services (ACS) has been secretly using an AI risk assessment system since 2018 to flag families for additional investigation. An investigation by The Markup reveals that the algorithm disproportionately flags families of colour, raising serious questions about algorithmic bias against racialised and poor families in child welfare.

The system scores families on 279 variables, including factors beyond parents’ control, such as neighbourhood, the mother’s age, the number of siblings, mental health history, and past involvement with ACS. High-scoring families face additional home visits and investigations.
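
The Markup does not publish the model itself, so the following is a purely hypothetical sketch of how a tool of this kind works: a weighted score over case features, with a cutoff that triggers extra scrutiny. Every feature name, weight, and threshold below is invented for illustration and does not describe the actual ACS model.

    # Hypothetical illustration only: the real ACS model, its 279 variables,
    # and its weights are not public. All names and numbers are invented.

    RISK_WEIGHTS = {
        "neighbourhood_poverty_rate": 2.0,   # factor outside parents' control
        "mother_age_under_21": 1.5,
        "number_of_siblings": 0.4,
        "prior_acs_involvement": 3.0,
        "parent_mental_health_history": 1.2,
    }

    FLAG_THRESHOLD = 4.0  # arbitrary cutoff for "high risk" in this sketch


    def risk_score(case: dict) -> float:
        """Weighted sum of case features (a toy stand-in for a 279-variable model)."""
        return sum(RISK_WEIGHTS[name] * float(value)
                   for name, value in case.items() if name in RISK_WEIGHTS)


    def flag_for_extra_scrutiny(case: dict) -> bool:
        """High-scoring families face additional home visits and investigations."""
        return risk_score(case) >= FLAG_THRESHOLD


    example_case = {
        "neighbourhood_poverty_rate": 0.35,  # 35% poverty rate in the neighbourhood
        "mother_age_under_21": 1,
        "number_of_siblings": 3,
        "prior_acs_involvement": 0,
        "parent_mental_health_history": 0,
    }

    print(risk_score(example_case))               # 3.4 in this toy example
    print(flag_for_extra_scrutiny(example_case))  # False

The point of the sketch is simply that factors a parent cannot change, such as where they live, feed directly into whether they cross the threshold.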

In NYC, Black families are reported to child services at seven times the rate of white families and are 13 times more likely to have their children removed. While the algorithm doesn’t explicitly use race, it relies on neighbourhood and other factors that serve as proxies for race and poverty.
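
To see why proxies matter, here is a small, self-contained simulation; all distributions and numbers are invented. Even when race is excluded from the inputs, a feature like neighbourhood poverty that is unevenly distributed across groups produces unequal flag rates.

    # Hypothetical simulation: distributions and weights are invented to show
    # how a "race-blind" score can still flag groups at very different rates.
    import random

    random.seed(0)

    def neighbourhood_poverty(group: str) -> float:
        # Invented assumption: group A families tend to live in poorer
        # neighbourhoods than group B, reflecting residential segregation.
        return random.betavariate(4, 6) if group == "A" else random.betavariate(2, 8)

    def flagged(poverty_rate: float) -> bool:
        # Race is never an input; only the neighbourhood proxy is used.
        return 2.0 * poverty_rate >= 0.7

    def flag_rate(group: str, n: int = 10_000) -> float:
        return sum(flagged(neighbourhood_poverty(group)) for _ in range(n)) / n

    print(f"group A flag rate: {flag_rate('A'):.2%}")
    print(f"group B flag rate: {flag_rate('B'):.2%}")
    # The flag rates differ sharply even though the model never sees group membership.

This is the core of the proxy problem: removing race from the feature list does not remove race from the outcomes.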

This system looks like a textbook example of two intersecting forms of oppression: the criminalisation of poverty, in how child welfare systems punish poor families for lacking resources, and racism, in how they disproportionately target families of colour.

As Karlena Hamblin, a mother targeted by the algorithm, puts it:

“My neighborhood alone makes me more likely to be abusive or neglectful? That’s because we look at poverty as neglect and the neighborhoods they identify have very low resources.”

Additionally, The Markup’s reporting highlights the dangers of a lack of oversight and transparency. Families never know when the algorithm flags their case; parents, their lawyers, and even caseworkers aren’t told when AI influences a decision. ACS’s internal audit acknowledged that the system includes “implicit and systemic biases” but concluded it is still more accurate than previous methods.

This New York City risk assessment system echoes similar systems used in the Netherlands, such as the “Systeem Risico Indicatie” (SyRI) and the welfare fraud risk assessment system used in Rotterdam. Both were shown to disproportionately target and stigmatise specific populations, for instance on the basis of gender and poverty.

See “The NYC Algorithm Deciding Which Families Are Under Watch for Child Abuse” at The Markup.

Image by Adriana Heldiz, from the original article at The Markup.
