AI Algorithm Raises Concerns for Child Protective Services

AI algorithm may create racial disparities for child protective services agency

An artificial intelligence tool used by a child protective services agency in Pittsburgh is under scrutiny for allegedly discriminating against families on the basis of race. The Associated Press first reported the concerns after an investigation revealed transparency problems and potential bias in the AI algorithm used in the child welfare system.

The Pittsburgh-based child welfare agency uses the Allegheny Family Screening Tool, which helps overloaded social workers determine which families should be investigated based on each family’s risk level. The AP first revealed its findings in April of last year, reporting that the AI system had the potential to widen racial disparities by flagging a disproportionate number of Black children, compared to white children, for a “mandatory” neglect investigation.

Research conducted by Carnegie Mellon University found that social workers disagreed with the risk scores produced by the AI’s algorithm about one-third of the time, but county officials told the outlet that the research is “hypothetical” and that social workers can override the tool, AP reported.

Erin Dalton, director of Allegheny County’s Department of Human Services, told AP, “Workers, whoever they are, shouldn’t be asked to make, in a given year, 14, 15, 16,000 of these kinds of decisions with incredibly imperfect information.”

The U.S. Justice Department and other critics have expressed concern that the data the AI tool relies on could reinforce discrimination against low-income families based on race, income, disability, and other characteristics.

The tool collates personal data, including whether a family has a history of substance abuse or mental health issues, has served jail time, has a record of probation, and other government data. Social workers can then use the information generated by Allegheny’s AI tool to help determine which families should be investigated for neglect.
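To make the general mechanism concrete, here is a minimal, hypothetical sketch of how a record-based screening score of this kind could be computed. The feature names, weights, and 1-to-20 scale below are illustrative assumptions, not the actual Allegheny model, whose internals are not described in this article.

```python
# Hypothetical sketch of a record-based screening score. This is NOT the
# Allegheny Family Screening Tool's actual model; the features, weights,
# and scale are invented for illustration only.

from dataclasses import dataclass
import math


@dataclass
class HouseholdRecord:
    # Illustrative features of the kind the article describes
    substance_abuse_history: bool
    mental_health_history: bool
    jail_time: bool
    probation_record: bool


# Invented weights; a real tool would learn these from historical data,
# which is exactly where critics argue bias can creep in.
WEIGHTS = {
    "substance_abuse_history": 1.2,
    "mental_health_history": 0.8,
    "jail_time": 1.5,
    "probation_record": 0.9,
}
BIAS = -2.0


def risk_score(record: HouseholdRecord) -> int:
    """Map a household record to a 1-20 screening score (higher = riskier)."""
    z = BIAS + sum(
        weight * float(getattr(record, name))
        for name, weight in WEIGHTS.items()
    )
    probability = 1.0 / (1.0 + math.exp(-z))  # logistic squashing to (0, 1)
    return max(1, min(20, round(probability * 20)))


if __name__ == "__main__":
    family = HouseholdRecord(
        substance_abuse_history=False,
        mental_health_history=True,
        jail_time=False,
        probation_record=True,
    )
    # The score informs, but does not replace, a worker's judgment;
    # as the county notes, workers can override the tool.
    print(risk_score(family))
```

The point of the sketch is that any such score is only as fair as the records it is trained on: if arrest, probation, or benefits data reflect existing disparities, the score can reproduce them.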

According to AP, the system does not determine whether a family loses welfare benefits, but it can lead to children being removed from the home, placed in foster care, and, in some cases, to parental rights being terminated.

In the wake of AP’s investigation, Oregon decided to end the use of an AI algorithm in its own child welfare system due to racial equity concerns. Oregon’s Department of Human Services announced the change in an email to staff in June of last year, citing concerns that the algorithm could reinforce racial disparities.

Lacey Andresen, the agency’s deputy director, said in an email obtained by NPR, “We are committed to continuous quality improvement and equity.” A department spokesperson told the outlet the algorithm would “no longer be necessary,” but declined to provide additional information about the policy change.

Sen. Ron Wyden (D-Oregon) said in a statement that the algorithm should not be relied on when it comes to child protective services. “Making decisions about what should happen to children and families is far too important a task to give untested algorithms,” Wyden said. “I’m glad the Oregon Department of Human Services is taking the concerns I raised about racial bias seriously and is pausing the use of its screening tool.”
