Decision-Making in Child Protective Services: Algorithms, Experts, and Child Outcomes
Paper Session
Saturday, Jan. 6, 2024 2:30 PM - 4:30 PM (CST)
- Chair: Katherine Rittenhouse, University of Texas-Austin
Measuring the Skills and Comparative Advantage of Examiners: An Application to Child Protective Services
Abstract
High-stakes examiners vary substantially in their decisions and outcomes, even among similar cases. Inferring examiner skills from these decisions can allow for a deeper understanding of variation in skills, comparative advantage over certain cases, and optimal examiner assignment. We study this question in the Child Protective Services (CPS) setting. CPS workers make the consequential decision of whether to temporarily place children in foster care if they believe a child will not be safe if left at home. Surprisingly, we know little about the quality and skills of these critical decision-makers. We measure an investigator’s skills by comparing the subsequent maltreatment rates of children assigned to that investigator with the rates of an algorithm with identical placement rates. This algorithm is trained to predict subsequent maltreatment risk among the set of children who are left at home, since this outcome is unobserved for children placed in foster care. We deal with the selective observability of subsequent maltreatment using the quasi-random assignment of investigators to cases. We first document substantial heterogeneity in investigator skill. A child assigned to an investigator who is one standard deviation above the mean of the skill distribution has an approximately 20% lower chance of being maltreated in the future if left at home. Investigator skill increases rapidly with experience in the first few years but flattens thereafter. We then provide novel evidence that an investigator’s skill varies across different types of cases, including high-risk investigations, and exploit these comparative advantages to document large match effects. A policy simulation shows that assigning investigators based on their comparative advantages, as opposed to the status quo of random assignment, can substantially reduce aggregate subsequent maltreatment rates in a revenue-neutral way.
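To make the benchmarking idea concrete, the sketch below illustrates in Python how an investigator's subsequent-maltreatment rate could be compared against an algorithm constrained to the same placement rate. This is purely illustrative: the column names, the use of mean predicted risk as the algorithmic benchmark, and the omission of the quasi-random-assignment correction for selective observability are simplifying assumptions, not the authors' estimator.

```python
# Illustrative sketch only -- not the authors' estimator. Assumes a
# hypothetical case-level DataFrame with columns: investigator_id,
# placed (1 = foster care), subsequent_maltreatment (observed only when
# placed == 0), and predicted_risk from a model fit on home-released children.
import pandas as pd

def benchmark_gaps(cases: pd.DataFrame) -> pd.Series:
    """Compare each investigator's subsequent-maltreatment rate among
    children left at home with an algorithmic benchmark that places the
    same number of children (those with the highest predicted risk)."""
    gaps = {}
    for inv_id, group in cases.groupby("investigator_id"):
        n_placed = int(group["placed"].sum())
        # Observed rate among the children this investigator left at home.
        inv_rate = group.loc[group["placed"] == 0, "subsequent_maltreatment"].mean()
        # Counterfactual: the algorithm removes the n_placed highest-risk
        # children; its benchmark rate is the mean predicted risk of the rest.
        # (The paper instead recovers this rate via quasi-random assignment.)
        algo_rest = group.sort_values("predicted_risk", ascending=False).iloc[n_placed:]
        algo_rate = algo_rest["predicted_risk"].mean()
        gaps[inv_id] = inv_rate - algo_rate  # positive = worse than benchmark
    return pd.Series(gaps, name="skill_gap")
```

The key design point the sketch captures is holding the placement rate fixed, so any gap reflects which children are selected for placement rather than how many.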
Does Access to an Algorithmic Decision-Making Tool Change Child Protective Service Caseworkers’ Investigation Decisions?
Abstract
Despite concerns about their capacity to reinforce racial/ethnic biases, algorithmic decision-making tools are widely promoted as a way to improve efficiency across a range of policy areas. Unfortunately, despite extensive knowledge of how these tools could be used, we understand relatively little about how they are actually used by humans: how exposure to them changes the way individuals, especially those with deep system knowledge, choose to behave in light of this new algorithmic information. Child maltreatment investigation decisions are a setting where high-stakes decisions for families and children are traditionally made with imperfect background information, making it an ideal setting for considering how human-algorithm interactions unfold. Using a randomized controlled trial, we investigate the effects of providing an algorithmic decision-making tool on human decisions about whether to investigate a report of child maltreatment. We also explore whether the availability and use of the tool affect re-referral to Child Protective Services and/or foster care placement, and whether access to the tool changes the amount of time required to make decisions.
The Impact of Algorithmic Tools on Child Protection: Evidence from a Randomized Controlled Trial
Abstract
Machine learning tools have the potential to improve the allocation of services to recipients, but there is a limited understanding of how such tools are used by human experts in practice. We use a randomized controlled trial to evaluate the effects of human-algorithm interaction in a high-stakes public services context, Child Protective Services (CPS), where workers have about 10 minutes to decide whether to investigate a family and possibly remove a child from an unsafe home. The trial provides social workers with randomized access to an algorithmic risk score that accurately predicts whether a child will be removed from their home due to maltreatment. We find that giving workers access to the tool reduced child injury hospitalizations by 32 percent and narrowed racial disparities in CPS contact considerably. Surprisingly, despite an improvement in outcomes, workers using the tool were more likely to investigate children predicted as low-risk and less likely to investigate children predicted as high-risk, relative to the control group. Text analysis of social worker discussion notes suggests that algorithmic predictions allow workers to better focus their attention on other salient features of the allegation that may indicate maltreatment. Our results highlight the potential benefits and unexpected impacts of human-algorithm interaction in high-stakes contexts.
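As a rough illustration of how the randomized design maps to estimates, the sketch below tabulates intent-to-treat contrasts by predicted-risk group. The column names, the binary high-risk flag, and the simple group-mean comparison are hypothetical simplifications of the trial described above, not the authors' analysis.

```python
# Illustrative sketch, not the study's analysis code. Assumes a hypothetical
# referral-level DataFrame with columns: tool_access (randomized 0/1),
# investigated (0/1), injury_hosp (0/1), and predicted_high_risk (0/1).
import pandas as pd

def intent_to_treat(referrals: pd.DataFrame) -> pd.DataFrame:
    """Compare investigation and injury-hospitalization rates between
    workers randomized to see the risk score and the control group,
    separately for referrals flagged as high vs. low predicted risk."""
    return (
        referrals
        .groupby(["predicted_high_risk", "tool_access"])[["investigated", "injury_hosp"]]
        .mean()
        .unstack("tool_access")  # columns: control (0) vs. tool access (1)
    )
```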
Algorithms, Humans and Racial Disparities in Child Protective Services: Evidence from the Allegheny Family Screening Tool
Abstract
We ask whether providing decision-makers with a machine learning tool can reduce racial disparities. Our context is the implementation of the Allegheny Family Screening Tool (AFST), a predictive risk model that aims to help child protection workers decide which allegations of abuse or neglect to investigate. While the AFST does not dictate investigation decisions, referrals with the highest risk scores are "defaulted" to be screened in. Among this group of referrals, we find that the AFST reduced disparities in investigation rates. Using a triple difference strategy, we also find that the introduction of the AFST significantly reduced disparities in case opening and home removal rates for investigated referrals involving Black vs. White children.
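The triple-difference idea can be sketched as a single interacted regression. The variable names, the choice of the three difference dimensions (race, pre/post AFST rollout, and the defaulted high-score group), and the plain OLS specification below are assumptions for illustration only, not the authors' actual specification.

```python
# Illustrative sketch only -- not the authors' specification. Assumes a
# hypothetical referral-level DataFrame `df` with 0/1 indicators:
# black (Black vs. White child), post (after AFST rollout),
# high_score (referral defaulted to screen-in), and an outcome `removed`.
import statsmodels.formula.api as smf

# The coefficient on black:post:high_score is the triple-difference term:
# the change in the Black-White gap for defaulted referrals after the AFST,
# net of race-, period-, and score-group-specific differences.
model = smf.ols("removed ~ black * post * high_score", data=df).fit()
print(model.summary())
```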
Discussant(s)
Ezra Goldstein, Pennsylvania State University
Jason Baron, Duke University
JEL Classifications
- I3 - Welfare, Well-Being, and Poverty
- J0 - General