
Kaitlyn Philpott[1]

In February 2020, Netflix released a documentary miniseries titled The Trials of Gabriel Fernandez.[2] The series captures the horrific story of the abuse of an eight-year-old child, Gabriel Fernandez; his eventual murder by his mother and her boyfriend; and the systemic flaws in Los Angeles County that failed to protect him.[3] In episode five, the series discusses child-welfare agencies’ attempts to innovate and improve their systems by implementing predictive analytics.[4] As an example, the series presents the Allegheny Family Screening Tool (“AFST”), implemented in Allegheny County, Pennsylvania, in 2016, which helps Child Protection Hotline workers screen which reports should be investigated.[5] The AFST uses “a statistical technique called data mining to look at historical patterns in the data,” which is then used to make predictions about a case.[6] The tool pulls data from the county’s systems documenting families and their relationships with public services; the factors considered include family history, legal problems, drug and substance abuse, incarceration, public welfare use, and mental illness.[7] Another well-known predictive tool is the Eckerd Rapid Safety Feedback (“RSF”) tool developed by the Florida nonprofit Eckerd Kids.[8] Ten states, including Florida, have begun working with RSF.[9] However, in late 2017, Illinois stopped using the tool because of its unreliability: it flagged an alarming number of cases as needing protection while also failing to flag several serious ones.[10]
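To make the screening concept concrete, the sketch below shows, in schematic form, how a tool of this kind might fold binary history flags drawn from administrative records into a single screening score on a 1-to-20 band. The factor names, weights, and scoring formula here are invented for illustration only; they are not the actual AFST model.

```python
# Hypothetical sketch of a screening-score calculation. The factors and
# weights are invented for illustration; they do not reflect the AFST.

def risk_score(record, weights):
    """Map a weighted sum of binary history flags onto a 1-to-20 band."""
    raw = sum(weights[k] for k, present in record.items() if present)
    max_raw = sum(weights.values())
    return 1 + round(19 * raw / max_raw)  # scale [0, max_raw] onto 1..20

weights = {                      # illustrative weights only
    "prior_referrals": 3.0,
    "public_welfare_use": 1.5,
    "parental_incarceration": 2.0,
    "substance_abuse_history": 2.5,
}

family = {                       # a hypothetical family's recorded history
    "prior_referrals": True,
    "public_welfare_use": True,
    "parental_incarceration": False,
    "substance_abuse_history": False,
}

print(risk_score(family, weights))
```

A hotline worker would see only the final number, which is part of the concern discussed below: the score compresses many contestable data points into one figure whose reasoning is hard to unpack.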

Most of us are familiar with predictive analytics tailoring content in our Facebook news feeds based on our interests or tailoring Netflix suggestions based on our viewing history, but many are unfamiliar with its use in major government decision-making.[11] Although proponents point to the benefits of predictive analytics in child welfare, including efficiency and consistency, there are also dangers, including the transfer of biases and issues arising from the use of these tools in legal proceedings.

To begin, humans input the data used in predictive analytics; therefore, human error and bias are translated into the predictive tools, illustrating the “garbage in, garbage out” phenomenon.[12] Beyond this transfer of human bias, scholars argue that predictive tools are not reliable for making risk predictions at a specific, individual level because of the transfer of historical bias.[13] As noted with the AFST, these tools pull data from public systems that concentrate on populations and prevention, while child welfare generally focuses on investigating and prosecuting on an individual, case-by-case basis.[14] Poor communities and minority communities have historically been over-surveilled and overpoliced; therefore, they are overrepresented in the data sets.[15] These communities are also overrepresented because wealthier individuals with the means to access private services present with lower risk scores, while individuals using the same services through the county receive higher risk scores.[16] Errors can result in unwarranted intervention and family separation or in the failure to intervene when protection is needed.[17] At an even more granular level, the tools cannot account for family-member behavior, rehabilitation, or other important factors; child welfare involves too many variables, including factors the tools simply cannot compute.[18]
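The point about differential visibility can be illustrated with a toy example: two families with identical underlying service needs, where only the family using county services leaves a trail in the administrative data the tool can see. All names and numbers here are hypothetical.

```python
# Toy illustration of the "garbage in, garbage out" concern: the model
# only sees county-recorded service contacts, so a family using private
# services appears to have no history at all. Numbers are invented.

def recorded_contacts(true_contacts, uses_county_services):
    """Only county-service contacts appear in the administrative data."""
    return true_contacts if uses_county_services else 0

def toy_score(contacts):
    """A stand-in scoring rule: more recorded contacts, higher score."""
    return min(20, 1 + 2 * contacts)

# Two families with the same underlying need for services.
family_a = recorded_contacts(true_contacts=4, uses_county_services=True)
family_b = recorded_contacts(true_contacts=4, uses_county_services=False)

print(toy_score(family_a))  # county-service family scores higher
print(toy_score(family_b))  # private-service family appears low-risk
```

The disparity here comes entirely from what the data records, not from any difference in the families themselves, which is precisely the overrepresentation problem the scholarship describes.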

In The Trials of Gabriel Fernandez, four social workers on the case were charged with criminal negligence.[19] What would have been legally different if the workers had used a predictive tool like the AFST? Culpability will likely become more difficult to trace and challenge with the implementation of predictive analytics, because responsibility will be spread among not only the workers but also the developers and programmers of the predictive tool, and the tool itself, whose decisions can be difficult to trace to specific reasoning.[20] Because of the misconception that predictive algorithms are neutral, risk scores from these tools may begin to be introduced as “scientific” evidence to defend blamed actors, like the social workers, and to determine probable cause and child maltreatment.[21] However, given these shortfalls, and because the tools require further study and development, they should not be used in child-welfare legal decision-making.[22]

Overall, child-welfare agencies should treat predictive analytics as valuable tools, not as infallible predictors of the future. Algorithmic tools are statistical predictions; they are not crystal balls.[23] Artificial intelligence (“AI”) and predictive analytics will continue to be integrated into all walks of life, including child welfare. Allegheny County is working on an algorithm that will assess children’s risk of maltreatment at birth, and AI tools are being used to provide other solutions or predictions, like matching children with foster homes.[24] We need to be aware of the dangers, learn to counteract them, and remain vigilant and accountable in our own decision-making apart from the predictive analytics in order to improve our flawed systems, especially in a case-specific, human-centered field like child welfare that aims to protect the vulnerable.

[1] J.D. Candidate 2020, Florida International University.

[2] The Trials of Gabriel Fernandez: Improper Regard or Indifference (Netflix Feb. 26, 2020).

[3] Id.

[4] Id.; see Predictive Analytics, Fla. Inst. for Child Welfare (last visited Mar. 19, 2020) (“Predictive analytics is a technique that can be used to calculate a future event or outcome by using big data and machine learning. Big data refers to a process of gathering data from various data sources such as mental health data, substance use data, health data, and socioeconomic data.”).

[5] The Trials of Gabriel Fernandez, supra note 2; see Stephanie K. Glaberson, Coding over the Cracks: Predictive Analytics and Child Protection, 46 Fordham Urb. L.J. 307, 332 (2019).

[6] The Trials of Gabriel Fernandez, supra note 2 (quoting Rhema Vaithianathan, Co-director of Centre for Social Data Analytics).

[7] Id.; see Glaberson, supra note 5, at 332.

[8] Glaberson, supra note 5, at 332, 335.

[9] Id. at 335.

[10] Id.

[11] See Predictive Analytics, supra note 4; The Trials of Gabriel Fernandez, supra note 2.

[12] Michael Corrigan, Using Algorithms and Artificial Intelligence in Child Welfare, Chron. Soc. Change (Jan. 17, 2019); Glaberson, supra note 5, at 337; Predictive Analytics, supra note 4; Jessica Pryce et al., Using Artificial Intelligence, Machine Learning, and Predictive Analytics in Decision-Making 6 (2018) (“In 2016, for example, AI was used to judge a beauty contest, which resulted in nearly all the 44 winners resembling White or light skinned individuals. The algorithm, it was suggested, was trained using mostly photographs of White individuals, thus the algorithm was inherently biased, resulting in unintentional biased results.”).

[13] Glaberson, supra note 5, at 360.

[14] Id. at 326; see Emily Keddell, Algorithmic Justice in Child Protection: Statistical Fairness, Social Justice and the Implications for Practice, Soc. Sci., Oct. 8, 2019, at 17 (2019) (“Statistical scores relating to group similarities are used to inform an individually focussed [sic] decision that may not reflect that specific person’s risk level, but population level risk.”).

[15] Glaberson, supra note 5, at 345.

[16] Sarah Valentine, Impoverished Algorithms: Misguided Governments, Flawed Technologies, and Social Control, 46 Fordham Urb. L.J. 364, 393 (2019).

[17] Glaberson, supra note 5, at 339–40.

[18] Corrigan, supra note 12.

[19] The Trials of Gabriel Fernandez, supra note 2.

[20] Valentine, supra note 16, at 393.

[21] Id. at 397.

[22] Glaberson, supra note 5, at 360.

[23] See id. at 338. Although “[t]here is a large body of literature that would suggest that humans are not particularly good crystal balls,” neither, necessarily, are these algorithmic tools. The Trials of Gabriel Fernandez, supra note 2 (quoting Emily Putnam-Hornstein, Director of the Children’s Data Network).

[24] Id. at 335; Valentine, supra note 16, at 403; see Kitty Knowles, Okra AI Is Helping Foster Kids Find Stable Homes, Sifted (Feb. 6, 2019) (The British startup Okra “us[es] artificial intelligence (AI) to flag less stable foster families who might need extra support—and to help better foster matches be made in the first place . . . [and] use[s] historic data on successful and unsuccessful foster matches to make evolving predictions about the stability of a match.”).