Session 197: Methodological Issues and Lessons Learned with Administrative Data: A State Child Welfare Example (Society for Social Work and Research 24th Annual Conference - Reducing Racial and Economic Inequality)

Schedule:
Saturday, January 18, 2020: 8:00 AM-9:30 AM
Marquis BR Salon 10, ML 2 (Marriott Marquis Washington DC)
Cluster: Child Welfare (CW)
Speakers/Presenters:
Brenda Kurz, PhD, University of Connecticut, Patricia Carlson, PhD, University of Connecticut, Megan Feely, PhD, University of Connecticut, Melissa Ives, MSW, University of Connecticut and Joshua Pierce, BA, University of Connecticut
Administrative data are a necessary component of many evaluations and can be rich in potential. However, these data, especially those collected by public agencies, can also present major challenges to researchers, including data use regulations, varying laws governing how long data are kept, competing data needs, and complex data structures.

Using the evaluation of a state administered child welfare program as a case study, research team members will discuss challenges and lessons learned from working with administrative data collected by the state's child welfare agency. Further, panelists will present their approach to addressing issues germane to administrative data sets. Finally, panelists will facilitate a conversation with participants on using administrative data. Topics include:

Selecting index events using different time frames: Return rates are a common metric for many types of treatment evaluation. In child welfare, 'subsequent report rates' are a routinely referenced statistic. However, when families can have multiple reports over long periods of time, the seemingly perfunctory choice of a reference point for distinguishing subsequent from prior reports has non-trivial consequences. Selecting this 'index' report and the corresponding analysis timeframes affects not only the timeliness of the analysis sample but can also interact with chronological patterns in the data, such as seasonal reporting trends or proximate re-reporting rates.
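The index-selection logic above can be sketched in a few lines. This is a minimal illustration only: the selection rule (first report within an enrollment window), the follow-up length, and all field names are hypothetical, not the evaluation's actual definitions.

```python
from datetime import date, timedelta

def classify_reports(report_dates, window_start, window_end, followup_days=365):
    """Pick an index report (here: the first report falling in the enrollment
    window) and split the family's remaining reports into prior and subsequent
    counts. The rule and the window lengths are illustrative assumptions."""
    in_window = sorted(d for d in report_dates if window_start <= d <= window_end)
    if not in_window:
        return None  # family had no report during the window
    index = in_window[0]
    prior = sum(1 for d in report_dates if d < index)
    subsequent = sum(
        1 for d in report_dates
        if index < d <= index + timedelta(days=followup_days)
    )
    return {"index": index, "prior": prior, "subsequent": subsequent}

# A family with four reports; the enrollment window is calendar year 2018.
reports = [date(2016, 5, 1), date(2018, 3, 10), date(2018, 9, 2), date(2019, 2, 20)]
result = classify_reports(reports, date(2018, 1, 1), date(2018, 12, 31))
```

Shifting the enrollment window or the follow-up length reclassifies the same reports as prior versus subsequent, which is exactly why the choice of index event is consequential.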

Unit of analysis: Child welfare systems report on child maltreatment at the individual child level (e.g., percent of children with substantiated maltreatment who are repeat victims of substantiated maltreatment). Logically, their administrative data are organized to report child-level information. In this evaluation, however, the family is the unit of treatment; therefore, child-level data have to be reorganized to evaluate family-level results. Regrouping child maltreatment report data to the family level raises questions at key decision points.
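One such decision point is the aggregation rule when collapsing children to families. A minimal sketch, assuming hypothetical field names and an "any child" rule (any/all/majority are all defensible choices):

```python
from collections import defaultdict

# Child-level rows as they might arrive from an agency system;
# field names are illustrative, not an actual data dictionary.
child_rows = [
    {"family_id": "F1", "child_id": "C1", "substantiated": True},
    {"family_id": "F1", "child_id": "C2", "substantiated": False},
    {"family_id": "F2", "child_id": "C3", "substantiated": False},
]

def to_family_level(rows):
    """Collapse child-level records to one record per family, counting the
    family as having substantiated maltreatment if ANY child does -- one of
    several possible aggregation rules."""
    families = defaultdict(list)
    for row in rows:
        families[row["family_id"]].append(row)
    return {
        fid: {
            "n_children": len(kids),
            "any_substantiated": any(k["substantiated"] for k in kids),
        }
        for fid, kids in families.items()
    }

family_data = to_family_level(child_rows)
```

Under an "all children" rule instead, family F1 would flip from substantiated to not, illustrating how the regrouping decision changes family-level outcomes.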

Expunging: Expunging laws and policies vary across states, but the majority of states expunge at least some maltreatment reports. The specific policies and the administrative handling of expunged records have implications for working with administrative data within and across states. For example, in states where unsubstantiated records are completely expunged after five years, analyses that include prior and subsequent reports must balance how many years of prior-report and subsequent-report data to include within the five-year window of accurate data. Techniques for determining how expunging influences the available data, and implications for analyses, will be discussed.
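The five-year constraint can be made concrete with a simple feasibility check: does a chosen prior-report lookback window fall entirely inside the span for which unsubstantiated reports are still on file? A hypothetical sketch; real policies vary by state and by report disposition, and the year counts here use a 365-day approximation.

```python
from datetime import date, timedelta

EXPUNGE_YEARS = 5  # example policy from the text: unsubstantiated records kept 5 years

def lookback_is_reliable(index_date, extraction_date, lookback_years):
    """Return True if the full prior-report lookback window sits inside the
    period for which unsubstantiated reports are still retained, relative to
    the date the data were extracted. An illustrative check only."""
    earliest_reliable = extraction_date - timedelta(days=365 * EXPUNGE_YEARS)
    lookback_start = index_date - timedelta(days=365 * lookback_years)
    return lookback_start >= earliest_reliable

# Data extracted 2019-06-01; a 2-year lookback from a 2017-06-01 index
# report stays inside the reliable window...
ok = lookback_is_reliable(date(2017, 6, 1), date(2019, 6, 1), 2)
# ...but a 4-year lookback reaches into years where unsubstantiated
# reports may already have been expunged.
too_far = lookback_is_reliable(date(2017, 6, 1), date(2019, 6, 1), 4)
```

The trade-off in the text follows directly: every extra year of lookback spends part of the fixed accurate-data window, leaving less room for follow-up.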

Measures: Outcomes for publicly administered programs are often prescribed by legislation or policy. While important, these prescribed outcomes are often not sufficient to fully understand the efficacy of a program, requiring additional outcome measures. Challenges arise in merging additional data sources and integrating these measures.

Data quality: Administrative data are known to have data quality issues, including accuracy, consistency in collection procedures and requirements, completeness, and timeliness. When analyses involve multiple data sources, the accuracy of the identifiers used to link data is paramount. Effective communication with data providers improves understanding of the likely causes of data quality issues, which can influence data-handling decisions and may allow for feedback to correct current issues or prevent future ones.
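Before merging sources, the identifier problem can be surfaced with a basic diagnostic: how many link keys are missing, duplicated within a source, or unmatched across sources. A minimal sketch with made-up identifiers; real linkage of inconsistent identifiers may require probabilistic matching, which is beyond this illustration.

```python
def link_key_report(left_ids, right_ids):
    """Profile identifier quality before merging two administrative sources:
    missing keys, duplicates within a source, and keys with no match in the
    other source. A simple pre-merge diagnostic sketch."""
    left = [i for i in left_ids if i]    # drop missing/empty identifiers
    right = [i for i in right_ids if i]
    return {
        "left_missing": len(left_ids) - len(left),
        "right_missing": len(right_ids) - len(right),
        "left_dupes": len(left) - len(set(left)),
        "right_dupes": len(right) - len(set(right)),
        "unmatched_left": len(set(left) - set(right)),
        "unmatched_right": len(set(right) - set(left)),
    }

report = link_key_report(["A1", "A2", "A2", None], ["A1", "A3"])
```

Sharing a report like this with the data provider is one concrete form of the feedback loop described above: it pinpoints whether problems stem from entry errors, duplicated records, or coverage gaps.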
