Abstract: Dismantling Implicit Bias in Survey Measures through a Culturally Responsive and Equitable Measure Development Process (Society for Social Work and Research 27th Annual Conference - Social Work Science and Complex Problems: Battling Inequities + Building Solutions)

All in-person and virtual presentations are in Mountain Standard Time Zone (MST).



Schedule:
Friday, January 13, 2023
Camelback B, 2nd Level (Sheraton Phoenix Downtown)
* noted as presenting author
Sarah Kaye, PhD, Principal, Kaye Implementation & Evaluation
Lucia Reyes, PhD, Research Associate, Kaye Implementation & Evaluation, OH
Background

Selecting appropriate outcome measures is a critical element of a strong and culturally responsive evaluation. When researchers rely on previously validated measures that have not been examined from a social justice lens, they run the risk of privileging the Western ideals and implicit biases that are pervasive in widely used instruments. Using data from an evaluation of the Ohio Kinship and Adoption Navigator Program (OhioKAN), this study describes an alternative approach to survey measure development aimed at sharing power and decision making with the service population and service providers by privileging their perspectives from the beginning of the outcome measure selection process.

Methods

The OhioKAN evaluation team’s approach to measure development included the following activities:

  1. Crowdsourcing: Using a three-item open-ended survey, we solicited feedback from kinship and adoptive caregivers who had participated in the program. The survey asked caregivers to describe the outcomes they would like to see for themselves and their families. Using an inductive approach, a multi-racial and multi-ethnic team qualitatively coded responses from 59 caregivers. Emergent themes were used as the basis for prioritizing evaluation outcomes, while the qualitative text offered specific language for generating an initial pool of survey items.

  2. Collaborating with service providers: We discussed caregivers’ perspectives with program staff and advisors to confirm alignment with the program’s goals and theory of change. With advice from program partners, we identified existing measures that had been used in other kinship navigation programs, were previously validated, and aligned with priority constructs identified by caregivers.
  3. Pre-testing measures: We assessed the worldview, face validity, and understandability of newly-developed items and existing measures in three ways: (a) with an expert advisor on culturally responsive and equitable evaluation (CREE), (b) during cognitive interviews with 46 kinship and adoptive caregivers selected via maximum diversity sampling, and (c) in consultation with program staff. Feedback was used to establish content validity or otherwise revise or discard problematic items.
  4. Pilot testing: 109 OhioKAN families completed a pilot survey that included the newly-developed measures and previously validated measures. Psychometric properties were assessed.

Results

Results from crowdsourcing highlighted the need to measure caregivers' (a) perception of their caregiver capacities, (b) access to resources, and (c) social support. Research-validated measures of these constructs had acceptable psychometric properties but revealed problems of implicit bias during pre-testing. Two newly-developed measures demonstrated good content and multi-cultural validity during pre-testing, and good internal consistency and either criterion-related or convergent validity (20-item caregiver capacities measure: α=.92, r=.5, p<.001; 8-item community supports to access resources measure: α=.90, r=.5, p<.001). A 10-item adaptation of a validated measure of social connections showed good content and multi-cultural validity, and good psychometric properties (α=.94, r=.3, p<.01). The newly-developed measures showed higher internal consistency than the previously validated measures used as validation criteria.

Conclusions

An innovative survey development process that privileges the perspectives of the service population from the outset can yield relevant outcome measures with increased multi-cultural validity, while outperforming previously validated measures on psychometric properties.