Selecting appropriate outcome measures is a critical element of a strong and culturally responsive evaluation. When researchers rely on previously validated measures that have not been examined through a social justice lens, they risk privileging Western ideals and perpetuating the implicit biases that are pervasive in widely used instruments. Using data from an evaluation of the Ohio Kinship and Adoption Navigator Program (OhioKAN), this study describes an alternative approach to survey measure development aimed at sharing power and decision making with the service population and service providers by privileging their perspectives from the beginning of the outcome measure selection process.
Methods
The OhioKAN evaluation team’s approach to measure development included the following activities:
- Crowdsourcing: Using a three-item open-ended survey, we solicited feedback from kinship and adoptive caregivers who had participated in the program. The survey asked caregivers to describe the outcomes they would like to see for themselves and their families. Using an inductive approach, a multi-racial and multi-ethnic team qualitatively coded responses from 59 caregivers. Emergent themes served as the basis for prioritizing evaluation outcomes, while the qualitative text offered specific language for generating an initial pool of survey items.
- Collaborating with service providers: We discussed caregivers’ perspectives with program staff and advisors to confirm alignment with the program’s goals and theory of change. With advice from program partners, we identified existing measures that had been used in other kinship navigation programs, were previously validated, and aligned with priority constructs identified by caregivers.
- Pre-testing measures: We assessed the worldview, face validity, and understandability of newly developed items and existing measures in three ways: (a) with an expert advisor on culturally responsive and equitable evaluation (CREE), (b) during cognitive interviews with 46 kinship and adoptive caregivers selected via maximum diversity sampling, and (c) in consultation with program staff. Feedback was used to establish content validity and to revise or discard problematic items.
- Pilot testing: A total of 109 OhioKAN families completed a pilot survey that included the newly developed measures alongside previously validated measures. Psychometric properties of all measures were assessed.
Results
Results from crowdsourcing highlighted the need to measure caregivers' (a) perception of their caregiver capacities, (b) access to resources, and (c) social support. Research-validated measures assessing these constructs had acceptable psychometric properties but revealed problems of implicit bias during pre-testing. Two newly developed measures demonstrated good content and multi-cultural validity during pre-testing, as well as good internal consistency and either criterion-related or convergent validity (20-item caregiver capacities measure: α=.92, r=.5, p<.001; 8-item community supports to access resources measure: α=.90, r=.5, p<.001). A 10-item adaptation of a validated measure of social connections showed good content and multi-cultural validity and good psychometric properties (α=.94, r=.3, p<.01). The newly developed measures showed higher internal consistency than the previously validated criteria against which they were validated.
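The internal-consistency coefficients reported above (α) conventionally denote Cronbach's alpha. As a minimal sketch of how such a coefficient is computed from a respondents-by-items score matrix (the data below are invented for illustration, not drawn from the OhioKAN pilot):

```python
from statistics import pvariance

def cronbach_alpha(scores):
    """Cronbach's alpha for a matrix of survey scores.

    `scores` is a list of rows, one per respondent; each row holds that
    respondent's score on every item of the measure.
    """
    k = len(scores[0])                                    # number of items
    item_vars = [pvariance(col) for col in zip(*scores)]  # variance of each item
    total_var = pvariance([sum(row) for row in scores])   # variance of summed scores
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical responses from four respondents on three Likert-type items.
responses = [
    [4, 5, 4],
    [2, 2, 3],
    [5, 5, 5],
    [3, 3, 2],
]
print(round(cronbach_alpha(responses), 2))  # → 0.94
```

By convention, values of α at or above .90, like those reported for the new measures, indicate high internal consistency.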
Conclusions
An innovative survey development process that privileges the perspectives of the service population can yield relevant outcome measures with increased multi-cultural validity that also outperform previously validated measures on psychometric properties.