Society for Social Work and Research

Sixteenth Annual Conference: Research That Makes A Difference: Advancing Practice and Shaping Public Policy
11-15 January 2012 | Grand Hyatt Washington | Washington, DC

157 Practical Uses of IRT and DIF for Social Work Research and Evaluation

Saturday, January 14, 2012: 4:30 PM-6:15 PM
Wilson (Grand Hyatt Washington)
Cluster: Research Design and Measurement
Speakers/Presenters:
Carl F. Siebert, MBA, MS, Rutgers University, and Darcy Clay Siebert, PhD, Rutgers University
Most of us have experienced surprising findings in our research: respondents answering questions in an unexpected way, a failure to replicate findings when using a standardized measure, and so on. Before drawing conclusions about the meaning of an unexpected finding, it is important to determine whether a question or measure has been interpreted differently by subgroups of the sample. Item Response Theory (IRT) and Differential Item Functioning (DIF) analysis provide a probabilistic look at people's responses, both numerically and graphically. This workshop will discuss a variety of situations in which IRT and DIF can help researchers explore their data sets more thoroughly and answer questions like:

• Can unexpected findings be a consequence of group membership, even when not discernible by traditional analytic methods?

• Do questionnaire items measure something other than what they were intended to measure? If so, can this be attributed to subgroup characteristics?

• Do particular questions or measures place a subset of the respondents at a disadvantage?

• Can the differences in responses between groups of people be viewed graphically?

IRT, also called latent trait theory, uses responses to a set of items to estimate each respondent's position on a latent construct, and then controls for that position while modeling the probability of respondents' answers to the items. Latent constructs tend to be the qualities that are difficult to measure but of great interest to many social work researchers: constructs such as social support, prejudice, and resilience, among many others.
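
As a concrete illustration, the two-parameter logistic (2PL) model is one common IRT formulation: the probability of endorsing an item depends on the respondent's latent position (theta), the item's discrimination (a), and its location or difficulty (b). The sketch below is a minimal illustration in Python; the function name and parameter values are invented, not taken from the workshop materials.

    import numpy as np

    def irt_2pl(theta, a, b):
        # Probability of endorsing an item, given latent position theta,
        # item discrimination a, and item location (difficulty) b.
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    # A respondent half a standard deviation above the mean on, say,
    # resilience, answering an item of average difficulty:
    print(irt_2pl(theta=0.5, a=1.2, b=0.0))  # approximately 0.65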

DIF occurs when respondents from two or more groups hold the same position on a latent construct but respond differently to an item or question. Responding differently despite occupying the same position on the latent construct tells us that the item is biased: it is measuring something beyond what it was constructed to measure. Often the bias places one of the groups at an unexplained disadvantage, and it can undermine the validity of the measure.
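
One standard way to screen for DIF, though not necessarily the approach demonstrated in this workshop, is logistic regression: regress each item response on a proxy for the latent position, group membership, and their interaction. After controlling for the trait, a significant group effect suggests uniform DIF, and a significant interaction suggests nonuniform DIF. The sketch below simulates hypothetical data in Python; all names and values are illustrative.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 500
    df = pd.DataFrame({
        "total": rng.normal(0, 1, n),    # proxy for latent position
        "group": rng.integers(0, 2, n),  # 0/1 subgroup membership
    })
    # Simulate an item biased against group 1 at equal trait levels
    logit = 1.0 * df["total"] - 0.8 * df["group"]
    df["item"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

    # Uniform DIF: significant group effect after controlling for the
    # trait proxy. Nonuniform DIF: significant trait-by-group interaction.
    fit = smf.logit("item ~ total + group + total:group", data=df).fit(disp=0)
    print(fit.summary())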

To address this very real concern about our measures and our research, we must make use of the statistical procedures that are available to us. This workshop will provide an overview of both IRT and DIF, and demonstrate a variety of situations in which they can be used to improve our research. For example,

• Identifying potentially biased items in a measure, using the CES-D as an example

• Exploring response patterns graphically, using a locus of control scale as an example (see the graphical sketch after this list)

• Using both Mplus 6.1 and free software to conduct an IRT/DIF analysis
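
As a taste of the graphical exploration mentioned above, the sketch below plots item characteristic curves for two hypothetical groups. When the curves diverge for respondents at the same trait level, the item may exhibit DIF. The parameter values and group labels are invented for illustration; a real analysis would estimate them from data in Mplus or a free IRT package.

    import numpy as np
    import matplotlib.pyplot as plt

    def irt_2pl(theta, a, b):
        # 2PL response probability (see the earlier sketch)
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    theta = np.linspace(-3, 3, 200)
    plt.plot(theta, irt_2pl(theta, a=1.2, b=0.0), label="Group A")
    plt.plot(theta, irt_2pl(theta, a=1.2, b=0.7), label="Group B")
    plt.xlabel("Latent trait (theta)")
    plt.ylabel("P(endorse item)")
    plt.title("Different response probabilities at the same trait level")
    plt.legend()
    plt.show()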

Participants can expect a rich exchange of ideas along with specific examples. Materials and examples will be provided digitally during the workshop or upon request afterward.

This workshop assumes that participants are unfamiliar with IRT and DIF or have had limited exposure to these techniques. However, the information provided will likely expand the awareness of even more experienced IRT/DIF researchers.
