The workshop will focus on the following topics.
1. In all program evaluations, evaluators must balance the data to meet the assumption of strongly ignorable treatment assignment. A Monte Carlo study comparing three simple corrective methods (i.e., regression, matching, and subclassification) under five data-generation scenarios shows that regression, and simple corrective methods in general, do not automatically correct for nonignorable treatment assignment.

2. When the endogeneity problem is present, evaluators face a number of choices for balancing data. These corrective approaches include regression on the propensity score, matching on the propensity score, weighting and regression, blocking and regression, kernel-based matching, regression discontinuity designs, instrumental variables, Bayesian approaches, and others. Among these, matching estimators, propensity score weighting, and propensity score subclassification are the most popular and have been increasingly adopted by social and behavioral scientists (Guo & Fraser, 2010). Following Heckman and Robb (1988), a Monte Carlo study simulating two conditions (i.e., "selection on observables" and "selection on unobservables") shows that regression is more "asymptotically biased" and less robust than the three popular methods of propensity score analysis (PSA). This is particularly true when selection is on unobservables (see the illustrative sketch after this list).

3. Implications. Focusing on three typical evaluation problems in social work research (i.e., evaluations of school-based interventions, substance-abuse treatment programs, and child-welfare interventions), we discuss the implications of the endogeneity problem, why regression or simple covariance control may yield biased findings, and the importance of applying robust approaches under these circumstances to enhance the internal validity of evaluations.
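
To make the "selection on observables" versus "selection on unobservables" distinction concrete, the Python sketch below simulates both conditions and compares a covariate-adjusted regression estimate with an inverse propensity score weighting estimate, where each estimator conditions only on the observed covariate. This is a minimal illustration under assumed settings, not the design used in the study: the sample size, coefficients, number of replications, and the restriction to two estimators are arbitrary choices made for this sketch.

# Minimal Monte Carlo sketch (illustrative only; parameters are assumptions).
# Treatment assignment depends on an observed covariate x and, in the
# "selection on unobservables" condition, also on an unobserved confounder u
# that affects the outcome. Both estimators below use x only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
TRUE_EFFECT = 2.0  # treatment effect built into the data-generating process

def simulate(n=2000, selection_on_unobservables=True):
    x = rng.normal(size=n)                  # observed covariate
    u = rng.normal(size=n)                  # unobserved confounder
    gamma = 1.0 if selection_on_unobservables else 0.0
    logits = 0.8 * x + gamma * u            # treatment assignment model
    d = rng.binomial(1, 1.0 / (1.0 + np.exp(-logits)))
    y = TRUE_EFFECT * d + 1.0 * x + 1.5 * u + rng.normal(size=n)
    return x, d, y

def regression_effect(x, d, y):
    # Least-squares regression of y on treatment and the observed covariate only.
    X = np.column_stack([np.ones_like(x), d, x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

def ipw_effect(x, d, y):
    # Propensity score estimated from the observed covariate only, then
    # inverse propensity score weighting of treated and control means.
    ps = LogisticRegression().fit(x.reshape(-1, 1), d).predict_proba(x.reshape(-1, 1))[:, 1]
    w1, w0 = d / ps, (1 - d) / (1 - ps)
    return np.sum(w1 * y) / np.sum(w1) - np.sum(w0 * y) / np.sum(w0)

for condition in (False, True):
    reg_est, ipw_est = [], []
    for _ in range(200):                    # 200 replications per condition
        x, d, y = simulate(selection_on_unobservables=condition)
        reg_est.append(regression_effect(x, d, y))
        ipw_est.append(ipw_effect(x, d, y))
    label = "unobservables" if condition else "observables"
    print(f"selection on {label}: "
          f"regression bias = {np.mean(reg_est) - TRUE_EFFECT:+.3f}, "
          f"IPW bias = {np.mean(ipw_est) - TRUE_EFFECT:+.3f}")

In this toy setup, both estimators recover the true effect on average under selection on observables, while both remain biased under selection on unobservables because the confounder u is excluded from every model. That is the sense in which simple corrective methods conditioning only on observed covariates do not automatically correct for nonignorable treatment assignment; the relative performance of the full set of estimators examined in the workshop depends on the richer designs discussed there.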