Methods:
Administrative data were obtained from a prison-based SOT program. The sample includes 373 participants who volunteered for treatment, were released from prison between 1999 and 2003, and were followed until 2008. Not all sex offenders who volunteered for SOT received SOT; the untreated group had therefore been deselected from treatment based on unknown criteria. Four PSA methods (greedy matching with and without generalized boosted regression [GBR], and propensity score weighting with and without GBR) were used to balance selected conditioning covariates and to estimate treatment effects. Cox proportional hazards regression was used to assess differences in the hazard of re-incarceration between treated and untreated participants in the matched and original samples, and propensity score weights were used to assess effects within weighted sampling schemes. The log odds of receiving treatment (the logit of the propensity score) was used for model comparison.
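As a minimal illustration of this general workflow (a sketch with hypothetical variable names and synthetic data, not the authors' analysis; a GBR-based propensity model would replace the logistic regression step shown here), the following estimates propensity scores, converts them to logits, performs 1:1 greedy nearest-neighbor matching within a caliper, and computes inverse-probability-of-treatment weights:

    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    # Synthetic, illustrative data: treatment flag plus two hypothetical conditioning covariates
    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "treated": rng.integers(0, 2, 373),
        "age_at_release": rng.normal(38, 10, 373),
        "prior_convictions": rng.poisson(2, 373),
    })
    X = df[["age_at_release", "prior_convictions"]]

    # 1. Estimate propensity scores and convert to the logit (log odds of receiving treatment)
    ps_model = LogisticRegression(max_iter=1000).fit(X, df["treated"])
    df["pscore"] = ps_model.predict_proba(X)[:, 1]
    df["logit"] = np.log(df["pscore"] / (1 - df["pscore"]))

    # 2. Greedy 1:1 nearest-neighbor matching on the logit, without replacement,
    #    using a common caliper convention of 0.2 SD of the logit
    caliper = 0.2 * df["logit"].std()
    treated = df[df["treated"] == 1].sort_values("logit")
    controls = df[df["treated"] == 0].copy()
    pairs = []
    for i, row in treated.iterrows():
        dist = (controls["logit"] - row["logit"]).abs()
        if len(dist) and dist.min() <= caliper:
            j = dist.idxmin()
            pairs.append((i, j))
            controls = controls.drop(j)   # each control is used at most once

    # 3. Inverse-probability-of-treatment weights for the weighting approach
    df["iptw"] = np.where(df["treated"] == 1, 1 / df["pscore"], 1 / (1 - df["pscore"]))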
Results:
Bivariate tests showed that many covariates differed significantly (p < .05) between groups before matching. After matching, bivariate tests were re-examined for remaining selection effects in the greedy matching models, and imbalance checks were conducted for the propensity score weighting approaches.
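One common way such checks are carried out (a sketch with hypothetical variable names and synthetic values, not the authors' code) is to repeat a simple bivariate test on each covariate before and after matching; for weighted samples, weighted comparisons or standardized mean differences serve the same purpose:

    from numpy.random import default_rng
    from scipy import stats

    # Synthetic, illustrative covariate values for treated vs. untreated participants
    rng = default_rng(1)
    treated_age = rng.normal(40, 9, 180)      # hypothetical treated group
    untreated_age = rng.normal(36, 10, 193)   # hypothetical untreated group

    # Pre-matching bivariate test: a significant difference (p < .05) flags a selection effect
    res = stats.ttest_ind(treated_age, untreated_age, equal_var=False)
    print(f"pre-matching: t = {res.statistic:.2f}, p = {res.pvalue:.4f}")

    # The same test is repeated on the matched (or weighted) sample; covariates that remain
    # significant indicate residual imbalance the PSA model failed to remove.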
The covariance-control model indicated an inflated treatment effect (n = 373; HR = .453; p = .001). The models that performed best did not show significant effects of SOT: nearest neighbor matching (n = 212; HR = .647; p = .114) and nearest neighbor matching with GBR (n = 220; HR = .599; p = .07). Propensity score weighting (n = 373) could not remove covariate imbalance. However, all models converged on a hazard ratio below 1, meaning that treatment slows the hazard of re-incarceration. Given that results from any of these methods remain subject to unmeasured bias, these findings are both reassuring and revealing.
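For interpretation, in a generic Cox proportional hazards specification (where x stands in for whatever conditioning covariates a given model retains), the treatment indicator enters the hazard multiplicatively, so a hazard ratio below 1 corresponds to a proportional reduction in the re-incarceration hazard; for example, the nearest neighbor estimate of HR = .647 implies roughly a 35% lower hazard for treated participants:

\[ h(t \mid T, \mathbf{x}) = h_0(t)\,\exp(\beta T + \boldsymbol{\gamma}^{\top}\mathbf{x}), \qquad \mathrm{HR} = e^{\beta}, \qquad \mathrm{HR} = .647 \;\Rightarrow\; 1 - .647 \approx 35\%\ \text{lower hazard}. \]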
Implications:
This study demonstrates the utility of corrective modeling approaches when experimental designs are not feasible. Model comparisons illustrated the influence of selection bias on effect estimates. The results caution social work researchers against relying on a single corrective modeling approach. Examined in isolation, PSA models that substantially reduce the sample size may yield misleading causal inferences. This underscores the importance of testing multiple PSA models to ensure that a balanced sample is achieved and to maximize sample retention to preserve statistical power.