Abstract: Examination of the Number of Halton Draws Required for Valid Estimation of Random Parameters in Mixed-Logit Models of Data from Discrete Choice Experiments (Society for Social Work and Research 24th Annual Conference - Reducing Racial and Economic Inequality)

337P Examination of the Number of Halton Draws Required for Valid Estimation of Random Parameters in Mixed-Logit Models of Data from Discrete Choice Experiments

Schedule:
Friday, January 17, 2020
Marquis BR Salon 6 (ML 2) (Marriott Marquis Washington DC)
Alan Ellis, PhD, Assistant Professor, North Carolina State University, Raleigh, NC
Esther deBekker-Grob, PhD, Assistant Professor, Erasmus University Rotterdam, Rotterdam, Netherlands
Kirsten Howard, PhD, Professor, University of Sydney, Sydney, Australia
Kathleen Thomas, PhD, Associate Professor, University of North Carolina at Chapel Hill, Chapel Hill, NC
Emily Lancsar, PhD, Professor, Australian National University, Canberra, Australia
Mandy Ryan, PhD, Professor, University of Aberdeen, Aberdeen, United Kingdom
John Rose, PhD, Professor, University of Technology Sydney, Sydney, Australia
Background and Purpose

Discrete choice experiments (DCEs) are widely used to measure healthcare preferences. As a potential tool for incorporating the preferences of racially and socioeconomically diverse populations into intervention design, DCEs could enhance social work research. Increasingly, analysts use mixed-logit models to analyze DCE data, with Halton draws to simulate random parameters. This approach assumes uncorrelated random parameters with certain (often normal) distributions. Using too few Halton draws may violate these assumptions, causing bias, inaccurate standard errors, and suboptimal intervention, program, or policy recommendations. However, guidance about the number of draws is lacking. Systematic review data show that the number of draws used is rarely reported, highly variable, and unrelated to the number of random parameters. We therefore developed guidance about the number of Halton draws to use in mixed-logit models.

Methods

In R, we simulated random parameters using 50 Halton sequences with 50 to 10,000 draws. We (1) assessed normality of random parameters by plotting the results of univariate (Shapiro-Wilk) and multivariate (Henze-Zirkler) normality tests, (2) measured and plotted correlations among random parameters, (3) assessed bias and relative efficiency in a real-data example, using mixed-logit models with 5, 10, and 15 random parameters and 250 to 20,000 draws, and (4) evaluated current practice by overlaying plots with data on modeling practices from 40 recent health-related DCEs. 
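As an illustration (not the authors' code), the following R sketch carries out checks of this kind, assuming the randtoolbox and MVN packages; the numbers of draws and random parameters are illustrative.

## Minimal sketch: generate Halton-based normal draws and check the
## assumptions described above (univariate/multivariate normality, correlations).
library(randtoolbox)  # halton()
library(MVN)          # mvn() with the Henze-Zirkler test

n_draws <- 2000   # number of Halton draws (Shapiro-Wilk in R requires <= 5000)
n_par   <- 10     # number of random parameters (Halton dimensions)

# Uniform Halton sequence, transformed to standard-normal draws
u <- halton(n_draws, dim = n_par)
z <- qnorm(u)

# (1) Univariate normality: Shapiro-Wilk p-value for each random parameter
sw_p <- apply(z, 2, function(col) shapiro.test(col)$p.value)

# (2) Multivariate normality: Henze-Zirkler test
hz <- mvn(as.data.frame(z), mvnTest = "hz")$multivariateNormality

# (3) Correlations among random parameters: largest absolute off-diagonal value
r <- cor(z)
max_abs_cor <- max(abs(r[upper.tri(r)]))

print(sw_p)
print(hz)
print(max_abs_cor)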

Results

Univariate normality. With 500 draws and 10 random parameters (or 1,000 draws and 12 random parameters), at least one random parameter departed from normality. With 500 draws and 17 random parameters (or 1,000 draws and 22 random parameters), half of the random parameters departed from normality.

Multivariate normality. With 7 or more random parameters, the Henze-Zirkler p-value decreased; keeping it above .05 with 11 random parameters required 4,000 draws. Based on actual modeling practices, 16/40 recently published DCEs (40%) likely used too few draws to achieve multivariate normality.

Correlations among random parameters. Keeping correlations below 0.2 required 250 draws when there were 10-15 random parameters and 1,000 draws when there were 22 random parameters. Based on actual modeling practices, 5/40 recently published DCEs (13%) likely had correlations >0.1 between random parameters and 2/40 (5%) likely had correlations >0.2, violating model assumptions.

Real-data example. In models with more random parameters and fewer draws, we observed bias and incorrect standard errors. With 15 random parameters, estimates were unstable even with 20,000 draws.

Conclusions and Implications

Stable mixed-logit estimation requires fewer than 10 random parameters and at least 1,000 to 2,500 draws. Only 14/40 recent DCEs (35%) met both conditions. Future studies should develop more specific guidelines and explore alternative methods. Meanwhile, the number of draws should increase with the number of random parameters, exceed customary levels, and be reported. Analysts should use a sufficient number of draws for all analyses, then verify final results with an even larger number of draws. Although DCEs hold promise for social work, failure to model the data appropriately may result in biased estimates, incorrect standard errors, and poor intervention, program, and policy decisions.
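As an illustration of the fit-then-verify workflow recommended above (not the authors' analysis), the following R sketch fits a mixed-logit model with a specified number of Halton draws and then refits it with more draws to check that the estimates are stable. It assumes the mlogit package and its bundled Electricity data; the draw counts are illustrative.

## Minimal sketch: fit with a given number of Halton draws, refit with more,
## and compare the estimates.
library(mlogit)

data("Electricity", package = "mlogit")
# mlogit.data() reshapes the wide-format data (newer mlogit versions use dfidx())
Electr <- mlogit.data(Electricity, id.var = "id", choice = "choice",
                      varying = 3:26, shape = "wide", sep = "")

spec <- choice ~ pf + cl + loc + wk + tod + seas | 0
rp   <- c(pf = "n", cl = "n", loc = "n", wk = "n", tod = "n", seas = "n")

# Fit with 1,000 Halton draws (halton = NA requests mlogit's default Halton sequences)
fit_1k <- mlogit(spec, Electr, rpar = rp, R = 1000, halton = NA, panel = TRUE)

# Refit with twice as many draws to verify that the estimates are stable
fit_2k <- mlogit(spec, Electr, rpar = rp, R = 2000, halton = NA, panel = TRUE)

# Compare coefficients; large shifts suggest too few draws were used
round(cbind(draws_1000 = coef(fit_1k), draws_2000 = coef(fit_2k)), 3)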