Research That Matters (January 17 - 20, 2008)
|Saturday, January 19, 2008: 4:00 PM-5:45 PM|
|Blue Prefunction (Omni Shoreham)|
|[OTH] Introduction to Econometric Time Series Models|
|Speakers/Presenters:|Shenyang Guo, PhD, University of North Carolina at Chapel Hill|
|Judith Wildfire, MA, MPH, University of North Carolina at Chapel Hill|
|Rebecca L. Green, MSW, University of North Carolina at Chapel Hill|
Single-subject designs are popular among social work researchers for monitoring clients' trajectories of change and for evaluating process and outcome differences under various intervention and policy conditions. Although econometric time series models (ETSM) are similar to single-subject designs and rest on a more rigorous foundation of statistical theory (Nugent, Sieppert, & Hudson, 2001), applications of such models are sparse in the social work literature. This workshop aims to fill this gap by providing an overview of ETSM.
The workshop will focus on the following topics:

1. Definition and illustration of time series data. In statistics, a time series is a sequence of data points, typically measured at consecutive times spaced at equal intervals (Wikipedia, 2007). A time series may be outcome data observed at successive times for an individual, or for a group of individuals (e.g., an agency or a county). A distinctive feature of such data is that missing data are not permitted at any time point.

2. The stationarity assumption. Most time series models assume a stationary process, that is, a stochastic process whose probability distribution at a fixed time is the same across all time points (Wikipedia, 2007). In social work applications, this means the time series has a fixed mean and variance at all times.

3. Why the method is useful in social work research. Child welfare researchers often use an individual-level approach (such as survival analysis) to analyze foster care outcomes. An aggregate-level approach using time series data can address important research questions that individual-level analysis cannot, simply because "individual- and aggregate-level dynamics are inseparable" (Wulczyn, 1996).

4. Curve smoothing approaches. To remove random fluctuations and thereby reveal trends in time series data, analysts often need to smooth the series before examining trends graphically. We will present results of a comparative study of four smoothing techniques: simple moving averages, weighted moving averages, exponentially weighted moving averages, and locally weighted regression with a tricube kernel, or lowess (Fox, 2000; Pindyck & Rubinfeld, 1998; SAS, 1995), and conclude that the lowess approach is the best of the four.

5. Statistical models. When modeling time series data with a limited number of units (say, n < 10), one may consider an autoregressive regression model, which assumes an error term that follows an autoregressive process of a given order, AR(p).
When modeling time series data with multiple units (n > 10), one may use a growth curve analysis, or hierarchical linear modeling (HLM), with either an autoregressive or an unstructured variance-covariance matrix of random effects.
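The growth curve (HLM) approach for multiple units can be written in standard two-level notation; the symbols below are illustrative, not taken from the abstract:

```latex
% Level 1: growth trajectory of unit i over time t, with AR(1) errors
y_{ti} = \pi_{0i} + \pi_{1i}\, t + e_{ti}, \qquad
e_{ti} = \rho\, e_{t-1,i} + u_{ti}

% Level 2: unit-level variation in intercept and slope,
% with W_i a unit covariate (e.g., an intervention indicator)
\pi_{0i} = \beta_{00} + \beta_{01} W_i + r_{0i}
\pi_{1i} = \beta_{10} + \beta_{11} W_i + r_{1i}
```

The choice between an autoregressive and an unstructured variance-covariance matrix concerns the joint distribution of the random effects $(r_{0i}, r_{1i})$ and the level-1 errors: the unstructured form estimates every variance and covariance freely, while the autoregressive form imposes the AR(1) pattern above.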
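The four smoothing techniques compared in item 4 above can be sketched in a few lines of code. The following is a minimal illustration, not the workshop's own software; function names, window sizes, and the `frac` parameter are our choices for exposition:

```python
import numpy as np

def simple_ma(y, window):
    """Simple moving average: unweighted mean over a sliding window."""
    kernel = np.ones(window) / window
    return np.convolve(y, kernel, mode="valid")

def weighted_ma(y, weights):
    """Weighted moving average with user-supplied weights (normalized to sum to 1)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return np.convolve(y, w[::-1], mode="valid")

def exp_weighted_ma(y, alpha):
    """Exponentially weighted moving average in its recursive form."""
    out = np.empty(len(y), dtype=float)
    out[0] = y[0]
    for t in range(1, len(y)):
        out[t] = alpha * y[t] + (1 - alpha) * out[t - 1]
    return out

def lowess_tricube(x, y, frac=0.5):
    """Locally weighted linear regression with a tricube kernel (lowess)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    n = len(x)
    k = max(2, int(np.ceil(frac * n)))  # points in each local neighborhood
    fitted = np.empty(n)
    for i in range(n):
        d = np.abs(x - x[i])
        idx = np.argsort(d)[:k]               # k nearest neighbors of x[i]
        h = d[idx].max() or 1.0               # neighborhood half-width
        w = np.clip((1 - (d[idx] / h) ** 3) ** 3, 0, None)  # tricube weights
        # weighted least-squares line through the neighborhood
        b = np.polyfit(x[idx], y[idx], deg=1, w=np.sqrt(w))
        fitted[i] = np.polyval(b, x[i])
    return fitted
```

The moving-average variants trade off responsiveness against smoothness through the window and weights; lowess instead refits a local line at every point, which is why it tracks curved trends better than any fixed-window average.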
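For item 5, an autoregressive regression model with AR(1) errors can be estimated by the iterated Cochrane-Orcutt procedure. The sketch below is one common estimator for this model class, under our own simplifying assumptions (AR(1) rather than general AR(p), a single predictor, numpy only), not the workshop's implementation:

```python
import numpy as np

def ar1_regression(y, x, n_iter=20):
    """Regression with AR(1) errors via iterated Cochrane-Orcutt.

    Returns (beta, rho): beta[0] is the intercept, beta[1] the slope,
    rho the estimated first-order autocorrelation of the errors.
    """
    y = np.asarray(y, dtype=float)
    X = np.column_stack([np.ones(len(y)), np.asarray(x, dtype=float)])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]  # start from plain OLS
    rho = 0.0
    for _ in range(n_iter):
        e = y - X @ beta
        rho = (e[:-1] @ e[1:]) / (e[:-1] @ e[:-1])  # AR(1) coefficient of residuals
        # quasi-difference the data to whiten the error term
        y_star = y[1:] - rho * y[:-1]
        X_star = X[1:] - rho * X[:-1]
        beta = np.linalg.lstsq(X_star, y_star, rcond=None)[0]
    return beta, rho
```

A general AR(p) version follows the same idea, regressing the residuals on their first p lags before quasi-differencing.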
We will use a real example, an evaluation of North Carolina's Title IV-E Waiver Demonstration program, to illustrate all of the points listed above. The most important finding of the study is that the Waiver counties did not differ substantially from the non-Waiver counties in the shape of the change trajectory or the rate of change; without the Waiver Demonstration, these counties might have experienced a faster increase in the number of entries into placement than the non-Waiver counties.