Thursday 18th July 2013, 14:00 - 15:30, Room: No. 22

Attrition in Panel Surveys - Prevention and Correction 2

Convenor: Mr Ulrich Krieger (MEA, Max Planck Institute for Social Law and Social Policy)
Coordinator 1: Dr Peter Lugtig (Department of Methods and Statistics, Utrecht)
Coordinator 2: Dr Galit Gordoni (Israel Central Bureau of Statistics)

Session Details

Focus:
In recent years, longitudinal surveys have become increasingly popular for studying change and stability in a wide variety of phenomena within society. The availability of accurate and valid data on change and stability is therefore paramount. This session focuses on one of the most important sources of error and bias: drop-out, or attrition.

Correction:
Differential non-response in panel surveys can lead to large errors over the course of the study, as the survey estimates may become biased. Research into the nature and causes of attrition in various panel surveys typically finds that attrition often coincides with major events in the lives of respondents. In addition, fieldwork procedures, survey innovations, and the social and psychological characteristics of respondents, as well as their survey experiences, are thought to be factors generating longitudinal non-response error. Knowledge of these correlates can serve as a basis for methods to correct for attrition.

Prevention:
Rather than correcting for attrition, it is better to prevent attrition from occurring. Evidence on which procedures are successful in limiting attrition is, however, scarce. We invite contributions that assess different ways to prevent attrition and evaluate their consequences for attrition rates and attrition bias.

Examples of contributions sought for this session include, but are not limited to:
• Experimental studies contrasting different survey procedures and their effect on attrition
• Examples of best practices in preventing attrition from occurring
• Examples of studies where attrition has been corrected by post-survey adjustments
• Papers that propose new statistical methods for correcting or ameliorating the effects of attrition
We particularly invite methodological papers that incorporate experimental data.

4th co-organizer: Dr. Jon Miller, jondmiller@umich.edu, University of Michigan
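
To make the correction theme above concrete, the following is a minimal, purely illustrative sketch of one such correction approach: inverse probability weighting based on wave-1 correlates of attrition. The simulated data, variable names, and coefficients are hypothetical and do not come from any study in this session.

```python
# Illustrative only: weight wave-2 respondents by the inverse of their
# estimated probability of having stayed in the panel, using wave-1
# correlates of attrition. All data below are simulated.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5000

# Hypothetical wave-1 covariates thought to correlate with later attrition.
wave1 = pd.DataFrame({
    "age": rng.integers(18, 80, n),
    "moved": rng.binomial(1, 0.15, n),
    "low_interest": rng.binomial(1, 0.25, n),
})

# Simulated wave-2 response: movers and low-interest respondents drop out more often.
p_respond = 1 / (1 + np.exp(-(1.5 - 1.0 * wave1["moved"] - 0.8 * wave1["low_interest"])))
wave1["responded_w2"] = rng.binomial(1, p_respond)

# Model the probability of remaining in the panel from the wave-1 correlates.
X = wave1[["age", "moved", "low_interest"]]
model = LogisticRegression().fit(X, wave1["responded_w2"])
p_hat = model.predict_proba(X)[:, 1]

# Attrition weights for the cases still in the sample at wave 2.
stayed = wave1["responded_w2"].to_numpy() == 1
weights = 1.0 / p_hat[stayed]
print(pd.Series(weights).describe())
```

In practice such attrition weights would be combined with the design weights and typically trimmed or calibrated before analysis.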


Paper Details

1. Using response propensity models to target fieldwork interventions: evidence from an experiment with interviewer incentives on the Millennium Cohort Study

Mr Andrew Cleary (Ipsos MORI)
Ms Lisa Calderwood (Centre for Longitudinal Studies, Department of Quantitative Social Science, Institute of Education)
Mr Giulio Flore (Ipsos MORI)
Professor Richard Wiggins (Centre for Longitudinal Studies, Department of Quantitative Social Science, Institute of Education)

There is considerable evidence that using respondent incentives can increase response rates and, in the context of a longitudinal survey, reduce attrition (Laurie and Lynn, 2009). However, there is relatively little evidence on the effect of interviewer incentives on preventing non-response and attrition.

Incentives can also be a relatively inefficient use of fieldwork resources as many participants are willing to take part without them. One of the major advantages of longitudinal surveys is that data from prior waves can be used to target fieldwork resources on sample members who are least likely to take part. However, whether longitudinal surveys are able to do this effectively depends on how accurately future participation can be predicted.

This paper will present evidence from a randomised field experiment on the UK Millennium Cohort Study evaluating the effectiveness of interviewer incentive payments compared with non-monetary incentives.
Response propensity models, using both survey data and para-data from prior waves, were used to identify sample members who were least likely to co-operate. These cases were randomly assigned to different fieldwork batches, and the incentive payment to interviewers was introduced on the second batch.

We will also evaluate the accuracy of the model predictions using fieldwork outcomes and discuss the benefit of using response propensities, rather than simpler indicators such as prior response history.
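
As a purely illustrative sketch (not the authors' model, data, or cut-off), a response propensity model used to flag low-propensity cases for a targeted intervention might look as follows; the prior-wave predictors, simulated co-operation outcome, and 20% threshold are assumptions.

```python
# Illustrative only: fit a response propensity model on prior-wave survey
# data and paradata, check its predictive accuracy, and flag the cases
# least likely to co-operate for targeted fieldwork effort.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 4000

# Hypothetical prior-wave survey data and paradata.
X = pd.DataFrame({
    "calls_last_wave": rng.poisson(3, n),
    "ever_refused": rng.binomial(1, 0.1, n),
    "changed_address": rng.binomial(1, 0.2, n),
})
logit = 2.0 - 0.3 * X["calls_last_wave"] - 1.2 * X["ever_refused"] - 0.6 * X["changed_address"]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))  # simulated co-operation at the next wave

# Check how accurately co-operation can be predicted, analogous to evaluating
# the model predictions against actual fieldwork outcomes.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("AUC:", round(roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]), 3))

# Flag, say, the 20% of cases with the lowest predicted propensity for the
# intervention batch (e.g. the interviewer incentive).
propensity = model.predict_proba(X)[:, 1]
flagged = propensity <= np.quantile(propensity, 0.20)
print("Cases flagged for the intervention:", int(flagged.sum()))
```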



2. Measuring mode preferences: issues and possibilities

Dr Olena Kaminska (ISER, University of Essex)

Longitudinal studies provide possibilities to tailor the design of future waves to respondents' preferences, among them the mode of interview. Assigning the preferred mode to each respondent can potentially be efficient in terms of effort (e.g. number of contact attempts) and advantageous for retaining respondents. Our study explores which measure of mode preference performs best in terms of measurement error and predictive power for participation in different modes. As part of the Innovation Panel (UK) we asked respondents about their most and least preferred interview modes (face-to-face (F2F), postal, telephone or web), and to rate the chance that they would participate in the next wave if contacted via each mode (except F2F, as this was the mode of the interview). By comparing responses when the preference questions were followed or preceded by the rating questions, we find that the former, but not the latter, are prone to lower reliability. In the following wave respondents were randomly assigned to an F2F protocol or a web with F2F follow-up protocol. All three types of questions perform well in predicting participation in both protocols. Respondents with a preference for F2F and telephone modes have higher participation in F2F mode in the following wave. Web/F2F preference predicts not only participation at the web contact, but also at the follow-up F2F contact: while respondents who prefer web are more likely to participate in web, they are least likely to participate in a conditional F2F follow-up. The potential use of these findings in practice is discussed.
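
A rough sketch of the kind of comparison the abstract describes, with invented data and simplified preference categories standing in for the Innovation Panel measures:

```python
# Illustrative only: compare participation rates by stated mode preference
# under two hypothetical fieldwork protocols. All data are simulated.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 2000

df = pd.DataFrame({
    "preferred_mode": rng.choice(["f2f", "telephone", "postal", "web"], n,
                                 p=[0.4, 0.2, 0.15, 0.25]),
    "protocol": rng.choice(["f2f_only", "web_then_f2f"], n),
})

# Simulate participation: web-preferers respond best to the web-first protocol,
# other respondents do slightly better when approached face to face.
p_participate = np.where(
    df["preferred_mode"] == "web",
    np.where(df["protocol"] == "web_then_f2f", 0.85, 0.65),
    np.where(df["protocol"] == "f2f_only", 0.80, 0.70),
)
df["participated"] = rng.binomial(1, p_participate)

# Participation rate by stated preference and assigned protocol.
print(df.pivot_table(index="preferred_mode", columns="protocol",
                     values="participated", aggfunc="mean").round(2))
```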


3. Auxiliary Observations? The Utility of Cross-Sectional Data to Adjust Estimates in Panel Data Subject to Attrition

Ms Veronica Roth (Penn State University)
Professor David Johnson (Penn State University)

In longitudinal data analysis, the researcher must evaluate whether selection bias or panel conditioning biases estimates derived from later waves. Imputing the data for wave non-response makes better use of all available information than alternative approaches to attrition. Because less is known about the attritors in the missing wave, the imputed values may not be optimal. When cross-sectional responses are added to a panel re-interview, including information from these cross-sections to inform the imputation process may yield more accurate and reliable imputed values. This study uses two datasets to explore the utility of imputing combined panel and cross-sectional data to yield more accurate estimates. The General Social Survey, a nationally representative personal-interview survey of US adults, used a panel design in 2006 with follow-up surveys in 2008 and 2010. Both follow-up waves contain a fresh cross-section of respondents. The Marital Instability over the Life Course Study (MILC) began in 1980 and had five subsequent waves, ending in 2000. The final wave also included a cross-section of respondents eligible for the 1980 survey. Using these two surveys, we will analyze the ability of cross-sections to act as auxiliary data to adjust panels from 2 to 20 years apart. Using demographic and attitudinal measures from these surveys, we will compare imputations performed with and without the refresher cross-section. We will discuss the implications of our findings for both future data collection regarding refresher panels and post-collection adjustment using multiple imputation.
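
The stacking-and-imputing idea can be sketched roughly as follows. This is a single-imputation illustration with invented variables; a full analysis would use multiple imputation, and scikit-learn's IterativeImputer here merely stands in for whatever imputation model the authors use.

```python
# Illustrative only: stack panel cases (with attrition at wave 2) and a fresh
# cross-section fielded alongside wave 2, then impute the missing wave-2
# values using information from both sources. All data are simulated.
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(7)

# Panel: wave-1 measures for everyone, wave-2 attitude missing for attritors.
panel = pd.DataFrame({
    "age": rng.integers(18, 80, 1500).astype(float),
    "attitude_w1": rng.normal(0, 1, 1500),
})
panel["attitude_w2"] = 0.7 * panel["attitude_w1"] + rng.normal(0, 0.5, 1500)
attrited = rng.random(1500) < 0.3
panel.loc[attrited, "attitude_w2"] = np.nan

# Fresh cross-section: no wave-1 measure, but it carries information about
# the wave-2 distribution among all eligible respondents.
cross = pd.DataFrame({
    "age": rng.integers(18, 80, 800).astype(float),
    "attitude_w1": np.nan,
    "attitude_w2": rng.normal(0, 1, 800),
})

# Impute the stacked file so the cross-section informs the panel imputations.
stacked = pd.concat([panel, cross], ignore_index=True)
stacked[:] = IterativeImputer(random_state=7).fit_transform(stacked)

# Wave-2 mean among the original panel cases after imputation; the paper's
# comparison would repeat this with and without the cross-section rows.
print(round(stacked.loc[:1499, "attitude_w2"].mean(), 3))
```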


4. Strategies for sustaining panel participation: Lessons from the Longitudinal Study of American Youth

Dr Jon Miller (University of Michigan, USA)

The Longitudinal Study of American Youth (LSAY) is a 26-year-old longitudinal study of a national probability sample of public school students. In 1987, a national probability sample of 3,000 7th-grade students and 3,000 10th-grade students was selected, and the study can account for 97% of these original participants more than two decades later. Currently, 5,100 of the original students are still eligible participants (approximately 900 have died, become incapacitated, or left the country), and approximately 4,000 of these young adults complete annual questionnaires for the study. This paper will outline a set of strategies that have been used in the LSAY to build and sustain this response rate.

The 26-year experience of the LSAY is noteworthy because it has bridged data collection methods ranging from printed questionnaires administered in school classrooms, to telephone interviews, to online data collection for 75% of respondents. Over this same period, incentives for participation have evolved from classroom lotteries for a Sony Walkman to various levels of cash payment. Currently, a quarterly research report written at a quality-newspaper level has increased participation and loyalty to the study. It is important for longitudinal study directors to examine the effectiveness of retention methods and to share their findings.