



Tuesday 18th July, 16:00 - 17:30 Room: N AUD4


Improving Response in Government Surveys 2

Chair: Dr Kerry Levin (Westat)
Coordinator 1: Dr Jocelyn Newsome (Westat)
Coordinator 2: Ms Brenda Schafer (Internal Revenue Service)

Session Details

Government-sponsored household surveys continue to face historically low response rates (Groves, 2011; de Heer, 1999). Low response rates can introduce nonresponse bias, since nonresponse is rarely uniform across groups or subgroups of interest. As a result, many government surveys focus heavily on maximizing response, often at a high cost (Tourangeau & Plewes, 2013; Stoop et al., 2010). For example, the cost of the U.S. census has more than doubled over the past several decades, from $16 per household in 1970 to $98 per household in 2010 (2010 Census GAO Report).
Survey methodologists have sought to address the problem of nonresponse by implementing a number of interventions to increase respondent engagement, minimize burden, and reduce nonresponse bias. Methodologists have suggested a variety of intervention strategies, including offering surveys in multiple modes; limiting survey length; emphasizing official sponsorship; offering incentives; making multiple, distinct communication attempts; utilizing respondent-centric design; and targeting messaging to the intended audience (Dillman et al., 2014). However, the effectiveness of these strategies varies widely, and researchers continue to explore how best to implement them in a cost-effective manner.
This session will explore innovative approaches for increasing response in government-sponsored household surveys. Researchers are invited to submit papers discussing experiments or pilot studies on any of the following topics:
• Multimode survey design, including optimizing for mobile devices, to encourage response;
• Improving response through the use of incentives, targeted messaging, or multiple distinct forms of follow-up;
• Interviewer techniques that encourage response;
• Use of paradata to improve response;
• Impact of survey length on response; and
• Attempts to increase response from hard-to-reach populations of interest.

References
2010 Census: Preliminary Lessons Learned Highlight the Need for Fundamental Reforms. GAO-11-496T. Washington, D.C.: U.S. Government Accountability Office, April 6, 2011.
De Heer, W. (1999). International response trends: Results of an international survey. Journal of Official Statistics, 15(2), 129.
Dillman, D. A., Smyth, J. D., & Christian, L. M. (2014). Internet, phone, mail, and mixed-mode surveys: the tailored design method. John Wiley & Sons.
Groves, R. M. (2011). Three eras of survey research. Public Opinion Quarterly, 75(5), 861-871.
Stoop, I., Billiet, J., Koch, A., & Fitzgerald, R. (2010). Improving survey response: Lessons learned from the European Social Survey. John Wiley & Sons.
Tourangeau, R. & Plewes, T. J. (Eds.). (2013). Nonresponse in social science surveys: a research agenda. National Research Council.

Paper Details

1. An examination of seasonal response rates during a year-long mail data collection using an ABS frame
Mr Eric Jodts (Westat)
Dr Sharon Lohr (Westat)

Most surveys are limited to a shortened time frame for reasons of expediency and timeliness. The few surveys with longer field periods often release all sample up front, while comparisons of independent surveys are often subject to confounding variables such as differences in protocol, content, sponsors, materials, instruments, and so forth. Such factors hinder a straightforward evaluation of seasonal response rates. Our study gauged respondent annoyance with a number of neighborhood environmental factors. Since these attitudes can vary by season, our design required a 12-month field period, with periodic sample release, to capture potential seasonal variation. Our population of interest was households exposed to a certain level of noise around selected U.S. airports, so an address-based sample (ABS) was most appropriate. Using an ABS frame, we released sample in six waves, each two months apart, to provide a full year of data. The mail protocol for each wave took six weeks from start to finish, providing us with daily responses over the data collection year. Here we present the results of our study, showing the impact of season on response rates in an ABS mail survey where all other variables remained constant. Our findings were not always in alignment with conventional wisdom. We will provide considerations for timing mail data collection or adjusting sample to address potential seasonal fluctuations.


2. Improving response rates in the German Health Survey GEDA
Mrs Jennifer Allen (Robert Koch Institute)
Mr Matthias Wetzstein (Robert Koch Institute)
Mr Patrick Schmich (Robert Koch Institute)

The ‘German Health Update’ (GEDA) study is a population-based cross-sectional health interview survey of the German adult population, conducted by the Robert Koch Institute (RKI) on behalf of the German Federal Ministry of Health. GEDA is one of the three components of the national German Federal Health Monitoring program operated by the RKI. Three GEDA waves were carried out as telephone interview surveys between 2009 and 2012, in which more than 60,000 respondents participated.

Because of decreasing response rates and increasing costs, a design switch was made for the most recent GEDA wave (GEDA 2014/2015-EHIS). An informed sequential mixed-mode data collection design was developed based on the experience gained in two pilot studies (2012 and 2014). Mixed-mode is here defined as using one survey instrument with two or more data collection modes. In GEDA 2014/2015-EHIS, two different modes of data collection were used: a self-administered web questionnaire (SAQ-Web) and a self-administered paper questionnaire (SAQ-Paper).

In the first pilot study, conducted in 2012, two different mixed-mode designs were tested: a sequential and a concurrent mixed-mode design. In a feasibility study in 2014, the design was further modified and optimized. Additionally, different incentives were tested in this survey to find the most effective and cost-efficient strategy to be implemented in future GEDA surveys. The sample was divided into four groups: the first group received a postage stamp with the invitation letter; the second group was guaranteed a €10 voucher after participating; the third group could take part in a lottery for a €50 voucher; and the last (control) group was not offered any incentive. The results showed that incentives have a positive effect on participation rates. However, the different incentives had very different effects depending on the sociodemographic characteristics of the respondents.

In total, 24,824 questionnaires were completed in the GEDA 2014/2015-EHIS survey (45.3% via web and 54.7% via paper). The response rate was 27.6%, compared with 22% in GEDA 2012.

In this presentation we will cover the development of the GEDA study from a telephone survey to a mixed-mode survey. We will focus on the different measures taken to improve response rates that were tested in the two pilot studies and discuss their implications for the main GEDA 2014/2015-EHIS study.


3. Impact of a shortened follow-up survey on improving response to a government household survey
Mr Pat Langetieg (IRS)
Ms Brenda Schafer (IRS)
Dr Saurabh Datta (IRS)
Dr Jocelyn Newsome (Westat)
Ms Jennifer McNulty (Westat)
Ms Hanyu Sun (Westat)
Dr Kerry Levin (Westat)

In the face of historically low response rates, researchers have explored whether shorter surveys can reduce nonresponse bias without compromising data quality. Although some studies have shown that lengthier surveys encourage satisficing, item nonresponse, or increased “Don't Know” responses (Malhotra, 2008; Deutskens et al., 2004; Galesic & Bosnjak, 2009), a 2011 meta-analysis conducted by Rolstad et al. found that the impact of questionnaire length on data quality was inconclusive.
In this study, we explore whether using a shortened follow-up survey with non-respondents increases response to an IRS survey without negatively impacting data quality. The IRS Individual Taxpayer Burden (ITB) Survey measures the time and money respondents spend to comply with tax filing regulations. It is conducted annually with about 20,000 respondents. Data collection includes six contacts: (1) a prenote from the IRS, (2) a survey packet, (3) a reminder postcard, (4) a survey packet, (5) a reminder phone call or postcard, and (6) a survey packet. The survey itself comprises 2 critical items that ask directly about time and money, along with 24 other items that provide context to respondents by asking more generally about the tax filing process. The shortened version includes only the time and money items and eliminates the contextual items.
For the 2013 ITB Survey, we conducted an experiment in which half of non-respondents received the original, 23-item “long” version of the survey and half received the “short” version for the sixth contact. The results from this experiment suggested that the short form did not impact data quality. Surprisingly, the short form also did not improve survey response (Newsome et al., 2015). We hypothesized that response rates may not have improved because the short form was administered as the final contact, when it is more difficult to encourage response. In addition, both surveys were mailed in the same-sized envelope, which meant that respondents might not even have realized they were receiving a shorter version of the questionnaire.
Because the ITB Survey is conducted annually, we had the opportunity to refine our methodology for the 2014 administration. For this administration, all non-respondents received the “short” survey at the sixth contact. Additionally, at the fourth contact, half of non-respondents within select hard-to-reach groups were sent the “short” survey. In an attempt to better distinguish the “short” survey from other mailings in 2014, it was sent in a smaller envelope and included the phrase “short survey” on the survey packet.
In this paper, we examine the impact of the short version on overall response rates, as well as its impact on specific populations that have been historically underrepresented in the survey (e.g., younger adults, low-income respondents, and parents with young children). We also assess the impact of the shortened version on data quality. In particular, we are interested in whether removing the contextual items results in respondents giving higher or lower estimates of their time and money burdens as compared to the long version.