ESRA 2017 Programme

Tuesday 18th July      Wednesday 19th July      Thursday 20th July      Friday 21st July

Thursday 20th July, 14:00 - 15:30 Room: Q2 AUD1 CGD

Assessing the Quality of Survey Data 5

Chair Professor Jörg Blasius (University of Bonn)

Session Details

This session will provide a series of original investigations on data quality in both national and international contexts. The starting premise is that all survey data contain a mixture of substantive and methodologically-induced variation. Most current work focuses primarily on random measurement error, which is usually treated as normally distributed. However, there are many kinds of systematic measurement error, or more precisely, many different sources of methodologically-induced variation, all of which may strongly influence the “substantive” solutions. These sources include response sets and response styles, misunderstandings of questions, translation and coding errors, uneven standards between the research institutes involved in the data collection (especially in cross-national research), item and unit nonresponse, as well as faked interviews. We consider data to be of high quality when the methodologically-induced variation is low, i.e. when differences in responses can be interpreted on the basis of theoretical assumptions in the given area of research. The aim of the session is to discuss the different sources of methodologically-induced variation in survey research, how to detect them, and the effects they have on substantive findings.

Paper Details

1. Attitudes towards Surveys, Evaluation of Survey Experience and Respondents’ Susceptibility to Nonresponse in Online Panels
Mr Niklas Jungermann (University of Kassel, Germany)
Professor Volker Stocké (University of Kassel, Germany)

The prevalence of nonresponse is an important determinant of survey data quality, and the more respondents and non-respondents differ with respect to the variables of interest, the more strongly this is the case. This is particularly true when the two reasons for missing information, item and unit nonresponse, are caused by the same factors. In the present paper we analyze the effect of respondents’ general evaluation of surveys on their disposition to leave questions unanswered and to (temporarily) drop out of a panel study. Since general attitudes toward surveys have proven to be multi-dimensional, we want to find out which evaluation criteria are most relevant for causing item and unit nonresponse. Some of the attitude dimensions are related to the respondents’ self-interest (e.g., the enjoyment of survey participation), whereas others express surveys’ provision of a collective good (e.g., being important for society). In research on panel data quality it is important to know to what degree, and with which consequences, the participation experience in prior panel waves shapes respondents’ survey attitudes and thus causes their nonresponse behavior in subsequent waves. In this sense, by shaping the survey experience of respondents, survey researchers may be at least partly responsible for the data quality of their own surveys. Our paper aims to answer the following questions: First, we investigate whether respondents’ attitudes towards surveys cause (a) item nonresponse and (b) (temporary) dropout from a panel study. Second, we want to identify the structure of survey attitudes and which of the evaluative dimensions is most relevant for nonresponse. The third question is whether respondents’ survey experience from prior waves influences the completeness, and thus the quality, of data in subsequent panel waves. Are attitudes towards surveys the mediating factor in this relationship?
We utilize data from waves nine to twelve of the GESIS Panel to answer these research questions. The GESIS Panel is a bimonthly probability-based panel, representative of the German adult population. As a mixed-mode survey, the GESIS Panel offers participation via an online or a mail questionnaire. Attitudes towards surveys are measured once, in the ninth wave, whereas respondents evaluate their survey experience at the end of each questionnaire. Item nonresponse is measured as the percentage of unanswered questions among all questions administered to all respondents in the four panel waves. Because of its special significance, we analyze the income question separately. Unit nonresponse occurs when a respondent fails to participate in a wave at all.
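The item-nonresponse measure described above can be sketched in a few lines of Python. This is a minimal illustration with made-up answer data, not the GESIS Panel's actual coding:

```python
# Sketch: item-nonresponse rate as the share of administered questions left
# unanswered. `None` marks a skipped item; the data are hypothetical.

def item_nonresponse_rate(answers):
    """Return the fraction of unanswered items in one respondent's record."""
    missing = sum(1 for a in answers if a is None)
    return missing / len(answers)

respondent = [3, None, 5, 1, None, 2, 4, 4]   # 8 administered items, 2 skipped
rate = item_nonresponse_rate(respondent)      # 2 / 8 = 0.25
```

In a panel setting, the same rate would be computed over all questions a respondent received across the four waves.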
Our results indicate that attitudes toward surveys consist of surveys’ evaluated value, the respondent’s enjoyment of participation, and its burden. These three dimensions are found to be predictive of item as well as unit nonresponse. However, the strength of the effect differs between the three sub-dimensions and the different kinds of nonresponse. We furthermore find that the evaluated survey experience in the prior wave has substantial effects on the disposition to nonresponse in the following wave.

2. Does Undercoverage on the United States Address-based Sampling Frame Translate to Coverage Bias?
Dr Ashley Amaya (RTI International)
Dr Stephanie Zimmer (RTI International)
Ms Katherine Morton (RTI International)
Dr Rachel Harter (RTI International)

Address-based sampling (ABS) using the United States Postal Service’s (USPS) Computerized Delivery Sequence (CDS) file has become increasingly popular over the last two decades. It has significantly reduced the cost of field studies because it eliminates (or, at least, greatly reduces) the need for field listing (Iannacchione 2011). It also offers the potential for multimode data collection, resulting in higher response rates and lower nonresponse bias than random-digit dial (RDD) (Brick et al. 2011).

While most residential addresses are included on the CDS, it still suffers from some undercoverage: some addresses are simply missing from the frame, others are purposely dropped from the frame prior to sample selection, and geocoding error may also exclude addresses when they are incorrectly geocoded outside of the selected geographies. Based on the data collection mode and the difference between housing unit counts on the CDS and the Census Bureau’s 2014 U.S. Population Estimates count of housing units, we estimate that national surveys suffer from 6.0 to 10.5 percent undercoverage, with the potential for much higher undercoverage in smaller geographies.

Coverage bias is a function of both the undercoverage rate and the difference between the covered and uncovered units on the variable of interest. While significant research has been conducted to assess CDS coverage rates (Battaglia et al. 2008; Iannacchione et al. 2003; Link et al. 2008; McMichael et al. 2010; Montaquila et al. 2009; Montaquila et al. 2011; O’Muircheartaigh et al. 2007), minimal work has been published on the effect of undercoverage on coverage bias. In this paper, we assess (1) for a given undercoverage rate and sample size, how different the uncovered units would need to be to significantly change (i.e., bias) the estimates, (2) which geographies and types of variables have a higher risk of coverage bias, and (3) whether different weighting techniques correct the identified bias.
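The decomposition referred to above (coverage bias as the product of the undercoverage rate and the difference between covered and uncovered units on the variable of interest) can be written as a one-line sketch; the function and variable names are illustrative, not from the paper:

```python
def coverage_bias(undercoverage_rate, mean_covered, mean_uncovered):
    """Bias of the covered-population mean relative to the full population:
    bias = undercoverage_rate * (mean_covered - mean_uncovered)."""
    return undercoverage_rate * (mean_covered - mean_uncovered)

# With 10% undercoverage and a 5-point gap between covered and uncovered
# units, the covered-only estimate is off by half a point.
bias = coverage_bias(0.10, 60.0, 55.0)  # 0.5
```

The sketch makes the paper's point visible: even substantial undercoverage produces no bias when covered and uncovered units do not differ on the variable of interest.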

We use a combination of survey data and Monte Carlo simulation models, varying the coverage rate and the difference between covered and uncovered units, to estimate the true value. The models are informed by point estimates and distributions from a variety of surveys as well as Census housing unit estimates and expert knowledge. We run separate models to account for differences by mode, geography, type of variable, sample size, and weighting scheme.

Once the simulations are complete, we will use replicated t-tests and chi-squared tests to determine which combinations of undercoverage and difference produce biased estimates. In some cases, these results may be validated by survey data where interviewers listed segments to ensure uncovered units were added to the frame prior to sample selection. Based on the size of the undercoverage rate and difference, we will rank each variable and geography by risk of bias. Ideally, the rankings will identify clusters of variables or geographies with similar risk of bias. Base-weighted scenarios will be compared to weighted scenarios to identify whether or not the weights correct the identified coverage bias.
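A stripped-down version of such a Monte Carlo exercise can be sketched with the Python standard library alone. The setup, sample sizes, and distributions below are assumptions for illustration, not the authors' models:

```python
import random
import statistics

def simulate_bias(undercoverage, diff, n=1000, reps=200, seed=42):
    """Average bias of a sample mean drawn from a frame that misses an
    `undercoverage` share of units whose true mean differs by `diff`.
    Illustrative setup only: covered units ~ N(0, 1), uncovered ~ mean `diff`."""
    rng = random.Random(seed)
    # Full-population mean: covered units have mean 0, uncovered units mean `diff`.
    true_mean = undercoverage * diff
    biases = []
    for _ in range(reps):
        # Samples can only come from covered units, so each replicate's mean
        # misses the contribution of the uncovered stratum.
        sample = [rng.gauss(0.0, 1.0) for _ in range(n)]
        biases.append(statistics.fmean(sample) - true_mean)
    return statistics.fmean(biases)

# The average bias is about -undercoverage * diff: either a larger undercoverage
# rate or a larger covered-uncovered gap pushes estimates further from the truth.
```

In the paper's design, replicated significance tests across such runs would then flag which (undercoverage, difference) combinations yield detectable bias.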

3. Data quality in PIAAC – International standards and national procedures
Mr Markus Bönisch (Statistics Austria)

The Programme for the International Assessment of Adult Competencies (PIAAC) is an international OECD survey that compares key competencies of adults (16-65 years) in 33 countries. To obtain high-quality data and to ensure comparability between the participating countries, the international PIAAC Consortium produced an elaborate set of standards and guidelines for almost all aspects of the national implementation. In Austria, a comprehensive set of procedures was put in place for the PIAAC fieldwork. Some of the international requirements for data collection were not feasible within the national context and required certain adaptations to achieve successful fieldwork. The following fieldwork procedures will be discussed:
• Sampling (person registry vs. household sample)
• Interviewer payment, motivation and support
• Respondent motivation and incentives
• Quality control and validation
o Data checks
o Validation by phone
o Validation by registry data

The paper will present PIAAC and its methodological background, describe key fieldwork measures in Austria, and discuss how specific measures relate to international data collection standards. Reflecting on this national experience, some of the possibilities and limitations of national compliance with international standards will be discussed.
Furthermore, the multidimensional assessment of quality in PIAAC (response rate, nonresponse bias, compliance with technical standards and guidelines) will be discussed and related to national contexts. The experience of PIAAC shows relevant quality problems in two countries (Greece, Russian Federation), mostly due to interviewers or survey organizations simplifying their tasks in various ways (e.g. faking interviews). The conclusions will address open issues regarding data quality in cross-national surveys (translation, cross-cultural differences, sampling/weighting) and the balance between international standards for comparability and the degrees of freedom needed to reconcile national differences.

4. Fabricated Interviews in Survey Research
Professor Jörg Blasius (University of Bonn)

The quality of survey data is a function of the three levels of actors involved in survey projects: the respondents, the interviewers, and the employees of the survey research organisations. I argue that task-simplification dynamics can occur at each of these levels, and the effect of such task simplification is a reduction in data quality. The precise form of task simplification differs across the three levels. For respondents, it might take the form of using only specific parts of the available response options; for interviewers, it can take the form of asking primarily the demographic questions and fabricating plausible responses for the remainder; for employees of research institutes, it can take the form of near-duplication of entire questionnaires. Indeed, data fabrication by interviewers as well as by employees is clearly more common than most researchers would expect.
In this paper, I concentrate on the differentiation between respondents simplifying their tasks, often referred to in the literature as “strong satisficing” (e.g., respondents applying straight-lining across a large set of items), and interviewers fabricating (parts of) their interviews. In contrast to the relevant literature, I will show that, at least in some countries, simplification is mainly performed by the interviewers and not by the respondents. For this purpose, I use data from the European Social Survey.
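A simple screen for the near-duplication described above can be sketched as follows. The pairwise match-rate measure, the 90% threshold, and the records are hypothetical illustrations, not the detection method used in the paper:

```python
from itertools import combinations

def match_rate(a, b):
    """Share of identical answers between two equal-length answer records."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def flag_near_duplicates(records, threshold=0.9):
    """Return index pairs of records agreeing on at least `threshold` of items.
    A toy screen: real duplicate analyses use more refined similarity measures."""
    return [(i, j) for (i, a), (j, b) in combinations(enumerate(records), 2)
            if match_rate(a, b) >= threshold]

records = [
    [1, 2, 3, 4, 5, 1, 2, 3, 4, 5],
    [1, 2, 3, 4, 5, 1, 2, 3, 4, 1],  # differs on one item only: 90% match
    [5, 4, 3, 2, 1, 5, 4, 3, 2, 1],
]
pairs = flag_near_duplicates(records)  # flags the first two records as a pair
```

Flagged pairs would then be inspected manually, since high agreement can also arise legitimately, for instance from widespread straight-lining.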