ESRA 2017 Programme

Wednesday 19th July, 11:00 - 12:30 Room: N 101

Comparative Survey Research Methodology using the European Social Survey

Chair Dr Kathrin Thomas (City, University of London)
Coordinator Professor Rainer Schnell (City, University of London)

Session Details

The European Social Survey (ESS) is one of the largest comparative survey
projects worldwide, incorporating over 35 countries. The ESS data are available
online for academic research free of charge. Furthermore, the ESS is committed
to transparency and provides extensive documentation of how the data are collected,
from the planning stage and questionnaire design to field work, data cleaning, and analysis.
By providing detailed field work documentation by country and year, as well
as contact sheets, paradata, and related materials, the ESS allows researchers to address
core questions of survey research methodology in a comparative framework.

While the large number of countries incorporated in this project provides an
exceptionally wide variety of contexts, cultural diversity, and country-specific
peculiarities in how surveys are conducted, it also poses a particular
challenge for comparative survey research with regard to data quality and
comparability. It is important to study which differences occur, why they occur,
and what effect they have on the comparability and quality of the ESS data.

We thus invite papers that study cross-country differences in survey
methodology using the ESS data. We are particularly interested in all
aspects relating to the conceptual framework of the Total Survey Error (TSE),
which describes the statistical error properties of sample survey statistics,
including non-response, design effects, interviewer effects, paradata, and response styles.

The aim of the session is to provide empirical analyses that evaluate
whether potential issues within the TSE framework are prominent in the ESS data,
and also to offer informed suggestions on how to improve future ESS data collections.

The papers submitted to this session may also be relevant for other comparative
survey projects beyond the ESS, which may face similar challenges and allow
equivalent analysis strategies. In addition, the panel may attract representatives
of field organisations seeking advice on how to improve the data collection process.

Disclaimer: This session is aimed at rigorous, state-of-the-art quantitative
approaches to comparative survey methodology. We thus recommend that papers
focusing on qualitative approaches to pretesting, comparative questionnaire
design, and language equivalence be submitted to other panels.

Paper Details

1. The Impact of Formal Question Characteristics on Design Effects in the ESS
Dr Kathrin Thomas (City, University of London)
Professor Rainer Schnell (City, University of London)

The Total Survey Error (TSE) framework reflects that the precision of a survey estimate depends not only on problems related to sample size and non-response, but also on non-sampling errors. While it is hardly feasible to apply the TSE model in practice, since it is difficult to separate individual error components in the available data, it is nevertheless possible to estimate some elements of the TSE framework indirectly.

One approach is the use of design effects: the design effect is the increase in the estimated standard errors compared with those estimated on the basis of a simple random sample of the same size (Kish, 1965). Design effects can be estimated from the interviewer intra-class correlation coefficient, which captures the homogeneity of responses within the set of respondents assigned to an interviewer. However, when looking at an individual survey it is difficult to determine whether response homogeneity is due to the spatial (social) homogeneity of the PSU or to the interviewer.
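The design-effect logic above can be sketched numerically. The snippet below is an illustration only, not the authors' code: it uses Kish's approximation deff = 1 + (m - 1)ρ for an average interviewer workload m and intra-class correlation ρ, with invented values rather than ESS estimates.

```python
# Illustrative sketch of the Kish (1965) design effect for interviewer
# clustering; workload and rho values below are invented, not ESS estimates.

def design_effect(avg_workload: float, rho: float) -> float:
    """Kish approximation deff = 1 + (m - 1) * rho for equal workloads m."""
    return 1.0 + (avg_workload - 1.0) * rho

def effective_sample_size(n: int, deff: float) -> float:
    """Nominal sample size deflated by the design effect."""
    return n / deff

# Even a small interviewer intra-class correlation inflates variance
# substantially when each interviewer conducts many interviews:
deff = design_effect(avg_workload=20, rho=0.02)   # 1 + 19 * 0.02 = 1.38
print(round(deff, 2), round(effective_sample_size(2000, deff)))
```

With these invented numbers, a nominal sample of 2,000 respondents carries the precision of only about 1,450 independent observations, which is why interviewer homogeneity matters for comparability across countries.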

Variance induced by the interviewer is at least partially due to formal question characteristics (Fowler, 1991; Fowler & Mangione, 1990; Mangione, Fowler, & Louis, 1992; Schnell & Kreuter, 2005): for instance, studies have identified differences across open- versus closed-ended, factual versus attitudinal, sensitive versus non-sensitive, and easy versus difficult questions.

This paper relies on design effects calculated for 28 ESS countries across all ESS rounds, which we have merged with a classification of the formal question characteristics of the ESS core module based on an improved scheme of the above-mentioned attributes. The data are analysed with multi-level models predicting interviewer homogeneity.

The results indicate varying degrees of design effects across ESS countries and rounds, which point to strong and worrying interviewer effects.

2. Comparing the quality of fieldwork execution of individual-name, household and address samples using internal criteria - comparative analysis based on European Social Survey data
Dr Piotr Jabkowski (University of Poznan, Poland)
Dr Piotr Cichocki (University of Poznan, Poland)

In countries participating in the European Social Survey, three main types of survey-sample have been used: (1) individual-name samples, (2) household samples and (3) address samples. This distinction stems from the characteristics of survey frames available in those countries, and exerts influence on the fieldwork phase of research. While individual-name frames allow for sample-selection of individuals uniquely identified by name, household frames involve the necessity of within-household selection of target persons among the individuals inhabiting the sample-selected households; furthermore, frames consisting of addresses of buildings require a randomised selection of apartments followed by within-household selection of individuals.

However, the main challenge for fieldwork execution of address and household samples lies in the limited capacity for effective control over the quality of interviewers' work, especially with respect to the selection of target persons. Supervision is constrained by the fact that, unlike in the case of individual-name samples, it is not enough to ascertain that the interview was conducted with the designated person; one must also corroborate that the interviewer selected the person that should have been selected. Thus, in household and address samples there is a markedly higher risk of illegal substitution, i.e., the practice (prohibited in the ESS) of replacing the persons that should be selected with those that are more available (e.g., by being at home more often) and characterised by higher levels of participation-readiness. The risk of illegal substitution also occurs in individual-name samples; yet, given that the respondent is known by name, such substitution must involve outright cheating on the part of the interviewer.

Our paper aims to explore the relationship between sample type and the quality of fieldwork execution. The empirical analysis is based on the first six waves of the European Social Survey. Sample quality (prevalence of substitutions) is assessed using the procedure proposed by Sodeur (1996) and Kohler (2007), which evaluates the statistical significance of the difference between the observed fraction of women living with a heterosexual partner in two-person households and the yardstick fraction of 50% of women living in such households. We will present a meta-analysis of the data, following Borenstein et al. (2007), to compare the selection bias and effect sizes occurring in the three groups of countries distinguished by their sample types. We will also investigate the relationship between selection bias and survey outcome rates, demonstrating the different character of this relationship in samples of different types. We shall provide evidence that illegal substitutions are much more prevalent in address and household samples than in individual-name samples. The use of the former therefore requires supervision not only of the quality of interviews conducted, but also of the standards of within-unit selection of target individuals by interviewers.
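The Sodeur/Kohler criterion described above can be sketched as a simple two-sided test of the female share against 50%. The following snippet is a hypothetical illustration with invented counts; it is not the authors' code and does not use ESS data.

```python
# Hypothetical sketch of the Sodeur (1996) / Kohler (2007) internal
# criterion: among respondents in two-person households of heterosexual
# partners, a correctly executed within-household selection should yield
# women in roughly 50% of cases. All counts below are invented.
from math import sqrt, erf

def selection_bias_z(n_women: int, n_total: int) -> float:
    """Z statistic for the observed female share against p = 0.5."""
    p_hat = n_women / n_total
    se = sqrt(0.25 / n_total)  # binomial standard error under p = 0.5
    return (p_hat - 0.5) / se

def two_sided_p(z: float) -> float:
    """Two-sided normal p-value computed via the error function."""
    return 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))

z = selection_bias_z(n_women=640, n_total=1100)
print(round(z, 2), two_sided_p(z) < 0.05)  # a clear excess of women
```

A significant surplus of women among such respondents suggests interviewers tended to interview whichever partner was at hand (more often the woman) rather than the randomly designated target person.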

3. Effects of the Number of Contact Attempts on Survey Quality and Costs in the European Social Survey
Ms Tanja Kunz (Leibniz Institute for the Social Sciences, Mannheim)
Mr Marek Fuchs (Darmstadt University of Technology)

Response rates have been declining over the years. In order to compensate for potential nonresponse bias, high-quality academic and government surveys have adapted and further developed their field work procedures. In recent years, adaptive designs (Wagner, 2008) and responsive designs (Peytchev et al., 2010) have been suggested in order to target sample units belonging to underrepresented groups. However, many large-scale surveys still adopt a more general field work strategy: often, the overall number of contact attempts is increased in order to reduce nonresponse and nonresponse bias. Several studies have demonstrated (Heerwegh et al., 2007; Kreuter et al., 2014) that an increasing number of contact attempts helps boost response rates. Nevertheless, even after multiple contact attempts, post-stratification and raking procedures are deemed necessary to compensate for nonresponse bias, because increasing response rates do not guarantee a linear decline in nonresponse bias. Moreover, multiple contacts may attract already overrepresented groups, so that despite the additional field work effort, nonresponse bias may remain stable or even intensify. Consequently, researchers have to decide whether additional contact attempts actually pay off in terms of nonresponse bias reduction.
This paper is based on data from the European Social Survey (ESS), a biennial face-to-face survey of the general population in more than 30 participating countries. We used information from the contact forms on which the outcome of each interviewer visit has been recorded. Previous analyses using socio-demographic variables (Fuchs et al., 2013) provided preliminary evidence that additional contact attempts generally increase response rates but, at the same time, also have the potential to increase nonresponse bias. In this paper, we assessed the effects of multiple contact attempts on nonresponse bias for a set of substantive variables and tested whether and to what extent elevated response rates actually contribute to a reduction of nonresponse bias in attitudinal variables.

Findings showed that in recent years response rates have declined and additional contact attempts yield smaller increases in response rates. Although a higher number of contact attempts increases response rates, which in turn results in lower nonresponse bias, the relationship between response rates and nonresponse bias is rather weak. Both the increase in response rates due to additional contact attempts and the reduction of nonresponse bias due to increased response rates are less pronounced in later contact attempts (from the fifth contact attempt onwards). Further analyses take a closer look at changes in the effort (number of contacted cases) required per completed interview over the course of the first few contact attempts. Preliminary results suggest that the field work effort per completed interview increases dramatically in later contact attempts. The ultimate purpose of this presentation is a better understanding of the cost-benefit ratio of additional contact attempts and nonresponse bias reduction.
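The effort-per-complete idea in the abstract amounts to simple arithmetic: divide the number of cases contacted at a given attempt by the number of new completed interviews it produced. The figures below are invented for illustration and are not ESS results.

```python
# Invented illustration of field work effort per completed interview by
# contact attempt: later attempts contact many remaining cases but yield
# few new completes, so the cost per complete rises sharply.

def contacts_per_complete(contacted: int, new_completes: int) -> float:
    """Contacted cases needed per newly completed interview."""
    return contacted / new_completes

# (attempt number, cases contacted at this attempt, new completed interviews)
schedule = [(1, 10000, 3000), (3, 4000, 800), (5, 2500, 250)]
for attempt, contacted, completes in schedule:
    print(attempt, round(contacts_per_complete(contacted, completes), 1))
```

Under these invented figures the effort triples between the first and fifth attempt, which is the kind of cost-benefit pattern the paper examines against the accompanying reduction in nonresponse bias.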

4. No-opinions in anti-immigration attitudes across European countries
Dr Aneta Piekut (Sheffield Methods Institute, University of Sheffield)

International surveys are unique sources of data for comparing opinions and behaviours between countries, as well as for assessing the validity of the measurement tools applied. No-opinion answer options – ‘Don’t know’ – reduce the pressure to express non-existent attitudes among some respondents; yet some respondents might still feel uncomfortable revealing their lack of attitudes.

Taking European Social Survey wave 7 (2014/15) as an example, this paper will discuss the comparability of the attitudinal measures used by looking at the percentage of ‘Don’t know’ responses to particular questions measuring anti-immigration attitudes. Specifically, I will consider whether the same people – in terms of socio-demographics (e.g. age, gender, education), experiences with difference (e.g. having contact with people of different ethnic background, living in ethnically mixed neighbourhoods), and other attitudes (e.g. satisfaction with governmental institutions, political trust) – express non-attitudes, and how this pattern differs across European countries.

The paper also explores the potential contribution of country-level differences in recent immigration streams and in the minority ethnic groups residing in particular countries to the variation in no-opinions. Finally, using classification techniques, I develop a typology of people depending on the type of no-opinion in their attitudinal responses.

5. Survey response bias and gender ideology: A multi-level study on cross-country differences in gender-of-interviewer effects
Miss Dragana Stojmenovska (University of Amsterdam)
Dr Stephanie Steinmetz (University of Amsterdam)

The present study employs the European Social Survey (ESS) 2010 to investigate gender-of-interviewer effects on reported gender role attitudes across countries. The central question examined is: Are the said effects stronger in countries where gender issues are more salient?

While the study of interviewer-gender effects on reported gender ideology is not new, it has been subject to a number of limitations. First, research outside the United States has been strikingly scarce; the few exceptions are isolated studies in Australia, Mexico, Morocco, and the Netherlands. To the best of the authors’ knowledge, the present study is the first Europe-wide study. Second, due to the financial costs associated with conducting face-to-face surveys, most studies have employed telephone surveys instead; others have made use of experiments and, only recently, face-to-face surveys. The result has been a limited number of respondents and interviewers, mostly students, altogether decreasing the potential to generalise findings. The large, nationally representative samples of the European Social Survey show more promise in this respect.

Third, and most relevant for this study, past research has looked at interviewer-gender effects in isolation, within single countries. Given that gender issues are not equally salient across countries, it is possible that this macro context interacts with the micro context, i.e., the interviewer-respondent interaction, ultimately strengthening or weakening response effects. In societies where gender inequalities are more prominent, respondents may respond more strongly to the interviewer's gender. Only recently has this been acknowledged conceptually, and it is yet to be examined empirically.

Answering the present question is important for understanding how such reporting bias might affect substantive conclusions, particularly in light of the increasing use of cross-country face-to-face surveys for comparing countries or ethnic groups. If, for instance, the effect of male interviewers eliciting less egalitarian responses is stronger in countries with more inequality, one might find that respondents in these countries hold less egalitarian views. In actuality, however, this finding would (partly) be a function of contextual inequalities.

Based on social desirability theory and power relations theory, respectively, this study hypothesised that female interviewers elicit more egalitarian gender role attitudes than male interviewers, and that gender-of-interviewer effects are stronger for female respondents. Furthermore, we expected that female interviewers (as opposed to male interviewers) would elicit more egalitarian gender role attitudes especially in countries where gender issues are more salient, i.e., countries where gender inequalities are greater. Random effects models were estimated to explain between-country variation in reported gender ideology. None of the hypotheses was confirmed, thereby bringing good news to the interviewer bias front.