Friday 21st July, 11:00 - 12:30 Room: N 101


Meta-Analysis in Survey Methodology

Chair Professor Michael Bosnjak (GESIS – Leibniz Institute for the Social Sciences)
Coordinator Professor Katja Lozar Manfreda (University of Ljubljana, Faculty of Social Sciences)

Session Details

In a nutshell, meta-analysis can be described as a set of statistical methods for aggregating, summarizing, and drawing inferences from collections of thematically related studies. The key idea is to quantify the size, direction, and/or strength of an effect, and to cancel out the sampling errors associated with individual studies. Meta-analytic techniques have become the standard methods for aggregating the results of thematically related studies in the health and behavioural sciences. They can be used to describe a research field, to test and/or compare theories at a high level of abstraction, and to derive conclusions about the effectiveness of interventions.
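To make the key idea concrete, the following minimal sketch (in Python, with made-up effect sizes and sampling variances chosen only for illustration) shows inverse-variance pooling together with a DerSimonian-Laird random-effects estimate, one standard pooling procedure in this literature:

    import numpy as np

    # Hypothetical effect sizes from five thematically related studies
    # (e.g., response rate differences) and their sampling variances.
    y = np.array([0.10, 0.05, -0.02, 0.08, 0.12])
    v = np.array([0.004, 0.002, 0.006, 0.003, 0.005])

    # Fixed-effect (inverse-variance) pooling: precise studies get more
    # weight, and sampling errors cancel out as studies accumulate.
    w = 1.0 / v
    theta_fe = np.sum(w * y) / np.sum(w)

    # DerSimonian-Laird estimate of the between-study variance tau^2.
    k = len(y)
    Q = np.sum(w * (y - theta_fe) ** 2)           # heterogeneity statistic
    C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (k - 1)) / C)

    # Random-effects pooling: tau^2 is added to each sampling variance.
    w_re = 1.0 / (v + tau2)
    theta_re = np.sum(w_re * y) / np.sum(w_re)
    se_re = np.sqrt(1.0 / np.sum(w_re))

    print(f"pooled effect (random effects): {theta_re:.3f} +/- {1.96 * se_re:.3f}")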
Despite the exponentially growing number of primary studies in survey methodology, the use of meta-analysis to synthesize this body of knowledge remains scarce. Only about 40 meta-analyses on survey methodology topics currently exist, which roughly equals the annual output of meta-analyses in top-tier journals in the health and behavioural sciences. The few well-known and often-cited meta-analyses cover issues such as survey (non)response, the validity/reliability of scales, and the survey measurement of specific concepts.
The overall aim of this session is to promote the use of meta-analysis in survey methodology by encouraging authors (a) to contribute papers on methodological advances and tools in the area of meta-analysis relevant for survey methodology and (b) to present the most recent meta-analytic findings in the area of, or relevant for, survey methodology. Authors are specifically encouraged to submit meta-analyses on the determinants of survey representativeness and/or on explaining survey errors and biases.

Paper Details

1. Meta-analysis in Survey Methodology
Mr Gregor Čehovin (Faculty of Social Sciences, University of Ljubljana)
Professor Michael Bosnjak (GESIS – Leibniz Institute for the Social Sciences)
Professor Katja Lozar Manfreda (Faculty of Social Sciences, University of Ljubljana)

Relevance: Several meta-analyses already exist in different thematic areas of survey methodology and are of considerable importance, as reflected in their high citation counts and their generalization of evidence. However, there is a lack of systematization in the selection of topics and, to our knowledge, existing meta-analyses in survey methodology tend to adopt methods that were primarily developed for other fields, such as psychology, medicine and pharmacy, education, and ecology. No established approach for stimulating the use of meta-analysis in survey methodology exists, which calls for additional attention. Our paper presents a systematic review of meta-analyses that were previously conducted in survey methodology. The objectives are to 1) systematically identify previous meta-analyses in survey methodology, 2) classify them according to the thematic areas addressed, 3) identify gaps in research, 4) analyze the approaches and quality of these meta-analyses, and 5) investigate avenues for additional meta-analyses in survey methodology and approaches to increase their quality.

Methods: Our data are based on a systematic search, finalized in October 2016, which identified 36 eligible meta-analysis articles from 135 database sources. We structure the effect sizes of the identified meta-analyses according to the seven main categories of Total Survey Error (TSE). In addition, we classify the intervention and moderator variables of the identified meta-analyses according to the characteristics of the primary studies. This enables us to identify which research problems and questions could, in principle, have been addressed in the observed meta-analyses and which actually were. We are also interested in the approaches used by survey methodologists, as well as the quality of the performed meta-analyses. For this purpose, we extract data from the meta-analyses to summarize how the respective research questions are translated into inclusion criteria, how the procedures for identifying and selecting primary studies are performed and reported, which data on effect sizes and potential moderators are extracted, which analysis procedures are pursued, and how the findings are interpreted.
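As a purely illustrative sketch of the classification step, hypothetical extraction records can be tallied against the seven TSE categories like this (the records below are invented, not drawn from our database):

    from collections import Counter

    # The seven main TSE categories used for classification.
    TSE_CATEGORIES = [
        "validity", "measurement error", "processing error",
        "coverage error", "sampling error", "nonresponse error",
        "adjustment error",
    ]

    # Hypothetical extraction records, one per identified meta-analysis.
    records = [
        {"title": "Incentives and response rates", "tse": "nonresponse error"},
        {"title": "Scale length and reliability", "tse": "measurement error"},
        {"title": "Mode effects on sensitive questions", "tse": "measurement error"},
    ]

    counts = Counter(r["tse"] for r in records)
    for category in TSE_CATEGORIES:
        print(f"{category:>18}: {counts.get(category, 0)}")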

Results: Preliminary results show that the thematic areas of the meta-analyses cover only two TSE categories: measurement error and nonresponse error. The thematic areas falling under the remaining TSE categories (validity, processing error, coverage error, sampling error, adjustment error) are not yet covered by existing meta-analyses in survey methodology. As regards the approaches and quality of existing meta-analyses, there are differences in adherence to reporting standards: some meta-analyses follow them less meticulously, while others were conducted before such standards were elaborated.

Added value: The main contribution of our paper is to systematically present the state of the art of meta-analyses in survey methodology, identify gaps in research, and elaborate on the quality of these meta-analyses. We aim to further popularize meta-analysis in survey methodology, a method that has evolved into a well-respected approach for deriving evidence-based conclusions about causal and correlational relationships in the social, behavioral, health, and economic sciences.


2. Are Web Surveys More Successful in the US? A Multilevel Meta-Analysis Investigating Web Response Rate Experiments across Countries
Miss Jessica Wengrzik (GESIS, Mannheim, Germany)
Mrs Katja Hanke (GESIS, Mannheim, Germany)
Mr Ronald Fischer (University of Wellington, New Zealand)
Mr Michael Bosnjak (GESIS, Mannheim, Germany)

A long-standing discussion in survey research concerns which survey modes (web versus telephone, web versus mail, web versus face-to-face) yield high response rates and adequate data quality. In this study, we applied a 3-level v-known meta-analysis to examine the impact of external variables at the country level that predict differences in choosing web-based surveys over other modes. In total, we had 113 effect sizes (defined as the response rate difference between web surveys and the comparison mode) reported in 97 studies from 10 countries. We hypothesized that the following country-level indicators predict differences in our effect size. As cultural dimensions, we added Hofstede's Individualism and Uncertainty Avoidance. We hypothesized that Individualism is negatively associated with the response rate difference and Uncertainty Avoidance positively associated with it. As country-level indicators related to technology, we added internet usage and telephone usage, predicting a negative association between internet usage and the response rate difference and a positive association with telephone usage. Regarding demographic variables at the country level, we added the share of the population aged 65 and older as a positive predictor of the response rate difference. In addition, we expected a positive relationship between population density, GDP, and acceptance of the web mode.
Using a multilevel approach, we aim to explain the 20% of unexplained variability located at the country level. We robustly showed that the response rate difference indeed depends on cultural dimensions: individualism significantly predicted response rate differences in the expected direction. We could furthermore show that population density and GDP significantly influence the response rate difference, while Hofstede's uncertainty avoidance, internet and telephone usage, and an older population did not. Implications for the application of different modes to optimize response rates are discussed.
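The full three-level v-known model is beyond a short illustration, but its core ingredients, response rate differences weighted by their known sampling variances and regressed on a country-level moderator, can be sketched as follows (all numbers are hypothetical, and the two-level weighted regression below is a deliberate simplification of the model used in the paper):

    import numpy as np

    # Hypothetical response rate differences (web vs. comparison mode),
    # their known ("v-known") sampling variances, and a standardized
    # country-level moderator (e.g., Hofstede individualism).
    d = np.array([-0.12, -0.08, -0.15, -0.05, -0.10, -0.02])
    v = np.array([0.0020, 0.0015, 0.0030, 0.0010, 0.0025, 0.0012])
    individualism = np.array([-1.2, -0.5, 0.1, 0.6, 1.0, 1.4])

    # Inverse-variance weighted least squares meta-regression.
    X = np.column_stack([np.ones_like(individualism), individualism])
    W = np.diag(1.0 / v)
    XtWX = X.T @ W @ X
    beta = np.linalg.solve(XtWX, X.T @ W @ d)
    se = np.sqrt(np.diag(np.linalg.inv(XtWX)))

    print(f"intercept: {beta[0]:.3f} (SE {se[0]:.3f})")
    print(f"individualism slope: {beta[1]:.3f} (SE {se[1]:.3f})")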


3. How to Improve Data Quality with Interviewer Training? A Meta-Analytical Approach
Mrs Jessica Wengrzik (GESIS, Mannheim, Germany)
Professor Michael Bosnjak (GESIS, Mannheim, Germany)

Relevance & Research Question: The aim of this meta-analysis is to (1) explore whether interviewer training can significantly improve interviewee cooperation as well as data and paradata quality, and (2) identify which approaches used to train survey interviewers have been most successful.
We focus on the role of interviewer training in improving cooperation rates, interviewer error rates, unit nonresponse rates, socially desirable responding, as well as interviewer reliability and the accuracy of respondent ratings.
In 2016, telephone and face-to-face surveys accounted for more than half of all surveys worldwide (ESOMAR, 2016). In many cases, these surveys serve as the basis for political and economic decision-making. Many authors have shown that there is a strong link between interviewer qualification and data quality (Billiet, 1988; Dahlhamer, 2010; Olson, 2007). An adequate way to qualify interviewers is through interviewer training. Many large survey projects, such as PIAAC (PIAAC, 2014) or the ESS (Loosveldt et al., 2014), expect well-trained interviewers, and survey institutes provide them. But which characteristics make a training successful is still a "black box".
Interviewer training approaches are heterogeneous in method, content, and length. This meta-analysis aims to answer the question of which interviewer training characteristics make a training successful, in order to derive advice for optimizing interviewer trainings and thereby increasing data and paradata quality. Concrete research questions are: How long should a successful interviewer training take? How much can anti-refusal training improve interviewee cooperation? What role do practice and feedback play compared with instruction only? Is online training as successful as on-site training? Does it make a difference whether face-to-face or telephone interviewers are trained?


Methods and Data: We identified almost 40 interviewer training experiments for this meta-analysis. Our effect size is data and paradata quality, operationalized through completion rates (15 studies), interviewer errors (12 studies), unit nonresponse rates (8 studies), socially desirable responding, and interviewer reliability and accuracy.
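For proportion-based outcomes such as completion rates, the per-study effect size and its sampling variance can be derived with the standard binomial approximation, as in this minimal sketch (the counts are hypothetical, not taken from the identified experiments):

    def rate_difference(x_t, n_t, x_c, n_c):
        """Completion-rate difference (trained minus control) and its
        sampling variance under the usual binomial approximation."""
        p_t, p_c = x_t / n_t, x_c / n_c
        d = p_t - p_c
        var = p_t * (1 - p_t) / n_t + p_c * (1 - p_c) / n_c
        return d, var

    # Hypothetical experiment: 420/600 completes with trained interviewers
    # versus 350/600 in the control condition.
    d, var = rate_difference(420, 600, 350, 600)
    print(f"effect size: {d:.3f}, SE: {var ** 0.5:.3f}")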


Added Value: The results of this meta-analysis provide advice for optimizing interviewer trainings to increase data and paradata quality.


4. A meta-analysis of the impact of survey question format on measurement quality
Miss Anna DeCastellarnau (Tilburg University / European Social Survey - Universitat Pompeu Fabra)

Survey questions are commonly used in the social and behavioral sciences. When designing a survey question, researchers need to make many different decisions about its format to arrive at the final question. For instance, decisions need to be made regarding the formulation of the request, the use of instructions, the number of answer options, the kind of verbal labels to use, or the visual layout.

Although the impact of different survey question formats on measurement quality has been studied in the literature, it is often difficult to extract best practices for questionnaire design from these studies. Conclusions from a single experiment on a set of survey questions with specific variations in format cannot easily be extrapolated to other surveys or questions.

Few studies to date have assessed the impact of each design feature of survey questions across a wide range of experimental data. Following previous research, I conduct a meta-analysis of Multitrait-Multimethod (MTMM) experimental studies.

In this meta-analytic study, measurement quality is used as the dependent variable. Estimates of the measurement quality of each experimental survey measure are obtained from the analysis of MTMM structural equation models. The design features of each experimental survey question are used as independent variables. The set of design features considered is derived from a review of the questionnaire design and survey methodology literature.
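In the true-score MTMM tradition (e.g., Saris and Gallhofer), measurement quality is commonly defined as the product of reliability and validity; the following minimal sketch illustrates that arithmetic with hypothetical coefficients, not estimates from this study:

    # Hypothetical coefficients from a fitted true-score MTMM model.
    reliability_coef = 0.90  # r: observed score <-> true score
    validity_coef = 0.95     # v: true score <-> trait of interest

    reliability = reliability_coef ** 2  # r^2
    validity = validity_coef ** 2        # v^2
    quality = reliability * validity     # q^2 = r^2 * v^2

    print(f"reliability: {reliability:.3f}, validity: {validity:.3f}, "
          f"quality: {quality:.3f}")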

Thus, compared to previous studies, this meta-analysis provides evidence based, on the one hand, on new experimental data and, on the other hand, on a larger variety of decisions about which response scale features to use.