
ESRA 2019 Program at a Glance


Challenges in Repeated Survey Measurements

Session Organisers: Mr Tobias Rettig (University of Mannheim)
Mrs Hannah Schwarz (RECSM-Universitat Pompeu Fabra)
Dr Jan Karem Höhne (University of Mannheim; RECSM-Universitat Pompeu Fabra)
Time: Wednesday 17th July, 09:00 - 10:30
Room: D33

Measuring respondents' attitudes, opinions, or behaviors is a widespread strategy in sociology, political science, economics, and psychology for investigating a variety of individual and social phenomena. Researchers are frequently interested in measuring attitudes, opinions, and behaviors not only at a single point in time but over time, to investigate how they change and develop. Repeated survey measurements are also of great importance in methodological research for evaluating the measurement quality (i.e., reliability and validity) associated with different question formats and/or survey modes. However, a particular problem with measurement repetitions in surveys is that subsequent measurements are not independent of prior measurements. For instance, respondents might remember the answers they have already given, which can bias parameter estimates.

We invite contributions based on experimental, quasi-experimental, and observational study designs that investigate the challenges of repeated survey measurements. This includes effects on respondents' response and completion behavior, as well as appropriate strategies to avoid, reduce, or correct for measurement error.

For this session, we especially welcome contributions on the following research areas:
- Cognitive response processes (e.g., information retrieval and memory effects)
- Future perspectives and developments (e.g., web and mobile web surveys)
- Measurement quality (i.e., reliability and validity)
- Survey experience (e.g., trained respondents)
- Survey mode (e.g., web and telephone surveys)
- Replications of empirical studies and findings
- Statistical approaches (e.g., error estimation and correction)
- Theoretical considerations on repeated survey measurements

Keywords: measurement error, measurement quality, memory effects, repeated survey measurements

Memory Effects in Repeated Survey Questions – Reviving the Empirical Investigation of the Independent Measurements Assumption

Ms Hannah Schwarz (Universitat Pompeu Fabra - RECSM) - Presenting Author
Dr Melanie Revilla (Universitat Pompeu Fabra - RECSM)

It is common to repeat survey questions in the social sciences, for example to estimate test-retest reliability or in pretest-posttest experimental designs. An underlying assumption is that the repetition of questions leads to independent measurements. Critics point to respondents' memory as a source of bias for the resulting estimates. Yet there is little empirical evidence showing how large memory effects are within the same survey, and none showing whether memory effects can be decreased through purposeful intervention during a survey. We aim to address both points using data from a lab-based web survey containing an experiment. We repeated one of the initial questions at the end of the survey (around 129 items later) and asked respondents whether they recalled their previous answer and to reproduce it. Furthermore, we compared respondents' memory of previously given responses between two experimental groups: a control group, in which regular survey questions were asked between the repetitions, and a treatment group which additionally received a memory interference task aimed at decreasing memory. We found that, after an average 20-minute interval, 60% of respondents were able to correctly reproduce their previous answer, of whom we estimate 17% did so due to memory. We did not observe a decrease in memory as the time interval between repetitions became longer. This poses a serious challenge to using repeated questions within the same survey. Moreover, the tested memory interference task reduced neither respondents' recall of their previously given answer nor the memory effect.
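The abstract does not spell out the estimation procedure behind the 17% figure. As a back-of-the-envelope illustration only (not necessarily the authors' estimator, and with one hypothetical input), such a share can be backed out by comparing the observed reproduction rate with an assumed reproduction rate absent memory:

```python
# Illustrative decomposition of correct answer reproduction into
# "stable response" and "memory" components. p_stable is a hypothetical
# assumption (e.g., it could be estimated from respondents who say they
# do NOT recall their earlier answer); this is not the authors' method.

p_correct = 0.60   # observed share reproducing the earlier answer (from the abstract)
p_stable = 0.50    # assumed share who would answer identically without remembering

# Share of correct reproductions attributable to memory rather than stability
memory_share = (p_correct - p_stable) / p_correct
print(f"Correct reproductions attributed to memory: {memory_share:.0%}")  # ~17%
```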


Investigating Respondents’ Ability to Recall Previous Responses to Different Types of Questions in a Probability-Based Online Panel

Mr Tobias Rettig (University of Mannheim) - Presenting Author
Dr Jan Karem Höhne (University of Mannheim; RECSM-Universitat Pompeu Fabra)
Professor Annelies Blom (University of Mannheim)

Measuring attitudes, behaviors, and beliefs over time is an important strategy for drawing conclusions about societal developments. Longitudinal study designs are also important for evaluating the measurement quality (i.e., reliability and validity) of data collection methods. However, a concern associated with repeated survey measurements is that memory effects can affect the precision of parameter estimates. So far, there is only a small body of research investigating respondents' ability to recall previous responses. We therefore investigate respondents' ability to recall their previous responses, varying the question type and the time between a question and its repetition.
This study is conducted in the German Internet Panel – a probability-based online panel representative of the German population – in the November 2018 wave and the subsequent January 2019 wave. To evaluate respondents' recall ability, we use an experimental design defined by question type (i.e., attitude, behavior, and belief) and by the time between measurements (from about 20 minutes to about two months). Furthermore, we employ follow-up questions asking whether respondents can recall their previous response, what their previous response was, and how confident they are about recalling it.
The preliminary data of the November 2018 wave indicate that after about 20 minutes, about 80% of the respondents report that they can recall their previous response and about 60% of them recall it correctly. Interestingly, respondents are more likely to correctly recall their response to behavior questions than to attitude or belief questions. In addition, respondents who give extreme responses are more likely to correctly recall their previous response.
Our initial findings indicate that a large number of respondents can recall their previous responses, irrespective of the question type. Thus, the precision of parameter estimates is a serious concern in studies with repeated survey measurements.
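As a rough sketch of how such recall rates can be tabulated by question type, something like the following could be used; all file and column names here are hypothetical, not taken from the study:

```python
import pandas as pd

# Hypothetical long-format recall data: one row per respondent x repeated item.
# Assumed columns: 'qtype' in {'attitude', 'behavior', 'belief'},
# 'answer_t1', 'recalled_answer', 'claims_recall' (bool).
df = pd.read_csv("recall_experiment.csv")  # hypothetical file

df["recall_correct"] = df["recalled_answer"] == df["answer_t1"]

# Share claiming to recall and share recalling correctly, by question type
summary = df.groupby("qtype").agg(
    claims_recall=("claims_recall", "mean"),
    recalls_correctly=("recall_correct", "mean"),
)
print(summary)
```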


Do You Know What You Did Last Summer? How Time Intervals between Panel Waves Affect the Reporting of Less Salient Events

Professor Reinhard Pollak (WZB Berlin Social Science Center and Freie Universität Berlin)
Dr Wiebke Schulz (WZB Berlin Social Science Center)
Mr Hans Gerhardt (WZB Berlin Social Science Center and Humboldt-Universität zu Berlin) - Presenting Author

In panel studies, we often want to find out what participants did between two panel waves. More specifically, we are interested in events that happened since the last panel interview. Crucial events, such as the birth of a child, are cognitively easy for respondents to report. For less salient events, however, it is cognitively more demanding to give a valid answer. In our paper, we analyze how often respondents report participation in further training activities since the last interview. Using data from the German National Educational Panel Study (NEPS), we first analyze whether temporary drop-outs from the annual surveys report fewer further training activities in the next wave, correcting for the selection bias of this group. Second, we take advantage of the fact that respondents are not interviewed in exactly the same month every year. We show how the time elapsed since the last interview affects the number of training activities reported by those who participated every year. Third, we make use of the fact that further training includes formal, nonformal, and informal training activities, which vary by duration, effort, and frequency. In doing so, we test whether more salient events, such as formal further training, are less prone to recall errors than less salient events, such as nonformal and, in particular, informal further training.
Our first results show that respondents underreport further training activities to a large extent when more time has elapsed since the last interview. Both temporary drop-outs and variation in fieldwork timing bias the reported number of further training activities. From a panel study perspective, these effects may invalidate longitudinal analyses of training participation and of returns to training participation. We show how context-related questions on further training reduce this bias, and we discuss alternative study designs that take the underreporting of less salient events into account.
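One simple way to probe the elapsed-time effect described above is a count regression of reported activities on the interview gap. A minimal sketch with hypothetical data and variable names (not the authors' actual NEPS analysis) follows:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical respondent-wave data; variable names are illustrative.
# 'n_trainings': number of further training activities reported this wave
# 'months_since_last': months elapsed since the previous interview
df = pd.read_csv("training_reports.csv")  # hypothetical file

# Poisson regression: does a longer gap predict fewer reported activities?
# A negative coefficient on 'months_since_last' would be consistent with
# underreporting of less salient events over longer recall periods.
model = smf.poisson("n_trainings ~ months_since_last", data=df).fit()
print(model.summary())
```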


Survey Experience and its Impact on Response Behaviour in Panel Surveys: Evidence from the GESIS Panel

Mrs Evangelia Kartsounidou (Aristotle University of Thessaloniki) - Presenting Author
Mrs Rebekka Kluge (GESIS – Leibniz Institute for the Social Sciences)
Dr Henning Silber (GESIS – Leibniz Institute for the Social Sciences)
Dr Tobias Gummer (GESIS – Leibniz Institute for the Social Sciences)

In panel surveys, respondents are asked to answer the same questions repeatedly to enable analyses of change in attitudes and behaviour. However, these surveys often face attrition and panel conditioning effects (Lynn, 2009). Panel conditioning, in particular, threatens the substantive conclusions drawn from panel data: changes in substantive measures obtained from these data may be the result of respondents changing their behaviour because of participating in the survey (i.e., the survey experience). At the same time, previous research on learning effects (Wright, 1936; Yelle, 1979) suggests that people gain task-related skills from repeatedly performing an action. With respect to answering surveys, repeated participation could increase respondents' ability to fill in a questionnaire and hence reduce the time needed to provide an answer. It seems reasonable to expect that completing surveys becomes less burdensome for respondents the more experience they gain. This would suggest positive effects on response quality and on the time respondents need to complete a survey. Unfortunately, our knowledge of how repeated survey experience in panel surveys affects response behaviour is still limited.
The main aim of our study is to explore how repeated participation in a panel influences response behaviour, adding to the sparse knowledge on learning effects and shedding light on the competing assumptions regarding survey experience. In our analyses, we investigate whether survey experience is associated with response times, emphasising the importance of learning effects, and whether survey experience further influences response quality in a panel survey. We base our analyses on the GESIS Panel, a probability-based panel in Germany with 25 waves, which gives us the opportunity to monitor learning effects within respondents across many waves and observe their impact on response behaviour. To measure response quality, we use a wide range of indicators, such as non-differentiation, non-substantive responses, and speeding.
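Response-quality indicators like those listed above can be computed from grid responses and paradata. A minimal sketch, with hypothetical variable names and an assumed speeding threshold (not the GESIS Panel's actual operationalisation), might look like this:

```python
import pandas as pd

# Hypothetical wave-level data; all names and thresholds are illustrative.
df = pd.read_csv("panel_wave.csv")  # hypothetical file
grid_items = [f"item{i}" for i in range(1, 9)]  # an assumed 8-item rating grid

# Non-differentiation: low within-grid variability suggests straightlining
df["nondiff"] = df[grid_items].std(axis=1)

# Speeding: flag completions faster than an assumed threshold,
# e.g. under 2 seconds per grid item (the cut-off is an assumption)
df["speeding"] = df["grid_seconds"] / len(grid_items) < 2.0

# Do the indicators vary with survey experience (waves completed so far)?
print(df.groupby("waves_completed")[["nondiff", "speeding"]].mean())
```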


Understanding Change in Time of Measurement Error Using Longitudinal Multitrait Multierror

Dr Alexandru Cernat (University of Manchester) - Presenting Author
Dr Daniel Oberski (University of Utrecht)

Longitudinal data offer the unique opportunity to investigate change over time and its causes. While this type of data is becoming more popular, there is limited knowledge regarding the measurement errors involved, their stability over time, and how they could bias estimates of change. In this paper we propose a new method, which we call multitrait multierror (MTME), to estimate multiple types of measurement error concurrently. The method combines an experimental design with latent variable modelling to disentangle random error, social desirability, acquiescence, and method effects. Using data collected in the Innovation Panel in the UK, we investigate the stability of these measurement errors and their impact on estimates of change. Initial results show that while social desirability exhibits very high stability over time, method effects exhibit very low stability.
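The abstract does not give the model's parameterisation. As a rough sketch under assumed notation (not necessarily the authors' exact specification), an MTME-style decomposition of an observed answer could be written as:

```latex
% Sketch of an MTME-style decomposition; the parameterisation and
% identification constraints here are assumptions, not the authors' model.
% Person i, trait t, question method/format j, wave w:
y_{itjw} = \tau_{itw}                        % true trait score at wave w
         + \lambda_{tj}\,\mathrm{SD}_{iw}    % social desirability
         + \gamma_{j}\,\mathrm{ACQ}_{iw}     % acquiescence
         + M_{ijw}                           % method effect
         + \varepsilon_{itjw}                % random error
```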