Thursday 18th July 2013, 11:00 - 12:30, Room: No. 22

Measurement in panel surveys: methodological issues 3

Convenor Ms Nicole Watson (University of Melbourne)
Coordinator 1 Dr Noah Uhrig (University of Essex)

Session Details

All surveys are affected by measurement error to some degree. These errors may arise from the interviewer, the respondent, the questions asked, the interview situation, data processing and other survey processes. Understanding measurement error is particularly important for panel surveys, where the focus is on measuring change over time. Measurement error in this context can lead to a serious overstatement of change; conversely, failure to recall events between two interviews may lead to a serious understatement of change. Nevertheless, assessing the extent of measurement error is not straightforward and may involve unit-record comparisons with external data sources, multiple measures within the same survey, multiple measures of the same individuals over time, or comparisons across similar cohorts who have had different survey experiences.
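To see why random measurement error overstates change, consider a minimal simulation sketch (illustrative only, not taken from any of the session papers; all values are arbitrary): two waves measure a perfectly stable true score with independent errors, yet the observed wave-to-wave differences suggest substantial change.

    import numpy as np

    rng = np.random.default_rng(42)
    n = 10_000                                # respondents
    true_score = rng.normal(50, 10, n)        # stable trait: no real change occurs

    # observed score in each wave = true score + independent measurement error
    sigma_e = 5.0
    wave1 = true_score + rng.normal(0, sigma_e, n)
    wave2 = true_score + rng.normal(0, sigma_e, n)

    observed_change = wave2 - wave1           # true change is exactly zero
    print(f"SD of observed change: {observed_change.std():.2f}")  # ~ sqrt(2) * sigma_e

The spread of observed change here reflects nothing but measurement error; an analysis that took it at face value would report change where none occurred.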


This session seeks papers on the nature, causes and consequences of measurement error in panel data and methods to address it in either data collection or data analysis. This might include (but is not limited to):
- Assessments of the nature and causes of measurement error
- Evaluations of survey design features to minimise measurement error (such as dependent interviewing)
- Examinations of the consequences of measurement error
- Methods to address measurement error in analysis.


Paper Details

1. Do we need the "neutral weather survey"? Measurement effects of the weather at interview day

Dr Claudia Schmiedeberg (Ludwig-Maximilians-University Munich)
Dr Jette Schröder (GESIS - Leibniz Institute for the Social Sciences)

Since the beginning of the nineties, a number of studies have indicated that weather conditions on the day of the interview can affect measurement, in particular measures of life satisfaction. In their seminal paper, Schwarz and Clore (1993) show higher reported life satisfaction on sunny days, a finding recently replicated by Kämpfer and Mutz (2011).
Measurement errors induced by weather conditions could cause difficulties in several types of studies: in cross-sectional analyses they would lead to an overestimation of regional differences; trend studies would suffer if the field period in one year coincided with a spell of fine weather; and in panel studies weather effects would inevitably result in an overestimation of change, since the weather on the day of the interview is bound to differ across panel waves for any individual respondent.
However, it is not evident that such measurement errors actually exist. One shortcoming of the studies mentioned above is that they are based on relatively small samples (from a few dozen up to 200 cases). We use data from the German Family Panel (pairfam), combined with local weather data for every respondent, to investigate whether weather effects on satisfaction measures can be replicated in a large sample (about 12,000 respondents). In addition to the cross-sectional analysis, we estimate fixed-effects regressions to model the effect of weather on individual changes in satisfaction over time.
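As a rough illustration of the fixed-effects approach (a sketch on simulated data; the variable names, effect size and panel structure are assumptions for this example, not the authors' specification or pairfam values), the within-person estimator removes stable individual differences, so only variation across waves, such as interview-day weather, identifies the effect:

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)

    # simulated panel: 1,000 respondents observed in 3 waves
    n, waves = 1_000, 3
    df = pd.DataFrame({
        "person": np.repeat(np.arange(n), waves),
        "wave": np.tile(np.arange(waves), n),
    })
    person_trait = rng.normal(7, 1, n)                  # stable individual level
    df["sunshine_hours"] = rng.uniform(0, 12, len(df))  # weather on interview day
    df["satisfaction"] = (
        person_trait[df["person"]]
        + 0.05 * df["sunshine_hours"]                   # hypothetical weather effect
        + rng.normal(0, 0.5, len(df))
    )

    # fixed-effects (within) estimator: demean outcome and regressor per person
    demeaned = df.groupby("person")[["satisfaction", "sunshine_hours"]].transform(
        lambda s: s - s.mean()
    )
    fe = sm.OLS(demeaned["satisfaction"], demeaned["sunshine_hours"]).fit()
    print(fe.params)  # recovers the within-person weather effect (~0.05)

In a real analysis the standard errors would also need to be clustered by respondent; that is omitted here for brevity.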


2. Quantifying the development of agreement among experts in Delphi studies

Mr Jurian Meijering (Wageningen University)
Dr Jarl Kampen (Wageningen University)
Dr Hilde Tobi (Wageningen University)

Background:
A Delphi study is a survey administered to a panel of experts in a number of rounds over a period of time. Usually, the aim is to achieve agreement and to identify common ground among the experts on a specific topic. Different indices for measuring agreement among experts exist, but they are rarely reported in Delphi studies.
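As one concrete example of such an index (chosen here purely for illustration; the abstract does not state which indices the authors compare), Kendall's coefficient of concordance W can be computed from an experts-by-objects rating matrix:

    import numpy as np
    from scipy.stats import rankdata

    def kendalls_w(ratings):
        """Kendall's W for an (experts x objects) rating matrix, without tie
        correction. W = 1 means all experts rank the objects identically;
        W near 0 means no agreement beyond chance."""
        ranks = np.apply_along_axis(rankdata, 1, ratings)  # rank objects per expert
        m, n = ranks.shape
        rank_sums = ranks.sum(axis=0)
        s = ((rank_sums - rank_sums.mean()) ** 2).sum()
        return 12 * s / (m ** 2 * (n ** 3 - n))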

Objective:
The objective of this study was to find out how different indices behave within and across the rounds of a Delphi study.

Method:
Different Delphi scenarios were simulated by systematically varying the number of objects to be rated, the number of experts, the distribution of object ratings, and the conformity level (the extent to which experts shift their ratings towards the group opinion of the previous round). Each scenario consisted of three rounds and was replicated 1000 times. For each replication, the level of agreement within each round was calculated using the different indices.
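A minimal sketch of one such scenario (the conformity mechanism and all parameter values here are assumptions for illustration, not the authors' design; it reuses the kendalls_w helper from the sketch above): after the first round, each expert's ratings move a fixed fraction of the way towards the previous round's group mean, and agreement is recomputed in every round.

    import numpy as np

    rng = np.random.default_rng(1)

    def simulate_delphi(n_experts=15, n_objects=8, conformity=0.5, rounds=3):
        """One simulated Delphi study; returns Kendall's W for each round."""
        ratings = rng.normal(5, 2, (n_experts, n_objects))  # round-1 ratings
        w_per_round = [kendalls_w(ratings)]                 # helper defined above
        for _ in range(rounds - 1):
            group_mean = ratings.mean(axis=0)               # previous-round opinion
            ratings = ratings + conformity * (group_mean - ratings)
            w_per_round.append(kendalls_w(ratings))
        return w_per_round

    # replicate the scenario 1000 times and average W per round
    w = np.array([simulate_delphi() for _ in range(1000)])
    print(w.mean(axis=0))  # agreement typically rises from round to round

Because every expert is pulled towards the same target profile, agreement approaches 1 as the conformity level approaches 1.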

Results:
Results showed that different indices, although based on the same data, suggested different levels of agreement.

Conclusion:
Researchers should decide a priori which agreement index to use in their Delphi study. Furthermore, researchers are advised to report the value of the chosen index for every Delphi round, so as to provide insight into how the level of agreement develops across rounds.