Thursday 18th July 2013, 09:00 - 10:30, Room: No. 22

Measurement in panel surveys: methodological issues 2

Convenor Ms Nicole Watson (University of Melbourne)
Coordinator 1 Dr Noah Uhrig (University of Essex)

Session Details

All surveys are affected by measurement error to some degree. These errors may occur due to the interviewer, the respondent, the questions asked, the interview situation, data processing and other survey processes. Understanding measurement error is particularly important for panel surveys where the focus is on measuring change over time. Measurement error in this context can lead to a serious overstatement of change. Further, recall effects of events between two interviews may lead to serious understatements of change. Nevertheless, assessing the extent of measurement error is not straightforward and may involve unit record level comparison to external data sources, multiple measures within the same survey, multiple measures of the same individuals over time, or comparisons across similar cohorts who have had different survey experiences.

This session seeks papers on the nature, causes and consequences of measurement error in panel data and methods to address it in either data collection or data analysis. This might include (but is not limited to):
- Assessments of the nature and causes of measurement error
- Evaluations of survey design features to minimise measurement error (such as dependent interviewing)
- Examinations of the consequences of measurement error
- Methods to address measurement error in analysis.


Paper Details

1. Measuring employment in panel surveys: A comparison of reliability estimates in HILDA and BHPS

Dr S.C. Noah Uhrig (Institute for Social & Economic Research)
Ms Nicole Watson (The Melbourne Institute for Applied Economic and Social Research)

An important use of large household panel surveys is the examination of inequality dynamics in society. For example, research on discrimination in employment focuses on sex or race differences in wages, job quality, mobility chances or status outcomes. A well-known problem, however, is that random measurement error can lead to attenuation bias in observed substantive coefficients. Moreover, there is mixed evidence on how panel conditioning might affect measures as panels age. One approach to assessing changing data quality in a panel context is to estimate the reliability of variables using the quasi-simplex Markov models initially formulated by Heise (1969) and Wiley and Wiley (1970). This approach relies on panel data with at least three time-points to estimate reliabilities from a measurement model incorporating latent true values. Comparing data from the Household, Income and Labour Dynamics in Australia Survey and the British Household Panel Survey, our research addresses the questions of whether, and under what conditions, the reliability of core employment measures changes over time. We further examine whether change in reliability is related to a number of covariates, including sex, age and education. We conclude with a discussion of how reliability assessments may affect substantive research using panel data, including cross-country comparisons, and whether calculating and publishing reliabilities may be a desirable feature of a panel data quality profiling exercise.
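The quasi-simplex idea can be sketched briefly. Under Heise's (1969) assumptions (a lag-1 autoregressive latent true score and reliability constant across waves), the reliability is identified from the three inter-wave correlations as r12·r23/r13. A minimal Python illustration on simulated data (the simulation parameters are invented for the example, not taken from HILDA or the BHPS):

```python
import numpy as np

def heise_reliability(y1, y2, y3):
    """Heise (1969) quasi-simplex reliability from three panel waves.

    Assumes a lag-1 autoregressive latent true score and constant
    reliability across waves; then reliability = r12 * r23 / r13.
    """
    r12 = np.corrcoef(y1, y2)[0, 1]
    r23 = np.corrcoef(y2, y3)[0, 1]
    r13 = np.corrcoef(y1, y3)[0, 1]
    return r12 * r23 / r13

# Simulated check: latent true scores follow an AR(1) process with unit
# variance; observations add white noise with variance 0.25, so the true
# reliability is 1 / (1 + 0.25) = 0.8.
rng = np.random.default_rng(0)
n, beta, noise_sd = 200_000, 0.7, 0.5
t1 = rng.standard_normal(n)
t2 = beta * t1 + np.sqrt(1 - beta**2) * rng.standard_normal(n)
t3 = beta * t2 + np.sqrt(1 - beta**2) * rng.standard_normal(n)
y1, y2, y3 = (t + noise_sd * rng.standard_normal(n) for t in (t1, t2, t3))
rel = heise_reliability(y1, y2, y3)
```

With three waves the formula is exactly identified; the HILDA/BHPS analysis fits the full measurement model rather than this closed-form shortcut.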


2. Do branched rating scales have better test-retest reliability than unbranched scales? Experimental evidence from a three-wave panel survey

Miss Emily Gilbert (ISER, University of Essex)

The use of 'branched' formats for rating scales is becoming more widespread because of a belief that this format yields data that are more valid and reliable. Using this approach, the respondent is first asked about the direction of his or her attitude/belief and then, in a second question, about the intensity of that attitude/belief (Krosnick and Berent, 1993). The rationale for this procedure is that cognitive burden is reduced, leading to a higher probability of respondent engagement and superior quality data. Although this approach has recently been adopted by some major studies, notably the ANES, the empirical evidence for the presumed advantages in terms of data quality is actually quite meagre. Given that using branching may involve trading off increased interview administration time for enhanced data quality, it is important that the gains are worthwhile. This paper uses data from an experiment embedded across three waves of a national face-to-face probability-based panel survey in the UK (the Innovation Panel from the 'Understanding Society' Survey). Each respondent was interviewed once per year between 2009 and 2011. We capitalise on this repeated measures design to fit a series of models which compare test-retest reliability, and a range of other indices, for branched and unbranched question forms, using both single items and multi-item scales. We present the results of our empirical investigation and offer some conclusions about the pros and cons of branching.
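To make the branched format concrete, the two answers can be recombined into a single rating. The coding below is a hypothetical illustration (not the survey's actual scheme), mapping a direction question plus a 3-point intensity follow-up onto one 7-point scale:

```python
def combine_branched(direction, intensity=None):
    """Map a two-step branched item onto a single 7-point rating.

    Hypothetical coding for illustration: `direction` is -1 (negative),
    0 (neutral) or +1 (positive) from the first question; `intensity`
    is 1-3 from the follow-up and is skipped for neutral answers.
    Returns 1 (most negative) .. 7 (most positive), with 4 = neutral.
    """
    if direction == 0:
        return 4  # neutral respondents get no intensity follow-up
    if direction not in (-1, 1) or intensity not in (1, 2, 3):
        raise ValueError("direction must be -1/0/+1, intensity 1-3")
    return 4 + direction * intensity
```

Test-retest reliability is then compared between this derived scale and a conventional one-shot 7-point item asked of the other experimental group.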


3. Hedonic Price Models with Omitted Variables and Measurement Errors: A Constrained Autoregression - Structural Equation Modeling Approach with Application to Urban Indonesia

Mr Yusep Suparman (Universitas Padjadjaran, Indonesia)
Professor Henk Folmer (Groningen University, the Netherlands)
Dr Johan Oud (Radboud University Nijmegen, the Netherlands)

Omitted variables and measurement errors in explanatory variables frequently occur in hedonic price models. Ignoring these problems leads to biased estimators. In this paper we develop a constrained autoregression - structural equation model (ASEM) to handle both types of problems. Standard panel data models routinely handle omitted-variable bias when the omitted variables are time-invariant. ASEM allows handling of both time-varying and time-invariant omitted variables by constrained autoregression. In the case of measurement error, standard approaches require additional external information which is usually difficult to obtain. ASEM exploits the fact that panel data are repeatedly measured, which allows decomposing the variance of a variable into the true variance and the variance due to measurement error. We apply ASEM to estimate a hedonic housing model for urban Indonesia. For comparison, we also present estimation results from a standard SEM (i.e. without accounting for omitted variables), from constrained autoregression without accounting for measurement error, and from a standard hedonic model which ignores both measurement error and omitted variables.
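The variance decomposition that repeated measurement makes possible can be sketched in a few lines. The sketch below assumes a stable latent variable with independent measurement errors (a deliberate simplification of the paper's time-varying ASEM): off-diagonal covariances across repeats then estimate the true-score variance, and the remainder of the observed variance is attributed to measurement error.

```python
import numpy as np

def decompose_variance(y):
    """Split observed variance into true-score and error components.

    y: (n, T) array of T repeated measurements of the same stable
    latent variable with independent errors. The average off-diagonal
    covariance estimates the true-score variance; the rest of the
    average observed variance is measurement-error variance.
    """
    C = np.cov(y, rowvar=False)
    T = C.shape[0]
    true_var = (C.sum() - np.trace(C)) / (T * (T - 1))
    error_var = np.trace(C) / T - true_var
    return true_var, error_var

# Simulated check: latent variance 1, error variance 0.25.
rng = np.random.default_rng(1)
latent = rng.standard_normal((100_000, 1))
y = latent + 0.5 * rng.standard_normal((100_000, 3))
tv, ev = decompose_variance(y)
```

ASEM embeds this idea in a full structural model so that the latent variable may itself evolve over time; the two-line decomposition only conveys where the identification comes from.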




4. Measuring real change or something else? Mechanisms and consequences of panel conditioning in a short-term campaign panel

Mr Michael Bergmann (University of Mannheim)

While longitudinal surveys provide substantial information about intra-individual change, they also present several methodological difficulties. Among these, the question "[...] whether repeated interviews are likely, in themselves, to influence a respondent's opinions" (Lazarsfeld 1940: 128) is still not thoroughly understood. Findings from previous research suffer to varying degrees from methodological and theoretical shortcomings and are frequently ambiguous in detail. It is therefore essential to examine systematically the mechanisms underlying the varying evidence of conditioning effects. In this respect, the German Longitudinal Election Study offers a unique database, as it contains a seven-wave campaign panel with parallel cross-sections.
The two most salient difficulties associated with panel conditioning in longitudinal surveys are separating conditioning effects from attrition effects and from changes in the population. I employ a procedure that adjusts the panel waves by using the respective cross-section as a reference, ensuring that treatment (panel respondents) and control group (cross-section respondents) have similar distributions of relevant characteristics. The cross-sections thus serve as a baseline for exploring whether responses given by a person who has already taken part in the panel study differ from the responses that would have been given without previous participation. I pay particular attention to the mechanisms of attitude formation and change underlying conditioning effects as one component of measurement error in panel surveys. The results show that repeated interviews have substantial consequences for respondents' attitude accessibility as well as for their voting intention.
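The adjustment step can be illustrated with a simple cell-based reweighting: panel respondents are weighted so that their distribution over adjustment cells matches the reference cross-section. This is a sketch of the general idea only; the paper's actual procedure may use different cells or a different matching method.

```python
from collections import Counter

def poststrat_weights(panel, reference):
    """Weights that align the panel's distribution over adjustment
    cells with a reference cross-section (illustrative sketch).

    panel, reference: sequences of cell labels, one per respondent.
    Each panel respondent in cell c gets weight
    (share of c in reference) / (share of c in panel).
    """
    n_p, n_r = len(panel), len(reference)
    p, r = Counter(panel), Counter(reference)
    return [(r[cell] / n_r) / (p[cell] / n_p) for cell in panel]

# Toy example: the panel over-represents "young" relative to the
# cross-section; the weights restore the 50/50 reference split.
panel = ["young"] * 8 + ["old"] * 2
cross = ["young"] * 5 + ["old"] * 5
w = poststrat_weights(panel, cross)
```

After weighting, comparisons between panel and cross-section responses can be read as conditioning effects rather than composition differences, up to the adequacy of the adjustment cells.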