ESRA 2017 Programme
Thursday 20th July, 11:00 - 12:30 Room: Q4 ANF2
Advancements in Adjusting for Measurement Error in Statistical Models
|Chair||Dr Malcolm Fairbrother (University of Bristol)|
|Coordinator 1||Dr Diana Zavala-Rojas (Universitat Pompeu Fabra)|
Session Details
Social and political surveys measure social attitudes, political opinions, preferences and behaviours. Yet the measurement of such phenomena is never completely precise: it is inevitably subject to measurement error. Statistical models in published studies nevertheless often ignore such error, at the risk of producing substantially biased (often attenuated) results.
Despite the development and refinement of techniques capable of adjusting for measurement error, these techniques are generally ignored in applied work, in part because many applied researchers do not know about them, or even about the consequences of measurement error generally.
The aim of this session is twofold: we invite both methodological papers dealing with techniques for adjusting for measurement error and papers that present interesting applications of such methods.
From a methodological perspective, we are particularly keen to include papers that will, in some way, encourage applied researchers to adjust for measurement error--such as by presenting, validating, or demonstrating techniques that are accessible to non-specialists. Papers may touch on topics such as:
· correction for measurement error in hierarchical models;
· measurement error in categorical data;
· procedures to estimate measurement error;
· adjustment for measurement error in generalized linear models;
· correction for measurement error using latent class models and structural equation modelling.
From an applied perspective we are interested in presentations from diverse areas of the social and political sciences that use survey data and adjust for measurement error in their analyses.
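The attenuation referred to above can be seen in a minimal simulation of classical measurement error in a single predictor. All numbers here are illustrative and not drawn from any paper in the session; the correction simply divides the naive slope by the predictor's reliability:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# True predictor and outcome: y = 0.5 * x_true + noise
x_true = rng.normal(0, 1, n)
y = 0.5 * x_true + rng.normal(0, 1, n)

# Observed predictor is contaminated with measurement error
error_sd = 1.0
x_obs = x_true + rng.normal(0, error_sd, n)

# Reliability: share of observed variance that is true-score variance
reliability = 1.0 / (1.0 + error_sd**2)      # = 0.5 here

naive_slope = np.polyfit(x_obs, y, 1)[0]     # attenuated towards zero
corrected_slope = naive_slope / reliability  # classical disattenuation

print(f"naive: {naive_slope:.3f}, corrected: {corrected_slope:.3f}")
```

With reliability 0.5, the naive slope estimate is roughly half the true value of 0.5, and dividing by the reliability recovers it.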
Paper Details
1. The multi-trait multi-error approach to estimating measurement error
Dr Alexandru Cernat (University of Manchester)
Dr Daniel Oberski (Tilburg University)
Measurement error is a pervasive issue in surveys. One of the most common approaches used to measure and correct for systematic errors in this context is the Multi-Trait Multi-Method (MTMM) approach, which makes it possible to separate method effects, random error and the "true" score using an experimental design that combines multiple traits (i.e. questions) with multiple methods (i.e. answer scales). As with other statistical approaches that tackle measurement error, the results of this model are biased if any other type of systematic error (such as social desirability) is present. In this paper we present an extension of the MTMM model, which we name the Multi-Trait Multi-Error (MTME) model, that manipulates multiple characteristics of the question format using a within-subjects factorial design. This makes it possible to estimate simultaneously: social desirability, acquiescence, method effects, random error and the "true" score. We will illustrate how to implement the design and show initial results using measures of attitudes towards immigration in the 7th wave of the Understanding Society Innovation Sample.
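The intuition behind the MTMM decomposition can be sketched with a toy simulation: items that share a trait correlate through the trait variance, while items that share only a method correlate through the method variance. The variances below are invented for illustration; a real MTMM analysis estimates them by fitting a structural equation model to the experimental data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Latent components of a 2-trait x 2-method MTMM design (toy variances)
trait = rng.normal(0, 1.0, (n, 2))    # 2 traits (questions)
method = rng.normal(0, 0.5, (n, 2))   # 2 methods (answer scales)
noise_sd = 0.7

# Observed item y[t, m] = trait_t + method_m + random error
y = {(t, m): trait[:, t] + method[:, m] + rng.normal(0, noise_sd, n)
     for t in range(2) for m in range(2)}

# Same trait, different methods: correlation reflects trait variance only
r_trait = np.corrcoef(y[0, 0], y[0, 1])[0, 1]
# Different traits, same method: correlation reflects shared method variance
r_method = np.corrcoef(y[0, 0], y[1, 0])[0, 1]

var_total = 1.0 + 0.5**2 + noise_sd**2
print(f"trait corr: {r_trait:.2f} (expected {1.0 / var_total:.2f})")
print(f"method corr: {r_method:.2f} (expected {0.5**2 / var_total:.2f})")
```

Because each kind of item pair isolates one variance component, the design identifies trait, method and error variance separately.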
2. Mass Public Decisions to Promote Democracy: the Role of Foreign Policy Dispositions
Ms Lala Muradova (Universitat Pompeu Fabra; KU Leuven)
A large body of scholarship contends that citizens use coherent belief systems to decide on specific foreign policy issues. While there is abundant evidence of the consistent influence of such belief systems on citizens' decisions about war, few studies have examined whether citizens' dispositions also anchor other foreign policy attitudes. Using purpose-designed survey data on 611 voting-age American citizens, this study examines the role of three belief-system dimensions (militarism vs. accommodation, internationalism vs. isolationism, and the person's ideological identification) in accounting for mass support for economic sanctions. First, we examine the effect of predispositions on citizens' willingness to impose economic sanctions on an autocracy without correcting for measurement errors. However, cognizant that the variables we examine can differ substantially from those we intend to measure with survey data, and that this could bias our conclusions (Saris & Revilla, 2015), we then correct for measurement errors. We do so following Saris & Gallhofer (2014) and DeCastellarnau & Saris (2014), using quality predictions from SQP 2.1 and Alwin (2007). In sharp contrast to our preliminary findings, after correction for measurement errors we observe a strong effect of dispositional variables on citizens' willingness to impose economic sanctions. For example, the findings show that citizens with militarist views are less supportive of economic sanctions than non-militarists, which runs counter to the positive relation between militarism and support for war reported in prior studies. And whereas the preliminary findings showed no association between political ideology and willingness to sanction, after correction we find that the liberalism-conservatism identification scale strongly predicts support: the more conservative a person is, the more willing she is to impose sanctions.
We further explore the interactive effects of dispositional variables on willingness to impose sanctions with a series of ANOVAs.
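The kind of correction this abstract describes, using quality estimates such as those produced by SQP, amounts to disattenuating observed associations. A minimal sketch with invented quality values (not actual SQP predictions), where quality is the proportion of observed variance that is true-score variance:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100_000

# Two latent variables correlated at 0.5, each observed with quality q
q1, q2 = 0.7, 0.6
latent = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], n)
obs1 = np.sqrt(q1) * latent[:, 0] + np.sqrt(1 - q1) * rng.normal(0, 1, n)
obs2 = np.sqrt(q2) * latent[:, 1] + np.sqrt(1 - q2) * rng.normal(0, 1, n)

r_obs = np.corrcoef(obs1, obs2)[0, 1]
r_corrected = r_obs / np.sqrt(q1 * q2)  # disattenuation by quality
print(f"observed r: {r_obs:.2f}, corrected r: {r_corrected:.2f}")
```

The observed correlation is shrunk by the square root of the product of the two quality coefficients, so dividing by that factor recovers the latent correlation.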
3. Latent class analysis to detect social desirability answering patterns: an application to the 7th round of the European Social Survey
Dr Caroline Vandenplas (KU Leuven)
Dr Alexandru Cernat (University of Manchester)
In 2014, Mneimneh et al. proposed using mixed Rasch models to detect social desirability answering patterns. Mixed Rasch models combine IRT models with latent classes that differentiate answering patterns. Their results show that a model with two classes fits the data best, and validation against the survey mode, the presence of a third party and some social conformity items indicates that the classes capture social desirability. Our aim is to apply their technique to detect social desirability answering patterns in Round 7 of the European Social Survey. First, we repeat their approach on three constructs separately, using a confirmatory factor analysis: the effect of immigration on the country, allowing people to come into the country, and social connections. We then extend the analysis by (1) using a confirmatory approach that introduces constraints between the latent classes, (2) combining different constructs in one model, and (3) comparing results in Belgium and Great Britain. In contrast with Mneimneh et al. (2014), the confirmatory models with two latent classes do not have the best fit; the fit improves when more classes are allowed. However, based on their results and on our conviction that respondents tend either to give or not to give socially desirable answers, we retain the model with two classes, inferring that the additional classes fit better because they absorb other sources of measurement error. Validation against the presence of a third person, the respondent's reluctance to give answers and personality traits is not systematically in line with our expectations. Yet the confirmatory factor model with constraints leads to the expected relations with the validating variables in both countries. The results show that constraining the model, or considering more than one construct, ensures that the latent classes detect social desirability rather than other answering patterns.
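As a simplified illustration of the latent class idea (the paper itself uses mixed Rasch models, which additionally include a continuous latent trait), a two-class latent class model for binary items can be fitted with a short EM algorithm. The data here are simulated, with one class tending to pick the socially desirable response on every item:

```python
import numpy as np

rng = np.random.default_rng(2)
n, J = 5000, 6

# Simulate two answering patterns: a "candid" class (~70%) and a class
# (~30%) that tends to choose the socially desirable response throughout
true_class = rng.random(n) < 0.3
p_candid, p_desirable = 0.3, 0.9
probs = np.where(true_class[:, None], p_desirable, p_candid)
X = (rng.random((n, J)) < probs).astype(float)

# EM for a 2-class latent class model with binary items
pi = np.array([0.5, 0.5])              # class sizes
p = np.array([[0.4] * J, [0.8] * J])   # item-response probabilities

for _ in range(200):
    # E-step: posterior class membership for each respondent
    logl = (X[:, None, :] * np.log(p)
            + (1 - X[:, None, :]) * np.log(1 - p)).sum(2)
    w = pi * np.exp(logl - logl.max(1, keepdims=True))
    w /= w.sum(1, keepdims=True)
    # M-step: update class sizes and item probabilities
    pi = w.mean(0)
    p = np.clip((w.T @ X) / w.sum(0)[:, None], 1e-6, 1 - 1e-6)  # guard log(0)

print("class sizes:", np.round(pi, 2))
print("item probabilities per class:", np.round(p.mean(1), 2))
```

EM recovers a small class with uniformly high endorsement rates, the kind of answering pattern that would then need validating against external variables, as the abstract describes.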
4. Correcting for Measurement Error in Multilevel Models
Dr Malcolm Fairbrother (University of Bristol)
Dr Diana Zavala-Rojas (Universitat Pompeu Fabra)
Comparative cross-national surveys are measurement instruments for social attitudes, public opinion, policy preferences, and political behaviours. Such surveys yield datasets that allow for valuable comparisons across societies, and for analyses with multilevel models nesting individuals within their countries of residence.
Such analyses have typically ignored the problem of measurement error in right-hand side variables, even though the measurement of social phenomena is known not to be completely precise. Rather, it has an intrinsic component of measurement error. For example, cultural specificities can shape respondents' reactions to elements of the measurement method--i.e., the combination of characteristics that define the formulation and administration of the request, including the response scale, the mode of data collection, the use of showcards or visual aids, the translation procedure, the selection and assignment of languages, the introduction, the additional explanations, among others.
When information about the measurement quality is available, it is possible to apply statistical techniques to correct for measurement error. In this paper we present a procedure for error correction in hierarchical models. We use measurement error as predicted by the Survey Quality Predictor software, incorporating it as prior information for the variance distribution of the parameters of interest. In this paper, we describe the statistical model; validate its robustness using simulated data; and present two analyses replicating previously published articles, but including correction for measurement error. We estimate the models in a Bayesian framework, using JAGS run from within R.
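As a flavour of this kind of Bayesian error correction (the authors use JAGS from within R; what follows is a deliberately simplified single-level Python sketch, not their model, which assumes the error variance is known, for instance from SQP, and fixes the prior and residual variances at one):

```python
import numpy as np

rng = np.random.default_rng(4)
n, beta_true, err_var = 2000, 0.8, 0.6

# Simulated data: x observed with known error variance err_var
x_true = rng.normal(0, 1, n)
y = beta_true * x_true + rng.normal(0, 1, n)
x_obs = x_true + rng.normal(0, np.sqrt(err_var), n)

# Gibbs sampler over the latent true scores x and the slope beta
x = x_obs.copy()
beta, draws = 0.0, []
for it in range(600):
    # beta | x, y: flat prior -> normal around the OLS estimate given x
    bhat = x @ y / (x @ x)
    beta = rng.normal(bhat, np.sqrt(1.0 / (x @ x)))
    # x_i | beta: N(0,1) prior, likelihoods from x_obs (known err_var) and y
    prec = 1.0 + 1.0 / err_var + beta**2
    mean = (x_obs / err_var + beta * y) / prec
    x = rng.normal(mean, np.sqrt(1.0 / prec))
    if it >= 100:  # discard burn-in
        draws.append(beta)

print(f"posterior mean slope: {np.mean(draws):.2f} (true {beta_true})")
```

A naive regression of y on x_obs would give roughly 0.5 here; treating the true scores as latent and conditioning on the known error variance centres the posterior near the true slope of 0.8.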