ESRA 2019 Draft Programme at a Glance
Methods for Measurement Error Adjustment 

Session Organiser: Dr Stéphane Legleye (INSEE; Inserm)
Time: Friday 19th July, 13:00–14:00
Room: D21
This session includes papers that showcase different methods for adjusting for measurement error.
Keywords: survey error, correction methods, mixed mode data
Aggregating mixed-mode survey data: a practical approach to neutralizing measurement bias
Dr Stéphane Legleye (INSEE; Inserm) – Presenting Author
Mr Gaël de Peretti (INSEE)
Mr Tiaray Razafindranovona (INSEE)
While the data collection mode effect is generally perceived as a nuisance, it can be beneficial and even desired, and can help dictate the choice of a mixed-mode protocol. Situations must be assessed on a case-by-case basis, according to the purpose of the surveys, their place in observation systems, their public and research uses, and finally the existence of series of measurements over time. After defining the selection effect and the measurement effect, and recalling the main techniques used to separate them, I will present some recently proposed approaches that try to contain or even neutralize the measurement effect.
Finally, I will present a pragmatic and parsimonious approach developed at the French National Institute for Statistics and Economic Studies (INSEE), aiming to reduce measurement bias as much as possible when necessary. The method can be applied to all kinds of mixed-mode designs with a reference and an alternative data collection mode (defined by their measurement quality). It is based on the estimation of the measurement effect by classical means, followed by the imputation of only a subset of the observations of the alternative mode. The subset is defined in two steps: first, a matching technique (or balanced random sampling) selects a subset of the observations (the imputation support); second, a random selection is made of the observations whose values are farthest from their counterfactual value estimated in the imputation support (the proportion being chosen to achieve equality of the outcomes in the imputed imputation support). The imputation (either deterministic or stochastic, by means of multiple imputations) is then performed. Efficiency, limits and comparison with calibration are discussed.
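The two-step subset imputation described above can be sketched as follows. This is a minimal illustration only, not INSEE's implementation: it simplifies the random selection step to a deterministic greedy rule, and the function and variable names (`neutralize_mode_effect`, `y_alt`, `y_hat_cf`, `target_mean`) are hypothetical. Starting from counterfactual values already estimated in the imputation support, it imputes the alternative-mode observations farthest from their counterfactual until the mean of the support matches the reference mode:

```python
import numpy as np

def neutralize_mode_effect(y_alt, y_hat_cf, target_mean, tol=1e-3):
    """Sketch of the subset imputation step (hypothetical names).

    y_alt       : observed outcomes in the alternative mode (imputation support)
    y_hat_cf    : counterfactual values estimated for those observations
    target_mean : reference-mode mean to be matched after imputation
    """
    # Rank observations by distance to their counterfactual value, farthest first.
    dist = np.abs(y_alt - y_hat_cf)
    order = np.argsort(dist)[::-1]

    y_imp = np.asarray(y_alt, dtype=float).copy()
    # Deterministically impute the farthest observations one by one until the
    # mean of the imputation support matches the reference-mode mean.
    for i in order:
        if abs(y_imp.mean() - target_mean) < tol:
            break
        y_imp[i] = y_hat_cf[i]
    return y_imp
```

Only the observations most affected by the mode effect are replaced, which is what keeps the approach parsimonious compared with imputing the whole alternative-mode sample.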
Identifying and controlling for systematic measurement errors using LMS: Findings from a simulation study
Mr Christoph Giehl (TU Kaiserslautern) – Presenting Author
Systematic measurement errors such as question order effects bias not only the response behaviour towards a specific item, but also measures of latent attitudes and of data quality itself. If, for example, an assimilation effect of question order leads to enhanced covariation between subsequent items of an attitude scale, the latent mean of that attitude, the factor loadings of the items, measures of model fit, and reliability measures will all be systematically skewed by the resulting error correlations.
In previous studies, determinants such as the mode of information processing and attitude accessibility (Mayerl/Giehl 2018), as well as the general attitude towards surveys (Stocké 2004), were identified as controls for systematic measurement errors. These variables were then used within a latent moderated structural equations (LMS) model to control for interaction effects between subsequent error terms within a confirmatory factor analysis. Factor loadings, latent means and measures of data quality were thereby adjusted, leading to less biased measures.
In this presentation we introduce a structural equation model using multiple interactions at the error-term level (multiple interactions on error terms structural equation model, MIETSEM), based on data from an experiment conducted in 2017 with students of the Technical University of Kaiserslautern, Germany. In addition, we present findings from a simulation study showing the benefits and limitations of MIETSEM, in order to provide a tool for identifying and controlling for systematic measurement errors.
Mayerl, J.; Giehl, C. (2018): A Closer Look at Attitude Scales with Positive and Negative Items. Response Latency Perspectives on Measurement Quality. Survey Research Methods 12 (3), 9999–10016.
Stocké, V. (2004): Entstehungsbedingungen von Antwortverzerrungen durch soziale Erwünschtheit. Ein Vergleich der Prognosen der Rational-Choice-Theorie und des Modells der Frame-Selektion. Zeitschrift für Soziologie 33, 303–320.
Measurement error adjustment of repeatedly measured dichotomous biomarkers using a Bayesian pattern-mixture model with a non-ignorable missing data mechanism
Dr Thomas Klausch (Department of Epidemiology and Biostatistics, Amsterdam University Medical Centers) – Presenting Author
The problem of adjusting measurement error in dichotomous biomarkers is considered where benchmark measurements are only available for a subset of respondents. This data collection design emerges when two biomarker tests are available and the more accurate test is expensive while the cheaper test is less accurate. Benchmark measurements are therefore taken from a subsample only, whereas the focal test is administered to all respondents. This problem applies to further settings in survey methodology, for example when adjusting measurement error using partially available register data or adjusting measurement effects in mixed-mode surveys using re-interviews. We assume that the missing data mechanism depends on the benchmark measurements and thus is not ignorable (MNAR), a more plausible assumption than MAR. A pattern-mixture model for dichotomous outcomes is suggested which exploits MNAR restrictions on the conditional distribution of the focal measurements given the benchmarks. An ML estimator and a Bayesian multiple imputation (MI) estimator are derived.

In simulations, the sensitivity and specificity of the focal test were varied from 0.6 to 0.9 and sample sizes of n = 250 to n = 1000 were considered. When estimating the prevalence of the benchmark outcome in the unobserved part of the data, both estimators had small bias, but the MI estimator clearly outperformed ML in terms of RMSE (max. RMSE for n = 250: 0.04 for MI; 0.19 for ML) and confidence interval coverage of the true proportion. The methodology is demonstrated using human papillomavirus (HPV) tests in oropharyngeal cancer patients, collected at five European medical centers (n = 372). An inexpensive test (p16 staining) has higher error than an exact DNA test with missing data (the benchmark). Before error adjustment the HPV-DNA prevalence MI estimate was 0.608 (95% CI: 0.547, 0.667); afterwards it was 0.561 (0.469, 0.654). The wider CI reflects the increased uncertainty due to the error adjustment.
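A toy sketch of the multiple-imputation step may help fix ideas. It imputes the missing dichotomous benchmark from P(benchmark | focal test) estimated in the validated subsample, then pools the prevalence estimates with Rubin's rules. For brevity it uses a MAR simplification rather than the MNAR pattern-mixture restrictions of the paper, and all names (`mi_prevalence`, `focal`, `bench`) are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(42)

def mi_prevalence(focal, bench, n_imp=50):
    """Sketch: MI of a missing dichotomous benchmark (hypothetical names).

    focal : 0/1 focal-test results for all respondents
    bench : 0/1 benchmark results, np.nan where the benchmark was not taken
    """
    obs = ~np.isnan(bench)
    est, var = [], []
    for _ in range(n_imp):
        b = bench.copy()
        for f in (0, 1):
            sel = obs & (focal == f)
            # Beta(1, 1) prior + observed counts -> posterior draw of
            # P(bench = 1 | focal = f) in the validated subsample.
            p = rng.beta(1 + bench[sel].sum(), 1 + (1 - bench[sel]).sum())
            miss = ~obs & (focal == f)
            b[miss] = rng.binomial(1, p, miss.sum())
        prev = b.mean()
        est.append(prev)
        var.append(prev * (1 - prev) / len(b))
    # Rubin's rules: pooled point estimate and total variance
    q = np.mean(est)
    t = np.mean(var) + (1 + 1 / n_imp) * np.var(est, ddof=1)
    return q, np.sqrt(t)
```

As in the abstract, the between-imputation variance term widens the interval, so the pooled CI correctly reflects the extra uncertainty introduced by the error adjustment.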