Thursday 18th July 2013, 16:00 - 17:30, Room: No. 14

Latent variable modeling of survey (measurement) errors and multiple groups

Convenor: Professor Joop Hox (Utrecht University)
Coordinator 1: Professor Bengt Muthén (University of California, Los Angeles)

Session Details

Modern survey designs generally use complex sampling featuring cluster sampling, stratification, and a diversity of weighting adjustments. These can be handled using design-based inference, or using a model-based approach that includes such features explicitly in the analysis model. In addition to these complexities, comparative and longitudinal surveys need to establish measurement equivalence across groups or over time. Finally, the trend towards multimode data collection adds a relatively new source of survey error.
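
To make the design-based route concrete, a standard textbook illustration (not specific to any paper in this session) is the Horvitz-Thompson estimator, which weights each sampled unit by the inverse of its known inclusion probability \pi_i:

    \hat{Y}_{\mathrm{HT}} = \sum_{i \in s} \frac{y_i}{\pi_i},
    \qquad
    \hat{\bar{Y}} = \frac{\sum_{i \in s} y_i / \pi_i}{\sum_{i \in s} 1 / \pi_i}.

Stratification and clustering enter through the inclusion probabilities and through the corresponding variance estimator; the model-based alternative instead represents strata and clusters as explicit terms (e.g. random effects) in the analysis model.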

This session focuses on latent variable models for measurement errors in surveys that also incorporate other survey error components, with the goal of estimating population parameters of interest adjusted for a number of survey error components. Presentations can be on new models, new correction methods, new estimation techniques, or applications of such methods to existing survey data. An interesting aspect is the application of such models to data sets that are problematic from an estimation point of view. One example is the analysis of measurement invariance with a large number of groups, such as countries. A second example is multilevel analysis with a small number of groups or countries. Bayesian estimation methods may be attractive for such problems, and this session welcomes presentations that explore their possibilities.
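
As a sketch of the kind of model involved (generic notation, not drawn from any of the papers below), a multiple-group factor model for continuous item j of person i in group g can be written as

    y_{ijg} = \tau_{jg} + \lambda_{jg}\,\eta_{ig} + \varepsilon_{ijg},
    \qquad
    \varepsilon_{ijg} \sim N(0, \theta_{jg}),

where full measurement invariance corresponds to the constraints \tau_{jg} = \tau_j and \lambda_{jg} = \lambda_j for all groups g; only under (at least partial) versions of these constraints are comparisons of the latent means and variances of \eta across groups meaningful.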


Paper Details

1. Categorical Multiple-Group CFA as a Diagnostic Tool for Mode Effects on Random and Systematic Measurement Error

Mr Thomas Klausch (Utrecht University)
Professor Joop Hox (Utrecht University)
Dr Barry Schouten (Statistics Netherlands / Utrecht University)

Analyses of mode effects often compare the marginal distributions of variables across different modes of survey administration. Multiple-group confirmatory factor analysis (MCFA) offers insights beyond such analyses. It is well known that MCFA models can be used to assess item-specific scale bias (e.g. in loadings, intercepts, or thresholds) and differential random measurement error. It is less well known that MCFA can also be used to study mode differences in systematic error. Systematic error is a source of heterogeneity at the level of individuals that is the same for all items. Knowledge about these three types of mode effects is useful in practice, as demonstrated by an empirical application.
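
In generic notation (our illustration, not necessarily the authors' exact specification), the three error types can be separated by decomposing the response of person i to item j under mode m as

    y_{ijm} = \tau_{jm} + \lambda_{jm}\,\eta_{i} + s_{im} + \varepsilon_{ijm},

where mode differences in the thresholds/intercepts \tau_{jm} and loadings \lambda_{jm} represent item-specific scale bias, s_{im} is an individual-level systematic error component common to all items (e.g. a response-style factor) whose variance may differ by mode, and \mathrm{Var}(\varepsilon_{ijm}) captures mode-specific random measurement error.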
MCFA models were estimated on three scales surveyed during a large-scale mode experiment in the Netherlands (N = 4,052 respondents assigned to face-to-face, telephone, mail, or web). Since the scales were ordinal, categorical MCFA models were used. Furthermore, we explain that in experimental comparisons of modes it is necessary to control for selection bias using adjustment techniques such as propensity score weighting.
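
As a minimal sketch of the propensity-score-weighting step (our illustration; the helper function, column names, and the use of scikit-learn are assumptions, not details from the paper), one could model the probability of belonging to one mode group from observed covariates and weight respondents by its inverse:

    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    def inverse_propensity_weights(df: pd.DataFrame, covariates: list,
                                   group_col: str = "mode",
                                   target: str = "web") -> pd.Series:
        """Hypothetical helper: one row per respondent in `df`;
        `group_col` marks the mode group compared to the rest."""
        X = df[covariates].to_numpy()
        y = (df[group_col] == target).astype(int).to_numpy()
        model = LogisticRegression(max_iter=1000).fit(X, y)
        p = model.predict_proba(X)[:, 1]  # estimated propensity scores
        # Weight the target group by 1/p and the comparison group by
        # 1/(1 - p), then rescale so the weights sum to the sample size.
        w = np.where(y == 1, 1.0 / p, 1.0 / (1.0 - p))
        return pd.Series(w * len(w) / w.sum(), index=df.index, name="ipw")

The resulting weights could then be supplied to the MCFA estimation as survey weights, balancing the mode groups on observed covariates before measurement parameters are compared.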
Our models showed that the interviewer-administered and self-administered surveys exhibited different amounts of random and systematic error, in addition to a small scale bias in the thresholds of most items. This suggests that mode effects were primarily caused by individual-level heterogeneity and not by the content of the respective items. Reasons for this heterogeneity include answering behaviors such as socially desirable responding. Self-administration also appeared more efficient, exhibiting smaller random error than interviewer administration.



2. Evaluating partial measurement invariance by examining its consequences for conclusions of interest

Dr Daniel Oberski (Tilburg University)

Invariance or "measurement equivalence" testing is often seen as a prerequisite for the comparison of groups, and performing tests of the equality of measurement parameters across groups is now standard practice in a wide range of research areas. The reasoning behind such tests is that substantive conclusions of interest might be affected if measurement parameters are not equal across groups.

We propose instead to directly examine the change in the substantive parameters of interest that would occur if misspecified invariance restrictions were freed. The o_a statistic is suggested for this purpose. Its accuracy as an approximation to the change in substantive parameters of interest when freeing an equality restriction turns out to be adequate. We apply the proposed procedure to a complex published study in which invariance testing was performed. This leads to different conclusions than those originally reached by the authors, demonstrating the usefulness of the approach.
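
As background on how such a statistic can be constructed (generic notation; this is the standard score-based expected-parameter-change logic, not necessarily the exact derivation used here), a first-order Taylor expansion around the restricted estimates \hat{\theta}_0 approximates the change in the full parameter vector when an equality restriction is freed,

    \Delta\hat{\theta} \approx -\,H(\hat{\theta}_0)^{-1}\, g(\hat{\theta}_0),

where g and H are the gradient and Hessian (or expected information) of the fit function with respect to the freed parameterization; the induced change in a substantive parameter of interest \kappa(\theta) then follows from the chain rule,

    \Delta\hat{\kappa} \approx \frac{\partial \kappa}{\partial \theta^{\top}}\, \Delta\hat{\theta}.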