All time references are in CEST
Mixing Modes in Longitudinal Surveys 2

Session Organisers: Professor Mark Trappmann (Institute for Employment Research, University of Bamberg); Dr Mary Beth Ofstedal (Institute for Social Research, University of Michigan)
Time: Tuesday 18 July, 14:00–15:30
Room: U6-28
The Covid-19 pandemic has forced many panel and cohort surveys to replace personal interviews with telephone or self-administered modes, thereby accelerating a trend towards mixing modes in longitudinal surveys. While introducing new data collection modes helped prevent attrition or even the loss of entire survey waves during the pandemic, it also created new challenges for longitudinal surveys related to mode effects on survey measurement.
Many of the challenges presented by mode effects, and the methodological tools for investigating and adjusting for them, differ between longitudinal and cross-sectional surveys. On the one hand, the potential for harm in longitudinal surveys is substantial: even small mode effects can dramatically distort estimates of change if the trait under investigation is relatively stable over time. On the other hand, longitudinal surveys allow researchers to exploit within-subject variation and thus to apply more stringent methods for separating (self-)selection into mode from mode effects on measurement.
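To make the first point concrete, here is a minimal simulation sketch in Python (all parameter values are hypothetical, chosen only for illustration): a constant measurement shift introduced by a new mode in the second wave feeds directly into the estimated change, which is small precisely because the trait is stable.

```python
import numpy as np

rng = np.random.default_rng(42)

n = 10_000
true_change = 0.02   # genuine between-wave change (small: the trait is stable)
mode_effect = 0.10   # constant measurement shift induced by the new mode

trait_w1 = rng.normal(0.0, 1.0, n)       # wave 1, old mode
trait_w2 = trait_w1 + true_change        # wave 2, true values
switched = rng.random(n) < 0.5           # half the panel moves to the new mode
measured_w2 = trait_w2 + mode_effect * switched

est_change = measured_w2.mean() - trait_w1.mean()
print(f"true change:      {true_change:.3f}")
print(f"estimated change: {est_change:.3f}")  # about true_change + 0.5 * mode_effect
```

With these hypothetical values the estimated change is roughly 0.07, more than three times the true change of 0.02, even though the mode effect is only a tenth of a standard deviation.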
We invite submissions of research that investigates mode effects in a longitudinal setting. This may include mode experiments, analyses that separate selection effects from measurement effects, or approaches that separate mode effects from time trends, particularly from effects of the pandemic. We also invite contributions that address the impact of mode effects on longitudinal estimates and that offer solutions for communicating to users the importance of recognizing the potential for mode effects and how to deal with them in their research.
Keywords: data collection mode, mixed-mode, longitudinal surveys, mode effects
Mrs Nerdit Stein (ICBS - Israel Central Bureau of Statistics) - Presenting Author
The upheaval created by the COVID-19 pandemic impacted data collection in field surveys. Social distancing requirements directly affected field interviews throughout Wave 8 of the Longitudinal Survey, and data collection transitioned to the telephone.
We examined the change in interviewing method from the field to the telephone on two levels: 1) interviewing rates, and 2) the quality of the data collected.
1. Scope of interviewing
Examining the distribution of field versus telephone interviewing during Wave 8 by month showed that telephone interviews increased during the second lockdown in Israel (September–October 2020) to 61%–63% of all interviews, compared to an average of 40%–45% in other months. The effect of the third lockdown (January–February 2021) was less noticeable: telephone interviews accounted for only 40%. This is likely due to the mass vaccination of the general population in Israel, which began in January 2021.
2. Data quality
Data quality was checked by comparing questionnaire completion status (partially versus completely filled out) during the COVID-19 period (Wave 8) with the waves before and after it. There was no significant difference in completion rates.
Furthermore, we examined responses of the type 'unknown', 'refuses to answer', or an empty value. These serve as indicators that sampled persons are dropping out: if respondents were disengaging, we would expect a high rate of values of this type. However, we found no evidence of this.
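A check of this kind can be sketched as follows (a minimal illustration; the file name, column names, and value labels are hypothetical and not the ICBS processing pipeline):

```python
import pandas as pd

df = pd.read_csv("longitudinal_survey.csv")                 # hypothetical file
item_cols = [c for c in df.columns if c.startswith("q")]    # hypothetical items

# Flag 'unknown', 'refuses to answer', and empty values per item.
flagged = df[item_cols].isin(["unknown", "refuses to answer"]) | df[item_cols].isna()
df["item_nonresponse_rate"] = flagged.mean(axis=1)

# Disengagement in the telephone wave would show up as a markedly
# higher rate in Wave 8 than in the waves before and after it.
print(df.groupby("wave")["item_nonresponse_rate"].mean())
```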
In conclusion, the COVID-19 pandemic forced us to switch to alternative interviewing methods. Since then, we have discovered the advantages of telephone interviewing for both interviewers and respondents, and it has remained an important and convenient tool.
Dr Susanne Kohaut (IAB)
Dr Iris Möller (IAB) - Presenting Author
The IAB Establishment Panel is the most comprehensive establishment survey in Germany, with 15,000 firms participating every year. Until 2018, the interviews were conducted face-to-face with paper and pencil (PAPI), with the option of self-completion by leaving the paper questionnaire behind. In 2018, a computer-assisted instrument (CAWI/CAPI) was introduced in an experiment. In 2019, a substantial part of the refreshment sample was switched to CAWI. However, we had never used a mixed-mode design with a computer-assisted instrument for the panel firms.
In 2020, during the first lockdown due to the pandemic, we did not dare to plan face-to-face interviews. The refreshment and panel samples were switched to self-administered or telephone interviews without face-to-face contact. To avoid dramatic losses in response, we developed a comprehensive plan to contact the firms using a concurrent approach that offered multiple mode choices at the same time. The panel firms were informed of the changes in advance and contacted by letter with a link to the web questionnaire and a paper questionnaire for self-completion. All non-respondents were contacted by an interviewer by telephone after some time. A similar procedure was applied to the refreshment sample.
Nonetheless, the response rates of the refreshment and the panel sample dropped considerably in comparison to pre-pandemic years. In this contribution we analyse the development of response rates over recent years and try to disentangle the different reasons for the low response rates. We are especially interested in the consequences for the panel sample. In a first step, we distinguish between non-contacts and refusals. We also try to find out whether firms that used the web questionnaire in the previous year reacted differently from firms that were so far only used to face-to-face interviews.
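The first analysis step might be sketched as follows (a minimal illustration; the file name, column names, and outcome codes are hypothetical):

```python
import pandas as pd

cases = pd.read_csv("establishment_panel_outcomes.csv")  # hypothetical file

# Share of interviews, refusals, and non-contacts by year and sample type
# ('outcome' assumed coded as "interview", "refusal", or "noncontact").
rates = (
    cases.groupby(["year", "sample_type"])["outcome"]
         .value_counts(normalize=True)
         .unstack(fill_value=0.0)
)
print(rates)
```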
Dr Benjamin Domingue (Stanford Graduate School of Education)
Mr Ryan McCammon (University of Michigan)
Dr Brady West (University of Michigan)
Dr Kenneth Langa (University of Michigan)
Dr David Weir (University of Michigan)
Dr Jessica Faul (University of Michigan) - Presenting Author
As populations age, there is growing interest in assessing health conditions associated with age and longevity, such as age-related decline in cognitive functioning. As a result, there is an increased focus on measuring cognitive functioning in surveys of older populations. A move towards survey measurement via the web (as opposed to phone or in-person) is cost-effective but challenging, as it may induce bias in cognitive measures. Compounding this, the mode of survey administration is often not assigned randomly, making inter-group comparisons more difficult. We examine these issues using a novel experiment embedded within the Health and Retirement Study (HRS). The HRS, a US-based cohort of people over 50, has measured cognition since its inception using both in-person and telephone modes. First, we deploy techniques from item response theory (IRT) and differential item functioning (DIF) to estimate the difference in cognitive functioning between web and phone respondents in 2018, based on longitudinal cognition data collected prior to 2018. Second, we estimate the overall effect of taking the survey via the web as compared to the phone. Third, we examine item-level variation in the magnitude of the mode effect and suggest possible methods of adjustment to support longitudinal consistency. We find evidence of an increase in scores for HRS respondents who were randomly assigned to the web-based mode of data collection in 2018. Web-based respondents score higher in 2018 than phone-based respondents do, show much larger gains relative to their 2016 performance, and subsequently show larger declines in 2020. The bias in favor of web-based responding is observed across all cognitive item types but is most pronounced for the serial 7s and items on financial literacy. Implications both for the use of HRS data and for future survey work on cognition are discussed.
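As one illustration of item-level mode-effect screening, the following sketch uses a logistic-regression DIF test, a simpler stand-in for the authors' full IRT/DIF pipeline; the file and item names are hypothetical, and 'mode' is assumed coded 0 = phone, 1 = web.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Logistic-regression DIF screening for binary cognitive items.
df = pd.read_csv("hrs_2018_items.csv")                        # hypothetical file
items = ["serial7_step1", "serial7_step2", "word_recall_1"]   # hypothetical items
df["total"] = df[items].sum(axis=1)  # conditioning score

for item in items:
    # A significant 'mode' term suggests uniform DIF; a significant
    # 'total:mode' interaction suggests non-uniform DIF.
    fit = smf.logit(f"{item} ~ total + mode + total:mode", data=df).fit(disp=0)
    print(item, fit.pvalues[["mode", "total:mode"]].round(4).to_dict())
```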
Professor Heather Kitada Smalley (Willamette University) - Presenting Author
Professor Sarah Emerson (Oregon State University)
In this era of public opinion research, where mixed-mode studies dominate the survey landscape, questions about the presence of mode effects have led to the development of methodology for mode adjustments. These proposed adjustments typically make parametric assumptions about the model for the mode effect, namely design-based additive/linear versus odds-multiplicative/logistic functional forms. Our previous research has shown that the choice of functional form is not trivial and may result in erroneous inference when using adjusted estimates, depending on the magnitude of the underlying trend or change in the reference response mode. The goal of this research is therefore to explore and develop methodology for hypothesis testing that aids survey researchers in choosing modeling techniques for mode-effect adjustments based on the data. Previously proposed goodness-of-fit tests have been shown to be poorly calibrated under complex sampling schemes or under violations of the assumption of independent, identically distributed data. In our proposed goodness-of-fit tests, we address the construction of model residuals, the creation of the test statistic, and the approximation of the reference distribution (empirical/bootstrap versus theoretical). We compare candidate models (linear versus logistic) for mode-effect adjustment in longitudinal studies via two approaches, a head-to-head comparison and multiple separate comparisons, to address overall model fit. In the latter case, we address the robustness of the procedure and provide insight into further steps that can be taken when each hypothesis is rejected.
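A parametric-bootstrap goodness-of-fit test of this kind might be sketched as follows (an illustration under simplified assumptions, not the authors' procedure; it uses the deviance as the test statistic and ignores complex-sampling features such as weights and clustering).

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

def bootstrap_gof(y, X, family, n_boot=200):
    """Parametric-bootstrap p-value for the model's deviance."""
    fit = sm.GLM(y, X, family=family).fit()
    probs = np.clip(fit.fittedvalues, 0.0, 1.0)  # linear fits can leave [0, 1]
    boot = np.empty(n_boot)
    for b in range(n_boot):
        y_sim = rng.binomial(1, probs)  # simulate outcomes under the fitted model
        boot[b] = sm.GLM(y_sim, X, family=family).fit().deviance
    return (boot >= fit.deviance).mean()

# Hypothetical data: binary outcome with a mode indicator and a time trend.
n = 2000
mode = rng.integers(0, 2, n)
time = rng.integers(0, 4, n)
X = sm.add_constant(np.column_stack([mode, time]))
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(-0.5 + 0.4 * mode + 0.1 * time))))

# Head-to-head check of the two candidate functional forms.
for fam in (sm.families.Binomial(), sm.families.Gaussian()):
    print(type(fam).__name__, bootstrap_gof(y, X, fam))
```

A small bootstrap p-value for one functional form but not the other would indicate which candidate model is inconsistent with the observed data.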