
ESRA 2023 Glance Program


All time references are in CEST

Item Nonresponse and Unit Nonresponse in Panel Studies 3

Session Organisers: Dr Uta Landrock (LIfBi – Leibniz Institute for Educational Trajectories)
Dr Ariane Würbach (LIfBi – Leibniz Institute for Educational Trajectories)
Mr Michael Bergrab (LIfBi – Leibniz Institute for Educational Trajectories)
Time: Friday 21 July, 09:00 - 10:30
Room: U6-20

Panel studies face various challenges, starting with establishing a panel, ensuring panel stability, minimizing sample selectivity, and achieving high data quality. All these challenges are compromised by issues of nonresponse. Unit nonresponse may lead to small sample sizes (particularly if it occurs in the initial wave) as well as to panel attrition, for example if (recurrent) non-respondents are excluded from the sample for administrative reasons. Item nonresponse implies reduced data quality, since it decreases the statistical power of analyses based on the variables of concern when respondents with missing information are excluded. In extreme cases, it may require variables to be excluded from analyses altogether because of their high proportion of missingness. Both unit nonresponse and item nonresponse may introduce biases, either by increasing sample selectivity or by affecting the distribution of particular variables.
A societal crisis may exacerbate these challenges in various ways. In the case of the COVID-19 pandemic, nonresponse may increase for two reasons: the pandemic adds stress to the lives of target populations, and it limits the ability of panel studies to use interviewers for conducting personal interviews and for motivating respondents to participate.
We invite researchers to participate in this discussion, which may include, among many others, the following topics:
- Quantifying item nonresponse and unit nonresponse, including resulting selectivity.
- Measuring the development of item nonresponse and unit nonresponse over panel waves.
- Implications of item nonresponse and unit nonresponse on data quality.
- Strategies for reducing item nonresponse and unit nonresponse, e.g. by developing new question formats or response formats, introducing modified incentive schemes, offering different modes, or allowing mode or language switching.
- Problems related to such measures, e.g., comparability across panel waves.
- Handling item nonresponse and unit nonresponse, for example, by imputation of missing values or weighting.

Keywords: panel studies, item nonresponse, unit nonresponse

Papers

Variable Selection Methods for Binary Variables and Missing Data

Mr Michael Bergrab (LIfBi – Leibniz-Institut für Bildungsverläufe e.V.) - Presenting Author

Variable selection is a widely used method in various scientific fields. Considering the well-known trade-off between bias and variance, variable selection is a critical step: sometimes theory, where available, should guide the choice among p candidate variables; sometimes selecting candidates for the model via statistical approaches is more expedient. Most machine learning and statistical approaches distinguish between selection and estimation. In a Bayesian view, the selection problem becomes an inherent estimation problem.
To answer the question of how to deal with missing values, we use an extended Bayesian approach that imputes the missing values while selecting and estimating variables within a single routine. We focus on the German National Educational Panel Study (NEPS) to compare various techniques and to show pitfalls in application. Methods such as LASSO, ridge regression, and the elastic net cannot handle missing values directly, so we apply multiple imputation and compare these results with those of our Bayesian routine.
Preliminary findings indicate high comparability between the presented techniques, albeit with varying computational cost. In addition to theory-driven variable selection, the presented approaches demonstrate that assumptions must always be made that guide and influence the selection process.
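As a concrete reference point, the following minimal sketch illustrates the multiple-imputation-plus-LASSO baseline mentioned above, using scikit-learn on toy data. It is not the authors' actual NEPS routine; the data, number of imputations, and majority-vote selection rule are illustrative assumptions.

```python
# Minimal sketch: multiple imputation followed by L1-penalised (LASSO-type)
# variable selection for a binary outcome. Illustrative only; not the
# authors' actual routine. Requires numpy and scikit-learn.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LogisticRegressionCV

rng = np.random.default_rng(0)

# Toy data: n observations, p candidate predictors, roughly 20% missing.
n, p = 500, 10
X = rng.normal(size=(n, p))
y = (X[:, 0] - 2 * X[:, 1] + rng.normal(size=n) > 0).astype(int)
X[rng.random((n, p)) < 0.2] = np.nan

selected = []
for m in range(5):  # five imputations, as in classical multiple imputation
    X_imp = IterativeImputer(sample_posterior=True, random_state=m).fit_transform(X)
    model = LogisticRegressionCV(penalty="l1", solver="saga", Cs=10, max_iter=5000)
    model.fit(X_imp, y)
    selected.append(np.abs(model.coef_[0]) > 1e-6)

# A variable counts as "selected" if it survives in most imputed datasets.
votes = np.mean(selected, axis=0)
print("selection frequency per variable:", np.round(votes, 2))
print("selected variables:", np.where(votes >= 0.5)[0])
```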


Effects of Interviewer-Participant-Assignment on Unit Nonresponse in the Light of the COVID-19 Pandemic

Ms Theresa Müller (infas Institut für angewandte Sozialwissenschaft GmbH (Institute for Applied Social Sciences)) - Presenting Author

Panel studies in the face-to-face field have faced various difficulties and challenges since 2020 due to the COVID-19 pandemic. In particular, the risk of panel attrition and unit nonresponse increased in the face of higher respondent burden and design adaptations forced by limited possibilities for interviewer assignment.

This paper analyzes changes in the effects of interviewer-participant assignment on panel attrition during the COVID-19 pandemic, using data from the National Educational Panel Study (NEPS), specifically its Starting Cohort Early Childhood (SC1).

NEPS is a longitudinal survey on educational processes and competence development in Germany with several sub-studies. The Starting Cohort Early Childhood targeted infants and their parents in the first wave in 2012. These families had been interviewed annually in their home environment until the pandemic struck in 2020. The annual interview with the parents was accompanied by competency tests with the children.

Due to the pandemic, the survey underwent several design adaptations in 2020 and 2021. The most significant change was dividing the interview into two parts: (1) the interview with the parents, which has since been administered via telephone, and (2) the competency tests with the children. In 2020 the competency tests were administered online, whereas in 2021 they were again administered as face-to-face interviews. In both waves, participants could refuse to take part in the second part, which led to a much higher rate of nonresponse in the competency tests.

This paper investigates whether the positive effects of interviewer-participant assignment found in previous waves were enhanced or diminished during the pandemic. Furthermore, it analyzes whether same-interviewer assignment in follow-up waves reduced unit nonresponse in the additional competency interview. Participation effects are analyzed with multivariate models controlling for characteristics of both participants and interviewers.
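As an illustration of the kind of multivariate participation model described above, the sketch below fits a logistic regression of competency-test participation on a same-interviewer indicator plus respondent and interviewer covariates. The data file and all variable names are hypothetical assumptions, not taken from the NEPS SC1 data.

```python
# Hedged sketch of a multivariate participation model: logistic regression
# of competency-test participation on a same-interviewer indicator and
# covariates. File and column names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("sc1_wave_data.csv")  # hypothetical analysis file

model = smf.logit(
    "participated_test ~ same_interviewer + wave_2021"
    " + parent_education + child_age + interviewer_experience",
    data=df,
).fit()
print(model.summary())
```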


Measuring the Development of Unit Nonresponse in a University Student Panel Survey

Dr Sarah Berndt (Otto-von-Guericke-Universität Magdeburg)
Dr Annika Felix (Otto-von-Guericke-Universität Magdeburg)
Professor Philipp Pohlenz (Otto-von-Guericke-Universität Magdeburg) - Presenting Author

The increasing number of student surveys at German universities is accompanied by declining response rates and so-called "survey fatigue", especially in the case of online-based surveys. The resulting loss of participants (unit nonresponse) becomes a source of error in survey results if the losses are not distributed randomly but systematically, resulting in a bias (nonresponse bias). In panel surveys, unit nonresponse can occur in each wave, so that data bias can result from initial wave nonresponse (i.e., nonresponse in the first wave of the survey) and panel mortality (i.e., nonresponse in subsequent waves). An analysis of the nonresponse mechanism should therefore be an integral part of any empirical research process.

This article addresses this issue by using the student panel of the University of Magdeburg (Germany) to investigate whether and which person- and survey-related factors cause unit nonresponse in student surveys, in order to derive possible courses of action from the findings. It thus addresses an existing research gap: paradata in the sense of survey-related factors (e.g., incentives, number of reminders) have so far been comparatively rarely included in analyses of willingness to participate, especially in university panel studies.

Two procedures are used to investigate systematic dropout. First, systematic dropout at the beginning of the panel survey (initial wave nonresponse) is examined by comparing socio-demographic characteristics of participants and non-participants, drawing on university statistical data. Thereafter, the focus is on attrition between the survey waves (panel attrition). The findings indicate biases in some sociodemographic (e.g., gender), study-related (e.g., average grade of university entrance qualification), personality-related (e.g., extraversion), and survey-related variables (e.g., number of reminders) of the student panel.
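The first procedure described above can be illustrated with a short sketch: a per-characteristic chi-square comparison of first-wave participants and non-participants within the sampling frame. The file and column names are hypothetical assumptions, not the study's actual data.

```python
# Hedged sketch: initial-wave nonresponse check via chi-square tests of
# independence between participation status and socio-demographic
# characteristics. Column names are hypothetical placeholders.
import pandas as pd
from scipy.stats import chi2_contingency

frame = pd.read_csv("student_frame.csv")  # hypothetical sampling frame

for var in ["gender", "field_of_study", "entrance_grade_band"]:
    table = pd.crosstab(frame[var], frame["participated_wave1"])
    chi2, p_value, dof, _ = chi2_contingency(table)
    print(f"{var}: chi2={chi2:.2f}, df={dof}, p={p_value:.3f}")
```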


Does the placement of a consent question affect participation and consent?

Dr Claudia Schmiedeberg (LMU Munich) - Presenting Author
Dr Christiane Bozoyan (LMU Munich)
Dr Jette Schröder (GESIS)
Professor Katrin Auspurg (LMU Munich)

In panel studies, data protection laws often require researchers to explicitly ask for respondents' panel consent during a recruitment survey in order to contact them again for future panel waves. If consent is not provided, respondents must be excluded from the sample in future waves. Panel consent is therefore crucial for panel stability.
There is more than one way to obtain panel consent. A common approach is first asking for consent to participate in the recruitment survey. Participating respondents are then informed at the end of the interview that the study includes one or more follow-up surveys and asked for consent to be re-contacted for this purpose. An alternative approach is informing respondents at the outset that the study consists of several waves and seeking their consent both to participation in the recruitment survey and to being approached for the second wave, so that no extra consent question is needed at the end of the interview.
But which of these approaches leads to higher participation and consent rates? On the one hand, respondents may fear future commitment and be reluctant to consent to being contacted for future surveys without knowing the content of the first one. On the other hand, the consent question at the end of the interview might make the decision on future participation more salient, and respondents may be reluctant to consent to another wave after a tiring interview.
We experimentally varied the placement of the panel consent question in the first wave of a self-administered (push-to-web) two-wave panel study based on a register-based random sample of adults in two German federal states: Half of the respondents were asked for panel consent at the end of the interview, and the other half at the beginning. We analyze participation, break-off, and consent.
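One possible analysis of such an experiment, sketched below, compares consent rates between the two placement conditions with a two-proportion z-test; the counts are invented placeholders, not results from this study.

```python
# Illustrative sketch: comparing panel-consent rates between the two
# placement conditions (consent at beginning vs. end of the interview).
# All counts are invented placeholders.
from statsmodels.stats.proportion import proportions_ztest

consented = [820, 790]   # hypothetical consenters: [begin, end] condition
invited = [1000, 1000]   # hypothetical respondents per condition

stat, p_value = proportions_ztest(count=consented, nobs=invited)
print(f"z = {stat:.2f}, p = {p_value:.3f}")
```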


Approval of the Cooperation Continuum - Longitudinal Response Behavior in the NEPS Starting Cohort Adults

Dr Ariane Würbach (Leibniz Institute for Educational Trajectories (LIfBi)) - Presenting Author

Data from longitudinal surveys are often and to a nonnegligible extent prone to nonresponse and hence missing information. This work focuses on intentional missing values, as opposed to missingness by design (e.g., filter instructions or rotating instruments). In longitudinal surveys, once a respondent has declared panel consent, different types of nonresponse might still occur because participation remains voluntary. The respondent might refuse to answer particular questions (item nonresponse, INR), refuse to participate in one or more survey cycles (wave nonresponse or unit nonresponse, UNR), and finally might drop out completely (final dropout, FD, or panel attrition). A well-documented phenomenon is that FD is typically preceded by UNR. Additionally, an accumulation of INR is often a precursor of UNR. The theory behind this sequence is straightforward: the severity of data loss becomes a strong indicator of the declining cooperativeness of the respondents. This cooperation continuum can also be found in the adult cohort of the German National Educational Panel Study (NEPS). To explore these response sequences, we examine transitions at an aggregate level as well as at an individual level. Preliminary results support the assumption of the cooperation continuum and emphasize the importance of keeping respondents engaged, i.e., maintaining their cooperation at all times. This includes efforts in contacting as well as diligence in questionnaire design.
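The aggregate-level transition analysis described above can be sketched as a row-normalised transition matrix between response states across adjacent waves. The long-format input and its column names below are hypothetical assumptions, not the actual NEPS data structure.

```python
# Minimal sketch: transition probabilities between response states
# (e.g., full response, INR-heavy response, UNR, final dropout) across
# adjacent panel waves. Input format and names are hypothetical.
import pandas as pd

long = pd.read_csv("neps_states_long.csv")  # columns: id, wave, state
long = long.sort_values(["id", "wave"])

# For each respondent, pair each wave's state with the next wave's state.
long["next_state"] = long.groupby("id")["state"].shift(-1)

# Row-normalised transition matrix: P(next state | current state).
transitions = pd.crosstab(long["state"], long["next_state"], normalize="index")
print(transitions.round(3))
```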