ESRA 2013 Sessions

Is it Worth Mixing Modes?
Organizer: Dr Teresio Poggio


Is it worth mixing modes? New evidence on costs and survey error in mixed-mode surveys (Session 1)
Organizer: Dr Ana Villar


Is it worth mixing modes? New evidence on costs and survey error in mixed-mode surveys (Session 2)
Organizer: Dr Ana Villar
Survey designers face a continuous tension between minimizing survey error and keeping costs as low as possible (Groves, 1989). One strategy pursued to reduce costs is the use of mixed modes of data collection: deploying cheaper modes early in the process and reserving more effective, but more expensive, modes to increase response rates and coverage.

A considerable body of research has tried to assess the impact of mixed modes of data collection on data quality, in terms of response error and measurement error. This research typically compares response distributions and response rates across modes, but fails to report the effect of the mode design on actual, comprehensive costs or on timeliness.

However, mixed-mode survey implementations may not be as efficient as first thought. Even with current technological tools, the costs of producing equivalent questionnaires, contact forms, data protocols, and other fieldwork documents across modes may be an underestimated burden. Furthermore, findings about the effects on response rates and measurement are far from conclusive, and the field needs new evidence linking total survey error and survey costs.

In this session we invite studies that address the challenges of, and lessons learned from, the implementation of mixed-mode designs, with an emphasis on the link between survey error and costs. Papers submitted to this session will ideally include evidence of the effect of mixed modes on:
- costs, time, and other resources;
- coverage error;
- response rates and/or response bias;
- measurement error.


Is it worth mixing modes? New evidence on costs and survey error in mixed-mode surveys (Session 3)
Organizer: Dr Ana Villar


Mixed Mode or Mixed Device? Surveying in a new technological era
Organizer: Dr Mario Callegaro
Due to growing Internet coverage and increased emphasis on survey costs, web surveys have become an important part of the survey landscape. In recent years several handbooks have been published on designing effective web surveys. However, survey designers now face a new technological challenge. Modern society has become more interactive, and the younger generation in particular is now accustomed to being online at will, whether through a laptop, smartphone, or tablet. Web surveys are morphing from a computer-oriented concept into a multi-device one.
How should we design quality surveys for this new situation? In the past, attention has been paid to the optimal design of questionnaires for mixed-mode surveys, and we may learn from that work. But the situation here is new. We no longer have mixed modes in the traditional sense, where two discrete modes are combined (e.g., a self-administered visual mode vs. a telephone aural mode). Instead, we have one overall data collection principle: a self-administered survey meant to be completed on the web, a tablet, or a smartphone. This means that traditional question formats, such as grids or long rating scales, are no longer appropriate, as they add device-specific measurement error. Furthermore, customs associated with the use of different devices (e.g., quick exchange of information through a tweet on a mobile device vs. more detailed information through Facebook or e-mail) may influence questionnaire length, break-offs, and nonresponse.
This session invites presentations that investigate how different devices may be combined and how they influence different sources of survey error. We particularly invite presentations that discuss how different survey errors can be reduced by optimal questionnaire design. Randomized experiments, and quasi-experiments in which differences across devices due to self-selection are taken into account in the statistical analysis, are welcome.


Mixed Mode Surveys - Reports from the Field Work (Session 1)
Organizer: Mr Patrick Schmich


Mixed Mode Surveys - Reports from the Field Work (Session 2)
Organizer: Mr Patrick Schmich


Mode Effects in Mixed-Mode Surveys: Prevention, Diagnostics, and Adjustment (Session 1)
Organizer: Professor Edith de Leeuw
Mixed-mode surveys have become a necessity in many fields. Growing nonresponse in all survey modes forces researchers to use a combination of methods to reach an acceptable response rate. Coverage issues in both Internet and telephone surveys make it necessary to adopt a mixed-mode approach. Furthermore, in international and cross-cultural surveys, differential coverage patterns and survey traditions across countries make a mixed-mode design inevitable.

From a total survey error perspective a mixed-mode design is attractive, as it offers reduced coverage error and nonresponse error at affordable cost. However, measurement error may increase when more than one mode is used. This could be caused by mode-inherent effects (e.g., the absence or presence of interviewers) or by question format effects, as different questionnaires are often used for different modes.

In the literature, two kinds of approaches can be distinguished: those aimed at reducing mode effects in the design of the study, and those aimed at adjusting for mode effects in the analysis phase. Both approaches are important and should complement each other. The aim of this session is to bring researchers from both approaches together to exchange ideas and results.

This session invites presentations that investigate how different sources of survey error interact and combine in mixed-mode surveys. We particularly invite presentations that discuss how different survey errors can be reduced (prevented) or adjusted for (corrected). We encourage empirical studies based on mixed-mode experiments or pilots, and especially papers that attempt to generalize results into overall recommendations and methods for mixed-mode surveys.



Note: Depending on the number of high-quality paper proposals, we could organize one or more sessions.
Note 2: We have four organizers, which does not fit the form. The fourth is Joop Hox, Utrecht University, j.hox@uu.nl.


Mode Effects in Mixed-Mode Surveys: Prevention, Diagnostics, and Adjustment (Session 2)
Organizer: Professor Edith de Leeuw


Mode Effects in Mixed-Mode Surveys: Prevention, Diagnostics, and Adjustment (Session 3)
Organizer: Professor Edith de Leeuw


Mode Effects in Mixed-Mode Surveys: Prevention, Diagnostics, and Adjustment (Session 4)
Organizer: Professor Edith de Leeuw


Natural Experiments in Survey Research
Organizer: Dr Henning Best
Experiments are generally regarded as the royal road to causal inference. Yet social science research often cannot use designs based on randomized laboratory experiments, partly because of the very nature of social inquiry, which is generally concerned with society at large. Consequently, critics point to the (allegedly) low external validity of lab experiments in the social sciences. Natural experiments can reduce these problems: because they are set in a real societal context, external validity can be enhanced. They do, however, face serious problems of their own: endogeneity, insufficient standardization of treatment and control conditions, and self-selection into study and control groups. Advances in data analysis have tackled these problems, and methods such as IV regression, conditional fixed-effects models, and propensity score matching help to identify unbiased treatment effects.

In this session we are particularly interested in papers on identification of treatment effects in natural experiments, research combining surveys with natural-experimental designs, papers that employ multiple methods of treatment estimation, and innovative ways to design or analyze natural experiments in cross-sectional and especially panel surveys.


Social Desirability Bias in Sensitive Surveys: Theoretical Explanations and Data Collection Methods (Session 1)
Organizer: Dr Ivar Krumpal
Survey measures of sensitive characteristics (e.g. sexual behaviour, health indicators, illicit work, voting preferences, income, or antisocial opinions) based on respondents' self-reports are often distorted by social desirability bias. More specifically, surveys tend to overestimate socially desirable behaviours or opinions and underestimate socially undesirable ones, because respondents adjust their answers to perceived public norms. Furthermore, nonresponse harms data quality, especially when the missing data are systematically related to key variables of the survey. Beyond psychological factors (such as a respondent's inclination towards impression management or self-deception), cumulative empirical evidence indicates that the choice of data collection strategy influences the extent of social desirability bias in sensitive surveys. Better data quality can be achieved by choosing appropriate data collection methodologies.

This session has three main goals: (1) to discuss the theoretical foundation of research on social desirability bias in the context of a general theory of human psychology and social behaviour. For example, a clearer understanding of the social interactions between the actors involved in the data collection process (respondents, interviewers, and data collection institutions) could give empirical researchers a substantiated basis for optimizing survey design to achieve high-quality data; (2) to present experimental results evaluating conventional methods of data collection for sensitive surveys (e.g. randomized response techniques and their variants) as well as innovative new survey designs (e.g. mixed-mode surveys, item sum techniques), including advancements in methods for the statistical analysis of data generated by these techniques; (3) to discuss future perspectives for tackling the problem of social desirability and to present possible alternative approaches for collecting sensitive data, for example record linkage, surveys without questions (e.g. biomarkers), and non-reactive measurement.

Social Desirability Bias in Sensitive Surveys: Theoretical Explanations and Data Collection Methods (Session 2)
Organizer: Dr Ivar Krumpal

Social Desirability Bias in Sensitive Surveys: Theoretical Explanations and Data Collection Methods (Session 3)
Organizer: Dr Ivar Krumpal

Social Desirability Bias in Sensitive Surveys: Theoretical Explanations and Data Collection Methods (Session 4)
Organizer: Dr Ivar Krumpal