
ESRA 2019 Programme at a Glance


How to Deal with "Don't Know" and Other Non-Response Codes in Online and Mixed-Mode Surveys 2

Session Organiser: Mr Tim Hanson (Kantar Public)
Time: Wednesday 17th July, 16:30–17:30
Room: D25

Researchers have long debated the treatment of “Don’t know” codes in surveys. Some have argued that a “Don’t know” response is often an expression of satisficing and that explicitly including this code may not improve data quality (e.g. Krosnick, 2000, 2002). Others have argued that an explicit “Don’t know” option is necessary because respondents sometimes genuinely do not know the information asked of them; without a clear “Don’t know” option there is a risk of collecting “non-attitudes” (Converse, 1976).

The treatment of “Don’t know” codes has increased in importance as surveys move online, often as part of mixed-mode designs. In an interviewer-administered context the interviewer can code a “Don’t know” response where it is offered spontaneously. This approach cannot be replicated in a self-completion setting (including online surveys), meaning alternative approaches are required, which can affect the way people respond. Developing best practice across modes for presenting “Don’t know” and other response options that have traditionally been coded only when offered spontaneously by respondents is therefore an increasingly pressing need within survey research.

A range of approaches has been used to deal with “Don’t know” and other non-response codes in online surveys. These include: (1) displaying these codes as part of the main response list so they are always available to respondents; (2) hiding these codes from the initial response list, with instructions for respondents on how to select them (e.g. via a hidden ‘second screen’ that can be generated should none of the initial responses fit); and (3) removing these codes altogether from some or all survey questions. All three approaches have potential flaws in terms of comparability with other modes, as well as risks of satisficing behaviour, reporting of non-attitudes and lower data quality. There is currently no clear consensus within the survey research community on the best approach to take.

We welcome papers that have used different approaches for dealing with “Don’t know” and other non-response codes in online and mixed-mode surveys. Papers that include quantitative experiments or user testing to compare different treatments of these codes are particularly encouraged.

Keywords: Item non-response, Don't know codes, Online, Mixed-mode

Do We Know What to Do With “Don’t Know”?

Ms Alice McGee (Kantar Public) - Presenting Author
Mr Tim Hanson (Kantar Public)
Mr Luke Taylor (Kantar Public)


This paper presents results from an experiment conducted on the UK’s Understanding Society Innovation Panel (IP11) that compared different treatment of ‘Don’t know’ (DK) response codes within a self-completion (CAWI and CASI) questionnaire.

Much evidence exists on the treatment of DK response options in questionnaires, including arguments over whether they should be offered explicitly. Comparability between modes is a key concern: as surveys increasingly move to mixed-mode designs, how best to deal with DK and other ‘spontaneous’ codes becomes a bigger issue. In interviewer-administered surveys, DK options are typically not read out, but an interviewer can code a spontaneous DK response. This approach cannot be replicated in self-completion, meaning alternative approaches are required, which is likely to affect how people respond.

The current approach for self-completion questions on Understanding Society is to ‘hide’ DK codes and only make them available if respondents try to move on without selecting an answer. Usability testing has uncovered issues with this approach, with respondents often unaware of how to select a DK response and feeling forced to select an alternative. This raises questions about whether the current approach risks producing inaccurate data.

Our experiment compared the current approach with two alternatives:

• As above but with a specific prompt at each question on how to view additional options
• Including DK codes as part of the main response lists (so they are always visible)

Our analysis compares responses across the three formats, including:

• Levels of DK response
• Overall response distributions
• Question completion times
• Responses to contextual follow-up questions
• Differences by mode

The results will help inform survey practitioners about the effects of different treatments of DK codes on how people respond to questions in online, CASI and mixed-mode surveys. This in turn can help shape future approaches.


Implementation of ‘Do Not Know’ and ‘Prefer Not to Answer’ in Mobile Surveys

Dr James Thom (Ipsos MORI) - Presenting Author
Miss Lucy Lindley (Ipsos MORI)
Dr Patten Smith (Ipsos MORI)
Ms Sam Clemens (Ipsos MORI)

A key consideration in designing an online survey is whether and how to include explicit non-response options, such as “Do not know” and “Prefer not to answer”. The presentation of non-response options demands particular attention in surveys designed for completion on mobile devices, because screen space on these devices is limited. In this paper, we consider three approaches to implementing item non-response options in mobile surveys: (1) offering no explicit non-response options; (2) offering explicit non-response options ‘up-front’; and (3) offering explicit ‘reactive’ non-response options, which are only made available after the respondent attempts to proceed without answering the question. In each approach, respondents see a polite probe requesting that they answer when they first attempt to skip a given question, but can afterwards proceed without answering it. We describe an experiment carried out with UK members of the Ipsos Interactive Services panel, comparing the effects of these approaches on non-response rates, data quality and respondent experience in mobile surveys.


The Presentation of Don't Know Answer Options in Web Surveys: An Experiment with the NatCen Panel

Mr Bernard Steen (NatCen Social Research)
Mr Curtis Jessop (NatCen Social Research)
Ms Marta Mezzanzanica (NatCen Social Research) - Presenting Author
Ms Ruxandra Comanaru (NatCen Social Research)

While it is possible to collect ‘spontaneous’ answers of ‘Don’t Know’ (DK) in interviewer-administered surveys, this is not easily done in self-completion questionnaires. An important decision in web questionnaire design is therefore whether and how to include a DK option. Including a DK option may increase the amount of ‘missing’ data and lead to satisficing, but omitting it may harm the data, as respondents who genuinely do not know are forced to give a false answer or may exit the survey entirely.

Several approaches have been suggested for web surveys. These include (1) offering a DK option up-front with visual separation from substantive answer options, (2) offering a DK option up-front but probing for more information after a DK answer, and (3) only showing a DK option if a respondent tries to skip a question without giving an answer.

This study tests these approaches by collecting experimental data using the NatCen Panel, a probability-based sequential mixed-mode panel. Additionally, the study tests the effect of explaining the functionality of approach (3) to respondents more explicitly up-front.

The study examines the effects of these approaches on the number of DK answers alongside measures of data quality. Further insight into the cognitive processes respondents went through is gained from follow-up closed and open probes at the end of the survey, which explore why they answered the questions as they did.