Tuesday 16th July 2013, 14:00 - 15:30, Room: No. 20

Do pretesting methods identify 'real' problems and help us develop 'better' questions? 2

Convenor: Ms Jo d'Ardenne (NatCen Social Research)
Coordinator: Sally Widdop (City University, London)

Session Details

It is common practice for new or significantly modified survey questions to be subject to some form of pretesting before being fielded on the main survey. Pretesting includes a range of qualitative and quantitative methods, such as focus groups, cognitive interviewing, respondent debriefing, use of the Survey Quality Predictor program, piloting, behaviour coding and split-ballot experiments (see, for example, Oksenberg et al., 1991; Forsyth & Lessler, 1991). On large-scale surveys pretesting may involve several iterations, possibly using a number of different pretesting methods. Esposito and Rothgeb (1997) proposed 'an idealized quality assessment program' involving the collection of data from interviewers, respondents, survey sponsors and in-field interactions between interviewers and respondents to assess the performance of survey questions. However, there is relatively little systematic evidence on whether pretesting methods actually detect 'real' problems and, if they do, whether implementing the changes they suggest helps us to produce 'better' questions and more accurate survey estimates (for some examples see Presser & Blair, 1994; Willis et al., 1999; Rothgeb et al., 2001).

We invite papers which present findings from studies that seek to demonstrate:
• whether different pretesting methods used to test the same set of questions come up with similar or different findings and the reasons for this;
• whether the same pretesting method used to test the same set of questions comes up with the same or different findings and the reasons for this;
• whether findings from different pretesting methods are replicated in the survey itself;
• the difference that pretesting makes to the validity and reliability of survey estimates or to other data quality indicators, e.g. item non-response.


Paper Details

1. Horses for courses: Why different question testing methods uncover different findings and implications for selecting methods

Ms Joanna d'Ardenne (NatCen Social Research)

In recent decades there has been an increase in the number of Question Testing (QT) methodologies used to assess the quality of survey questions. Various QT methods are now routinely adopted by survey agencies, including stakeholder focus groups, cognitive interviews, field tests, experiments and the validation of data against external sources. Whilst some scholars have written about the different QT methods available (e.g. Madans et al., 2011), little advice exists on how effective each QT method is in different circumstances, what types of problem different QT methods find (or fail to find), or how best to combine the different methods.

This paper presents findings from a review of projects carried out by NatCen Social Research that used multiple QT methods to test the same questions. The aim of the review was to ascertain what types of problem are identified using each QT method, whether different QT methods ever uncover conflicting findings and, if so, why this occurs. Results indicate that different QT methods can produce different, and sometimes contradictory, findings. The context of the testing may create results that are not replicated in other settings. Implications for how to select and combine different QT methods will be discussed in light of these findings.

Madans, J., Miller, K., Maitland, A., & Willis, G. (Eds.) (2011). Question Evaluation Methods. Hoboken, NJ: John Wiley & Sons.



2. From Concept to Question: Using Early-Stage Scoping Interviews to Develop Effective Survey Questions to Measure Innovation in Businesses

Mr Alfred Tuttle (United States Census Bureau)

What is innovation? Definitions vary, but a precise specification of innovation is necessary to create standardized survey questions that are consistently understood as intended. This was the goal of research conducted by the US Census Bureau and the National Science Foundation in support of the Organisation for Economic Co-operation and Development's plan to produce internationally comparable statistics on private-sector innovation. To this end, we conducted early-stage scoping (ESS) interviews with business respondents to explore their perspectives on innovation, the findings of which will be used to inform the development of survey questions. This process is particularly important with concepts such as "innovation" that are abstract to respondents and have varied meanings. Frequently, the wording of establishment survey questions is taken directly from theoretical concepts, without input from the respondents for whom the questions are intended. Consequently, in the pretesting phase, researchers may have to backtrack and rethink the design of questions to make them sensible to respondents. ESS interviews bridge the divide between survey concepts and respondents' perspectives by exploring respondents' native frames of reference. Understanding the disconnects between respondents and survey concepts allows researchers to identify critical points of ambiguity that must be sufficiently "unpacked" and broken down into components that can be reliably and unambiguously communicated. The ESS method also allows researchers to identify language familiar to respondents that correctly communicates the survey concepts. This presentation will discuss the goals of and procedures for conducting ESS interviews for business surveys, using examples from the innovation survey project.



3. Web questionnaires in official population surveys: Do's and don'ts. First experiments and impacts on the ESSnet Project on Data Collection for Social Surveys using Multiple Modes (DCSS)

Ms Karen Blanke (Federal Statistical Office Germany (Destatis))

Many social surveys face pressure to introduce web-based data collection as an additional mode, owing to expectations of cost savings, improved data quality and higher response rates. Consequently, the European Statistical System may increasingly use multiple data collection modes, with CAWI as a new, additional mode. To prepare for the methodological challenges, Eurostat has launched a project (ESSnet) on "Data Collection for Social Surveys using Multiple Modes" (DCSS). The project started in autumn 2012 and covers two major topics: (a) the design of CAWI instruments and (b) multi-mode data collection design. Five partners are involved; Destatis is acting as project co-ordinator.
Within the project, Destatis is conducting pretests on the design of an adequate CAWI instrument for complex, official population surveys. As a first approach, a rather small household survey collecting annual panel data will be tested during spring 2013. To begin with, simple IT-related functionalities will be tested (different browsers, screens), before two waves of qualitative pretesting, either in the laboratory or at respondents' homes, are scheduled. Three testing methods (observation, cognitive interviewing and eye-tracking) will be applied. Besides general usability, a special focus is testing navigation (navigation tree vs. forward and backward buttons), error checks (type/amount), the style of error messages, and the kind and placement of instructions.
The presentation will give an overview of the findings of the pretest and their impact on the ESSnet (DCSS). Particular emphasis will be placed on the lessons learned from respondents.



4. Do pretesting methods identify 'real' problems and help us develop 'better' questions?

Miss Sara Grant-Vest (Ipsos MORI)
Miss Ruth Lightfoot (Ipsos MORI)
Dr Hayk Gyuzalyan (Ipsos MORI)

There is a wealth of literature on the role of pre-testing in questionnaire design. However, what is often overlooked is the importance of pre-testing the overall fieldwork approach, particularly in cross-national research. Piloting is one of the last stages in the design process; its purpose should be to finalise the questionnaire design and to ensure that contact procedures work, that respondents are comfortable answering the questions in the 'real' survey environment, and that interviewers feel confident and safe conducting fieldwork.

This paper makes a strong case for focusing on the wider interview experience during piloting. It examines the approach taken to pre-testing for the Women's Well-being and Safety in Europe Survey (a 28-country study). It will focus not only on the contribution of pilot interviews and interviewer debriefs to improving questionnaire design, but also on the role piloting can play in highlighting challenges interviewers may face during fieldwork, and how a thorough understanding of these issues enhances the training given to interviewers. These challenges can include introducing the survey, securing participation from randomly selected respondents, and dealing with interruptions.

The paper will argue that best practice in piloting should aim to refine all aspects of the survey approach, particularly in cross-cultural surveys where the challenges vary across countries. This helps ensure interviewers are trained to conduct the surveys consistently, whilst being flexible and sensitive to local cultural norms. It will argue that neglecting these wider aspects can lead to higher non-response and poor data quality.