ESRA 2013 Sessions

Considerations in Choosing Between Different Pretesting Methods - Dr Timo Lenzner
Nowadays, survey researchers can choose from a wide range of pretesting methods. On the one hand, there are qualitative methods such as focus groups, vignettes or card sorts, expert reviews, and cognitive interviewing, which are often used at an early stage of scale development or testing (pre-field techniques). On the other hand, researchers can draw on quantitative methods (so-called field techniques) such as interviewer or respondent debriefing, behavior coding, response latency, split-ballot experiments, and statistical modelling. Each of these methods has its own strengths and weaknesses in identifying question problems. Therefore, questionnaire designers normally use a combination of different methods to design and pretest a (new) questionnaire. This session invites papers that:
(1) exemplify which pretesting method might best be used during which phase of the questionnaire development process;
(2) highlight the relative effectiveness of different pretesting methods in comparison to each other;
(3) demonstrate how different quantitative and qualitative pretesting methods might best be used in combination (best-practice examples).


Do pretesting methods identify 'real' problems and help us develop 'better' questions? 1 - Ms Jo d'Ardenne
It is common practice for new or significantly modified survey questions to be subject to some form of pretesting before being fielded on the main survey. Pretesting includes a range of qualitative and quantitative methods, such as focus groups, cognitive interviewing, respondent debriefing, use of the Survey Quality Predictor program, piloting, behaviour coding, and split-ballot experiments (for example, see Oksenberg et al., 1991; Forsyth & Lessler, 1991). On large-scale surveys, pretesting may involve several iterations, possibly using a number of different pretesting methods. Esposito and Rothgeb (1997) proposed 'an idealized quality assessment program' involving the collection of data from interviewers, respondents, survey sponsors, and in-field interactions between interviewers and respondents to assess the performance of survey questions. However, there is relatively little systematic evidence on whether pretesting methods actually detect 'real' problems and, if they do, whether implementing the changes they suggest helps us to produce 'better' questions and more accurate survey estimates (for some examples, see Presser & Blair, 1994; Willis et al., 1999; Rothgeb et al., 2001).

We invite papers which present findings from studies that seek to demonstrate:
• whether different pretesting methods used to test the same set of questions come up with similar or different findings and the reasons for this;
• whether the same pretesting method used to test the same set of questions comes up with the same or different findings and the reasons for this;
• whether findings from different pretesting methods are replicated in the survey itself;
• the difference that pretesting makes to the validity and reliability of survey estimates or to other data quality indicators, e.g., item non-response.


Do pretesting methods identify 'real' problems and help us develop 'better' questions? 2 - Ms Jo d'Ardenne
It is common practice for new or significantly modified survey questions to be subject to some form of pretesting before being fielded on the main survey. Pretesting includes a range of qualitative and quantitative methods, such as focus groups, cognitive interviewing, respondent debriefing, use of the Survey Quality Predictor program, piloting, behaviour coding, and split-ballot experiments (for example, see Oksenberg et al., 1991; Forsyth & Lessler, 1991). On large-scale surveys, pretesting may involve several iterations, possibly using a number of different pretesting methods. Esposito and Rothgeb (1997) proposed 'an idealized quality assessment program' involving the collection of data from interviewers, respondents, survey sponsors, and in-field interactions between interviewers and respondents to assess the performance of survey questions. However, there is relatively little systematic evidence on whether pretesting methods actually detect 'real' problems and, if they do, whether implementing the changes they suggest helps us to produce 'better' questions and more accurate survey estimates (for some examples, see Presser & Blair, 1994; Willis et al., 1999; Rothgeb et al., 2001).

We invite papers which present findings from studies that seek to demonstrate:
• whether different pretesting methods used to test the same set of questions come up with similar or different findings and the reasons for this;
• whether the same pretesting method used to test the same set of questions comes up with the same or different findings and the reasons for this;
• whether findings from different pretesting methods are replicated in the survey itself;
• the difference that pretesting makes to the validity and reliability of survey estimates or to other data quality indicators, e.g., item non-response.


Open-ended Questions: Methodological Aspects, Use and Analysis 1 - Mrs Cornelia Zuell
Open-ended questions in surveys serve to look into respondents' understanding of ideas, issues, and so on. The effort required to prepare, code, and analyse data from open-ended questions is, compared with closed questions, considerable. Thus, open-ended questions are not nearly as popular as closed questions. However, the growing number of web surveys may offer an opportunity to investigate various aspects of open-ended questions.
While much methodological research has been conducted on closed survey questions, open-ended questions have rarely been investigated from a methodological perspective. Recent research on open-ended questions examines, for example, mode effects or the length of answers as a quality indicator for responses. Other research covers reasons for (non-)response. The quality of answers to open-ended questions is one source of survey error that, if driven by factors other than randomness, will result in biased answers and put the validity of the data into question, a point often disregarded in substantive analyses.
The proposed session aims to help fill that gap. We welcome papers on open-ended questions referring to:
a. A comparison of software for textual data analysis,
b. The use of open-ended questions,
c. Analysis techniques,
d. Typology of open-ended questions,
e. Mode effects,
f. Design effects, e.g., question order or position in a questionnaire,
g. Effects of response or non-response,
h. Bias analyses, or
i. Any other topic that addresses quality or assesses the value of open-ended answers.
We also welcome papers that investigate other methodological aspects, e.g., a comparison of response behaviour to open-ended questions in general population surveys vs. in special sample surveys; a comparison of response behaviour to open-ended vs. closed questions for the same topic; or investigation of cross-cultural differences in response behaviour to open-ended questions.


Open-ended Questions: Methodological Aspects, Use and Analysis 2 - Mrs Cornelia Zuell
Open-ended questions in surveys serve to look into respondents' understanding of ideas, issues, and so on. The effort required to prepare, code, and analyse data from open-ended questions is, compared with closed questions, considerable. Thus, open-ended questions are not nearly as popular as closed questions. However, the growing number of web surveys may offer an opportunity to investigate various aspects of open-ended questions.
While much methodological research has been conducted on closed survey questions, open-ended questions have rarely been investigated from a methodological perspective. Recent research on open-ended questions examines, for example, mode effects or the length of answers as a quality indicator for responses. Other research covers reasons for (non-)response. The quality of answers to open-ended questions is one source of survey error that, if driven by factors other than randomness, will result in biased answers and put the validity of the data into question, a point often disregarded in substantive analyses.
The proposed session aims to help fill that gap. We welcome papers on open-ended questions referring to:
a. A comparison of software for textual data analysis,
b. The use of open-ended questions,
c. Analysis techniques,
d. Typology of open-ended questions,
e. Mode effects,
f. Design effects, e.g., question order or position in a questionnaire,
g. Effects of response or non-response,
h. Bias analyses, or
i. Any other topic that addresses quality or assesses the value of open-ended answers.
We also welcome papers that investigate other methodological aspects, e.g., a comparison of response behaviour to open-ended questions in general population surveys vs. in special sample surveys; a comparison of response behaviour to open-ended vs. closed questions for the same topic; or investigation of cross-cultural differences in response behaviour to open-ended questions.


Problems and perspectives of piloting and measuring attitudes in survey research - Dr Tilo Beckers
Eighty-five years after Thurstone's seminal article, social scientists continue to measure attitudes, and those who do so remain convinced that "Attitudes can be measured" (Thurstone, 1928). But many practical problems remain, and new perspectives have been developed. This session addresses researchers working on (new) attitude measures. We are particularly interested in papers using specific research designs (e.g., vignettes, randomized response, implicit association tests, cognitive interviewing, recall and projective methods), scaling properties and techniques (e.g., Guttman, Rasch), and techniques of analysis (e.g., CFA, LCA) to further develop and improve attitude measures or to test the reliability of attitude measures in surveys (Roberts & Jowell, 2008; Jowell et al., 2007; Alwin & Krosnick, 2001; Sirken, Herrmann & Schaechter, 1999; Krebs & Schmidt, 1993). Empirical reports on projects involving question testing and piloting of attitude measures are particularly welcome. The session is thus strongly interested in practical aspects of measurement in concrete research projects. Papers may address relevant substantive aspects but should have a strong focus on methodological innovations. The scope of the papers may be national, subnational, or cross-national/cross-cultural.

Psychological short scales for survey research: advantages and potential limitations - Professor Beatrice Rammstedt
Psychological variables are becoming increasingly attractive for survey researchers who strive to explain diverse social, economic, and political phenomena. To obtain good psychometric quality, the majority of psychological measures contain multiple items for assessing a single construct. However, using multiple items is often not feasible in large-scale social surveys due to limitations of time and money. In addition, longer scales have been found to be associated with higher rates of attrition and refusal (Stanton et al., 2002). Thus, researchers have begun to shorten already established scales or to develop new, shorter scales for survey research. We invite researchers from different disciplines to share and discuss their experiences using short scales in surveys. The symposium is open to a wide range of topics in this field of research. For example, presentations may address the development and validation of short scales for specific constructs, different methodological approaches to the challenges of short-scale construction, or comparisons of short and long scales.

Quality prediction and improvement of survey questions using SQP - Dr Willem Saris
In 2012, the program SQP 2.0 was released. The program contains a database of more than 4,000 survey questions whose quality is known on the basis of MultiTrait-MultiMethod (MTMM) experiments.
In addition, users can enter their own questions into the system and, after coding the characteristics of these questions, obtain a prediction of their quality. These predictions are based on the relationship between the quality of the 4,000 questions involved in the MTMM experiments and the characteristics of those questions.
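As a purely illustrative sketch of this idea (not the actual SQP prediction algorithm, which is considerably more sophisticated), the Python snippet below fits a simple regression of known question quality on a few coded characteristics and uses it to predict the quality of a newly coded question; all characteristics, values, and coefficients are hypothetical.

```python
# Purely illustrative: a simple linear model relating coded question
# characteristics to MTMM-based quality estimates, in the spirit of the
# SQP approach. The real SQP prediction model is considerably more
# sophisticated; all characteristics and values here are hypothetical.
import numpy as np

# Hypothetical coded characteristics for questions with known quality:
# [number of answer categories, all categories labelled (0/1),
#  question length in words]
X = np.array([
    [5, 1, 12],
    [11, 0, 25],
    [4, 1, 9],
    [7, 0, 18],
], dtype=float)
quality = np.array([0.72, 0.55, 0.78, 0.61])  # hypothetical MTMM quality estimates

# Fit an ordinary least squares model with an intercept
X_design = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(X_design, quality, rcond=None)

# Predict the quality of a newly coded question (intercept + characteristics)
new_question = np.array([1.0, 5, 1, 14])
predicted_quality = float(new_question @ coef)
print(f"Predicted quality: {predicted_quality:.2f}")
```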
Several courses have been given to inform people about the possibilities and use of the program. In these courses, the emphasis was on the effect of measurement errors on substantive results of survey research and on the fact that one can correct for these errors if one knows the quality of the questions.
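A minimal sketch of such a correction, assuming quality is expressed as the squared quality coefficient (so that the standard correction for attenuation divides the observed correlation by the product of the two quality coefficients), might look as follows; the numbers are hypothetical.

```python
# Minimal sketch: correcting an observed correlation for measurement error,
# assuming each question's quality is given as the squared quality
# coefficient. All numbers are hypothetical.
import math

def corrected_correlation(observed_r: float, quality_1: float, quality_2: float) -> float:
    """Divide the observed correlation by the product of the two quality
    coefficients (the square roots of the quality estimates)."""
    return observed_r / (math.sqrt(quality_1) * math.sqrt(quality_2))

# Example: an observed correlation of 0.30 between two questions with
# predicted qualities of 0.65 and 0.70
print(round(corrected_correlation(0.30, 0.65, 0.70), 3))  # -> 0.445
```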
Now that so many people are familiar with this approach, we think it would be interesting to bring together researchers who have used the program to discuss the advantages and disadvantages of its present version. We therefore invite people who have applied this approach in their research to submit an abstract of their presentation. Not only methodological papers are of interest, but also applications that show the differences in results with and without correction for measurement error.