Friday 21st July, 09:00 - 10:30 Room: Q2 AUD3


Benefits and Challenges of Open-ended Questions 1

Chair Dr Evi Scholz (GESIS)
Coordinator 1 Mrs Cornelia Zuell (GESIS)

Session Details

Open-ended questions in surveys can provide insight into respondents’ understanding of concepts, ideas, or issues. Compared with closed questions, however, the effort needed to prepare, code, and analyse data from open-ended questions is considerable. Open-ended questions are therefore less popular in general population surveys than closed questions. While much methodological research has been conducted on closed survey questions, open-ended questions are rarely covered in methodological terms. However, the increasing number of access panel web surveys offers the chance of more intensive use of open-ended survey questions and more investigation of related methodological aspects.
Recent research on open-ended questions examines, for example, mode effects or the length of answers as a quality indicator for responses; other research deals with reasons for non-response. The quality of answers to open-ended questions is a source of survey error that, if driven by factors other than randomness, results in biased answers and puts the validity of the data into question. This source of error is often disregarded in substantive analyses, which challenges their value.
The proposed session aims to help fill that gap. We welcome papers on open-ended questions referring to
a. Use of open-ended questions,
b. Typology of open-ended questions,
c. Mode effects,
d. Design and design effects, e.g., question order or position in a questionnaire,
e. Coding techniques and their challenges,
f. Response behaviour,
g. Effects of response and non-response,
h. Bias analyses,
i. Comparison of software for textual data analysis,
j. Analysis techniques,
k. Any other topic that addresses quality or assesses the value of open-ended questions and their answers.
We also welcome papers that investigate other methodological aspects, e.g., comparative aspects (general population surveys vs. special sample surveys; response behaviour regarding open-ended vs. closed questions for the same topic; or cross-cultural differences in response behaviour to open-ended questions).

Paper Details

1. Construct Equivalence of Left-right Scale Placement in a Cross-national Perspective
Mrs Cornelia Zuell (GESIS)
Dr Evi Scholz (GESIS)

Equivalence in survey design and implementation is one of the core issues in cross-national survey research. Equivalence is required at various stages of the design and implementation process and might relate, for example, to sampling, to survey mode, or to the understanding of question texts and answer scales.
Construct equivalence concerns the theoretical validity of concepts measured by survey questions and item batteries. It is a prerequisite for meaningful cross-national analyses and comparisons: because respondents are socialized in different political, social, and cultural contexts, the same interpretation of concepts cannot be taken for granted.
Construct equivalence can be tested in several ways, for example by country-specific expert judgement, by focus groups, by cognitive interviews, by statistical tests of item batteries, or by asking respondents, via open-ended questions, about their associations with the terms in question.
The paper addresses construct equivalence of the left-right scale in a cross-national perspective. The left-right scale is a standard question used in many surveys to measure ideological orientation in a minimalist way. However, the theoretical concepts related to left and right might differ across countries. Variation in the understanding of left and right is an issue for survey research if it varies systematically with other variables and across contexts. Systematically different understanding might result in incomparable self-placement on the left-right scale and thus challenge its validity.
While cognitive or focus group interviews are valuable sources for identifying understanding problems, they might not be sufficient to find all problems because of the low number of interviewees. Using open-ended questions in a survey with several hundred respondents offers additional options.
To test for construct equivalence and whether the left-right scale is understood in a similar way in a cross-national context, we asked about respondents’ individual associations with the terms left and right, using open-ended probe questions in an experimental online survey fielded in Canada, Denmark, Germany, Hungary, Spain, and the U.S. in 2011, with more than 3800 respondents in total. We automatically coded the open-ended answers using an extensive coding scheme covering more than 250 different aspects associated with left and right, and tested whether the same empirical relations and ideological dimensions can be found across countries. Similarity in this respect is interpreted as evidence supporting the hypothesis of measurement equivalence.
In a first step of the cross-national analyses, we concentrate on the ranking of answer frequencies and on the link between left-right self-placement and the open-ended answers.
Results of this analysis show that respondents from different countries do not have the same ideas in mind when considering what left and right mean to them. These results challenge a direct comparison of responses to the left-right scale across countries: because responses have different meanings in different cultural contexts, conclusions based on such comparisons might be wrong.
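
As an illustration of the kind of automatic coding and frequency ranking described in this abstract, the minimal Python sketch below assigns dictionary-based codes to open-ended answers and ranks code frequencies by country. The coding-scheme fragment, the file name and the column names are hypothetical placeholders, not the authors' actual materials or pipeline.

    import pandas as pd
    from collections import Counter

    # Hypothetical fragment of a coding scheme: code -> keywords that trigger it.
    # The authors' actual scheme covers more than 250 aspects of left and right.
    CODING_SCHEME = {
        "social_justice": ["equality", "solidarity", "welfare"],
        "conservatism": ["tradition", "church", "family values"],
        "economy": ["market", "taxes", "business"],
    }

    def code_answer(text):
        """Return the set of codes whose keywords appear in one open-ended answer."""
        text = text.lower()
        return {code for code, words in CODING_SCHEME.items()
                if any(word in text for word in words)}

    # Hypothetical input: one open-ended answer per respondent, plus country and
    # left-right self-placement (e.g. 0 = left ... 10 = right).
    df = pd.read_csv("left_right_probes.csv")  # columns: country, answer, lr_scale
    df["codes"] = df["answer"].fillna("").map(code_answer)

    # Rank code frequencies within each country (the first analysis step).
    for country, group in df.groupby("country"):
        counts = Counter(code for codes in group["codes"] for code in codes)
        print(country, counts.most_common(5))

    # Link the codes to left-right self-placement: mean self-placement of
    # respondents mentioning each code.
    for code in CODING_SCHEME:
        mentioned = df["codes"].map(lambda codes: code in codes)
        print(code, round(df.loc[mentioned, "lr_scale"].mean(), 2))

Comparing such country-specific code rankings and mean placements is one simple way to probe whether the same associations underlie left-right self-placement across countries.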


2. The challenges of measuring informal care among children and young people
Dr Martina McKnight (Queen's University Belfast)
Dr Grace Kelly (Queen's University Belfast)
Dr Dirk Schubotz (Queen's University Belfast)


Young Life and Times (YLT) and Kids Life and Times (KLT) are annual cross-sectional postal surveys of 16 year olds and 10/11 year olds respectively, undertaken in Northern Ireland since 2003. Both surveys are run by ARK, a joint initiative between the two universities in Northern Ireland, and are widely used by government and voluntary sector organisations to monitor policy and young people’s attitudes on a wide range of issues.

This presentation uses YLT and KLT survey findings to discuss the methodological issues involved in identifying the true extent of caring among children and young people in public attitudes surveys. We focus on question wording to highlight some of the challenges involved and the approaches taken to address this important issue.

Questions about the extent and nature of caring by children and young people were included in YLT in 2010 and in KLT in 2011. Responses indicated that a higher percentage of younger children than older adolescents identified themselves as young carers with caring responsibilities. Investigation of the open-ended responses in which KLT respondents who defined themselves as carers described the tasks they carried out indicated that their understanding of what constituted ‘caring’ might not fall within the definition of a ‘young carer’ that the survey wished to capture. It was therefore decided that any future questions on caring would involve young carers in the survey design.

This was the case when caring questions were included in YLT and KLT in 2015. A group of young carers was consulted on how best to introduce the questions on caring and on the actual question wording, so that what we meant by ‘young carers’ was understood more clearly by both 16 year olds and 10/11 year olds. The suggestions put forward by the young carers were used to refine the questions included in the 2015 KLT and YLT surveys and to formulate an introduction to the caring questions suitable for both age groups.

Despite the consultation with the young carers group, responses indicated that a lack of clarity about what constituted a young carer remained. This was reflected in the fact that, in 2015, even more younger children than older adolescents identified as young carers, compared with the earlier surveys. This presentation reflects on these findings.


3. Evaluating mode effects in answers to sensitive open-ended questions
Mrs Rosa Sanchez Tome (University of Lausanne)
Professor Caroline Roberts (University of Lausanne)
Professor Dominique Joye (University of Lausanne)

There is growing interest in factors that contribute to vulnerability across the life course, yet the concept of vulnerability is difficult both to define and to measure. A person is vulnerable when they are at risk of experiencing a source of stress (e.g., a major life event) while lacking the resources to cope and recover. For this reason, open-ended questions work particularly well, offering deep insight into the experiences that mark a person’s life course. Still, their use in surveys poses methodological challenges, as they place greater cognitive demands on respondents than most closed-ended questions. This may lead some respondents to skip the question or to give only cursory responses. The mode of data collection also has an impact on the reporting of sensitive information. For example, respondents often report fewer negative events when talking to an interviewer than when completing the questionnaire themselves, while self-administered modes require greater effort to respond carefully and honestly. Such effects could interact with respondent characteristics to either exacerbate or attenuate differences in response across modes.

Our aim in this paper is to investigate the impact of mode effects on the quality of responses given to sensitive open-ended questions. Using data from a mixed-mode experiment, we examine differences in the length and detail of respondents’ answers across web, paper, and telephone modes. Moreover, we explore whether respondents who skip open-ended questions differ significantly from those who respond, and whether the content is substantively different (more positive or more negative) across modes. Preliminary results show that the telephone sample has a lower level of item nonresponse than the self-completion samples. In addition, telephone respondents give less complete answers, which tend to be more positive than those of respondents in other modes. We discuss these results in relation to respondents’ characteristics.
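
As a rough illustration of the kind of comparison described in this abstract, the short Python sketch below computes item nonresponse and answer length by mode; the file name and column names are hypothetical placeholders, not the study's actual data or analysis code.

    import pandas as pd

    # Hypothetical input: one row per respondent, with the mode of data
    # collection and the verbatim answer to a sensitive open-ended question.
    df = pd.read_csv("mixed_mode_answers.csv")  # columns: mode, answer

    answer = df["answer"].fillna("")
    df["answered"] = answer.str.strip().ne("")    # False = item nonresponse
    df["n_words"] = answer.str.split().str.len()  # length as a crude proxy for detail

    summary = df.groupby("mode").agg(
        item_nonresponse=("answered", lambda s: 1 - s.mean()),
        mean_words=("n_words", "mean"),
    )
    print(summary)

A further step, not shown here, would be to code the content of the answers (e.g., their positive or negative tone) and compare it across modes in the same way.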


4. Open-ended questions - benefit or burden
Dr Marika Wenemark (PhD)

Declining response rates clearly show that people’s willingness to respond to surveys is falling. But does that also mean that people’s engagement when they do take part in surveys is lower? One of the most common comments about a questionnaire addressing young people’s life and health was a wish for more open-ended questions, giving respondents the opportunity to explain and reflect more deeply on the issues covered by the questionnaire.

Open-ended questions may shift some of the power from the researcher to the respondent and give the respondent better opportunities to provide correct and truthful information. This may have a positive effect on the relationship between respondent and researcher, but it may also increase the response time and cognitive burden for respondents. The researcher gains qualitative information that may increase the value of the survey results but that also requires more effort to analyse.

This presentation will give examples of open-ended questions used in two different surveys answered by more than 30 000 young people and adults. What kinds of open-ended questions were most successful, and how did respondents react and respond to them? Did people respond differently to an identical open-ended question in 2002 and in 2016? Did open-ended questions work differently, or give different answers, in paper and web formats?