Wednesday 17th July 2013, 11:00 - 12:30, Room: No. 20

Open-ended Questions: Methodological Aspects, Use and Analysis 2

Convenor: Mrs Cornelia Zuell (GESIS)
Coordinator 1: Dr Evi Scholz (GESIS)
Coordinator 2: Professor Matthias Schonlau (University of Waterloo)

Session Details

Open-ended questions in surveys serve to look into respondents' understanding of ideas, issues, etc. Compared with closed questions, the effort required to prepare, code, and analyse data from open-ended questions is considerable. Thus, open-ended questions are not nearly as popular as closed questions. However, the growing number of web surveys offers a chance to investigate various aspects of open-ended questions.
While much methodological research has been conducted on closed survey questions, open-ended questions have rarely been investigated in methodological terms. Recent research on open-ended questions examines, e.g., mode effects or the length of answers as a quality indicator for responses. Other research covers reasons for (non-)response. The quality of answers to open-ended questions is one source of survey error that, if driven by factors other than randomness, will result in biased answers and call the validity of the data into question, a problem often disregarded in substantive analyses.
The proposed session aims to help fill that gap. We welcome papers on open-ended questions addressing
a. A comparison of software for textual data analysis,
b. The use of open-ended questions,
c. Analysis techniques,
d. Typology of open-ended questions,
e. Mode effects,
f. Design effects, e.g., question order or position in a questionnaire,
g. Effects of response or non-response,
h. Bias analyses, or
i. Any other topic that addresses quality or assesses the value of open-ended answers.
We also welcome papers that investigate other methodological aspects, e.g., a comparison of response behaviour to open-ended questions in general population surveys vs. in special sample surveys; a comparison of response behaviour to open-ended vs. closed questions on the same topic; or an investigation of cross-cultural differences in response behaviour to open-ended questions.


Paper Details

1. Analyzing the correlational structure of value-systems by means of open-ended questions

Mr Florian M. Bader (Zeppelin University)

Open-ended questions are an important tool in the toolbox of survey methodology. They are used especially when the researcher knows little about, or wants to learn more about, the possible answers to a research question. Consequently, most analyses of data from open-ended questions focus on describing frequency distributions.

Respondents were asked an open-ended question about personal values, in which they could report any number of important personal values; the number was not fixed in advance. Using these data, I will show that answers to open-ended questions can be used to test empirical implications of established theories concerning the correlational structure of value-systems.

Besides the correlations, defined as an accumulation of co-occurrences of two values within a respondent's answer, I will also show that further information can be gained by analyzing the sequence in which the values are mentioned in each response.
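
A minimal sketch of this kind of accumulation, in Python with hypothetical data (the value labels and coded answers are illustrative, not taken from the study):

    # Accumulate co-occurrences of value pairs within an answer, and how
    # often one value is mentioned before another (hypothetical data).
    from collections import Counter
    from itertools import combinations

    # Each respondent's answer, coded as an ordered list of mentioned
    # values; order of mention is preserved for the sequence analysis.
    answers = [
        ["family", "health", "freedom"],
        ["health", "success"],
        ["family", "health"],
    ]

    # Co-occurrence counts: how often two values appear in the same answer.
    cooccurrence = Counter()
    for values in answers:
        for a, b in combinations(sorted(set(values)), 2):
            cooccurrence[(a, b)] += 1
    print(cooccurrence[("family", "health")])  # -> 2

    # Sequence counts: how often one value is mentioned before another.
    precedes = Counter()
    for values in answers:
        for i, a in enumerate(values):
            for b in values[i + 1:]:
                precedes[(a, b)] += 1
    print(precedes[("family", "health")])  # -> 2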

The open-ended question is included in the "Ethik-Monitor 2006", a study focusing on personal values and attitudes towards society, government, and political actors (method: 1,000 computer-assisted personal interviews representative of the German electorate). In addition to the open-ended question, the study includes a set of closed question formats on personal values, which will be used for comparison.

By examining the answering process as well as pointing to the substantial benefits of open-ended questions, this paper contributes to a broader understanding of answers to open-ended questions.



2. Analyzing open-ended questions by means of text analysis procedures

Dr Roel Popping (University of Groningen)

Assume one has open-ended questions in a survey and seriously wants to analyze the answers to these questions. Text analysis can then be applied. This talk discusses a number of choices to be made when a thematic text analysis is to be applied. It starts with a classification of types of open-ended questions and the requirements posed by each of these types; here, coding comes immediately into view.

Coding can be performed from an instrumental or a representational perspective. In the instrumental perspective, coding is performed from the point of view of the investigator and can be carried out in a single run of a computer program. In the representational perspective, the point of view of the respondent is acknowledged; the computer can then be used as a management tool, but the coding itself must be performed by a human coder. The choice between these methods depends on what the investigator is looking for and has consequences for how to proceed. When representational coding is applied, questions about interrater reliability must also be addressed.
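
As one concrete instance of such an interrater reliability check, the sketch below computes Cohen's kappa for two human coders; the category labels and codings are hypothetical, and kappa is only one of several measures that could be used:

    # Cohen's kappa: agreement between two coders, corrected for the
    # agreement expected by chance (illustrative sketch, hypothetical data).
    from collections import Counter

    def cohens_kappa(codes_a, codes_b):
        n = len(codes_a)
        # Observed agreement: share of answers both coders coded alike.
        p_observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
        # Chance agreement: sum over categories of the product of the
        # two coders' marginal category shares.
        freq_a, freq_b = Counter(codes_a), Counter(codes_b)
        p_chance = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
        return (p_observed - p_chance) / (1 - p_chance)

    coder1 = ["work", "family", "family", "health", "work", "other"]
    coder2 = ["work", "family", "health", "health", "work", "family"]
    print(round(cohens_kappa(coder1, coder2), 2))  # -> 0.54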


3. Semi-automatic coding of open-ended questions

Professor Matthias Schonlau (University of Waterloo)

Web surveys lend themselves to open-ended questions more readily than other survey modes: respondents' answers tend to be longer, and no transcription of oral responses is needed. However, text data from open-ended questions in surveys are difficult to analyze and are therefore frequently ignored. Yet open-ended questions are important because they do not constrain respondents' answer choices. Where open-ended questions are unavoidable, multiple human coders sometimes hand-code answers into one of several categories, and interrater reliability is computed. At the same time, computer scientists have made impressive advances in text mining that may allow such coding to be automated. Our preliminary work suggests that automated algorithms do not achieve an overall reliability high enough to entirely replace humans. However, text mining algorithms are able to distinguish between text answers that are reliably coded and those where there is considerable uncertainty.
We categorize answers to open-ended questions using text mining algorithms for easy-to-categorize answers and human coders for the remainder. This is illustrated with an open-ended question in which respondents gave advice in a hypothetical situation at a doctor's office.
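
A minimal sketch of this division of labour, assuming scikit-learn and hypothetical training data (the example texts, category codes, and the 0.8 confidence threshold are illustrative; the paper's actual algorithms and cut-off may differ):

    # Semi-automatic coding: the classifier codes answers it is confident
    # about; low-confidence answers are routed to human coders.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    # Hypothetical hand-coded training answers.
    train_texts = ["see another doctor", "get a second opinion",
                   "ask the doctor more questions", "talk it over with the doctor"]
    train_codes = ["second_opinion", "second_opinion",
                   "communicate", "communicate"]

    vectorizer = TfidfVectorizer()
    classifier = LogisticRegression().fit(
        vectorizer.fit_transform(train_texts), train_codes)

    THRESHOLD = 0.8  # assumed cut-off, tuned against human codes in practice
    new_texts = ["I would get a second opinion", "no idea what I would do"]
    probabilities = classifier.predict_proba(vectorizer.transform(new_texts))

    for text, p in zip(new_texts, probabilities):
        if p.max() >= THRESHOLD:
            print("auto-coded:", text, "->", classifier.classes_[p.argmax()])
        else:
            print("route to human coder:", text)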