



Friday 21st July, 11:00 - 12:30 Room: N AUD5


Reflecting on Failed Research

Chair: Professor Martin Weichbold (University of Salzburg)
Coordinator 1: Professor Wolfgang Aschauer (University of Salzburg)
Coordinator 2: Professor Nina Baur (Technical University Berlin)
Coordinator 3: Dr Dimitri Prandner (University of Linz)

Session Details

At conferences we usually hear about remarkable outcomes in scientific research: colleagues report how they achieved striking results using sophisticated research designs and complex analytical tools. But everyone who has ever conducted research knows that not everything runs smoothly during the research process. Dealing with unforeseen difficulties is, of course, an essential part of empirical research, but sometimes we have to admit that certain decisions were wrong, that a research strategy did not work out, or that the whole research project failed in the end.
This session provides space to discuss research attempts that ultimately disappeared in a drawer. The aim is not to make anyone look like a fool or to satisfy others’ curiosity, but to reflect on the causes of failed research and to learn from mistakes. As failure is not always a matter of researchers’ incompetence but can have many causes, reporting what happened – and why – may be a step towards preventing others from making the same mistakes.
In our proposal the term “failed” should be understood in a broad sense: reasons can range from practical matters (difficult access to the field, problems with funding, external incidents, …) and methodical problems (a poor questionnaire, an inappropriate survey period, sampling difficulties, problems with interviewers, …) to methodological misconceptions (an incoherent or overly complex research design, incompatibilities between different parts of a study, …) and theoretical issues (difficulties in implementing theoretical concepts in empirical research, …).
We encourage researchers to present their reflections on failed research projects. The session should provide an open platform to discuss difficulties in our daily research activities and to encourage a new code of practice: not to ignore failed research but to learn from it.



Paper Details

1. The trickiness of conducting interviews: when aggression turns up during interviews
Dr Heidi Siller (Medical University of Innsbruck/Women's Health Centre)
Professor Margarethe Hochleitner (Medical University of Innsbruck/Women's Health Centre)

Before conducting interviews, thorough preparation and familiarity with interviewing techniques are essential. But how can one prepare oneself (and others) for unexpected turns during interviews? When do we consider an interview a failure? What can we learn from such interviews, and how can we incorporate these experiences when teaching interviewing techniques? Research has addressed pitfalls in conducting interviews, e.g. interviewees having their own agenda and thus narrating something different from what the interviewer asked, or remaining silent and communicating mostly nonverbally, thereby generating less text for analysis.
In this paper, pitfalls relating to emotional challenges during interviews are reflected upon, specifically interviewee aggression. One interview and one focus group setting from two different projects serve as examples, and reflection proceeds in several steps to identify and generalise key insights. In the interview situation, the interviewee turned aggressive and began to question both the interviewer and the interview itself; here, the relevance of power imbalance, age, gender and interviewer preparation is discussed. In the focus group, the topic was what it means to be a second-generation migrant and what resources and benefits derive from that. At first, participants were excited about the planned meeting of first-, second- and third-generation migrants. During the discussion, however, they became disillusioned and frustrated because they felt rejected by society and saw the focus group moderators as representatives of that society.
Analysis of and reflection on challenging interviews start with one’s attitude towards the interviewee and the interview situation: power imbalance, respecting boundaries, the influence of gender and age, preparation before interviewing others, and the effect of the discussion topic on the interviewee or focus group participants. Knowledge of, and practice in, containing the interviewee’s (and interviewer’s) emotions are necessary. Aggression during interviews can happen. In some situations it is possible to work with this aggression and reflect on it during the interview or focus group, giving participants an opportunity to express such feelings; in others, the interview has to be adjusted so that both parties can part on good terms. Analysing and reflecting on interview situations is an essential part of the research process, and challenging interview situations are good examples for teaching students about pitfalls and how to cope with them.


2. First-mover disadvantage? Adapting a new measure from social identity research for the measurement of party identification
Dr Sabrina Jasmin Mayer (University of Duisburg-Essen)

Since its introduction in the 1950s, party identification (PID) has become one of the most widely used key concepts in election studies. In the original notion, PID denotes a long-standing psychological affiliation with a political party (Campbell et al., 1960). It is usually measured with a single-item question that has serious flaws: most standard questions focus on the affective dimension of PID and do not allow measuring multiple identifications (Weisberg, 1999). Several authors have tried to introduce new PID measures by adapting established instruments from social identity research (e.g. Greene, 1999; Green et al., 2002; Bankert, Huddy, and Rosema, 2016). However, as these instruments consist of several items, they require considerably more survey time, especially when they are asked for all major parties to tap multiple identifications.
Recently, the single-item social identification (SISI) measure, which is supposed to reliably measure in-group identification, was introduced by Postmes et al. (2013) and further validated by Reysen et al. (2013). Adapting SISI for the measurement of PID (SISI-PID) would yield a valid, social-psychologically founded measure that could track multiple identifications without requiring as much survey time as previous attempts that relied on larger scales.
The SISI-PID was first included in a German online survey (November 2013, N=1,000) with quotas based on age and state according to the German Microcensus. The first results were promising: about 68 percent of all respondents were classified as party adherents by the standard question as well as by the new measure. The two measures correlated moderately (r = .45***), which makes sense from a theoretical point of view, as the German standard question often taps mere party sympathizers.
Second, the SISI-PID was included in two waves of the GESIS Panel (June 2015 and June 2016, N=3,620), a probability-based mixed-mode access panel that is representative of the German population. However, the results for the share of party identifiers show considerable differences: whereas about 81 percent of respondents are classified as party adherents by the standard question, only 40 percent are classified as such by the new measure (r = .26***). This difference cannot be explained by a change in survey mode, as the share among online participants is even lower than among offline participants (38 versus 45 percent).
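To illustrate the kind of comparison reported above, the following minimal Python sketch shows how the share of adherents under each measure and the correlation between the two classifications could be computed. It is not part of the original study: the variable coding, the 1-7 SISI-PID scale and its cut-off are illustrative assumptions, and the arbitrary simulated data will not reproduce the reported figures.

import numpy as np

rng = np.random.default_rng(0)
n = 3620  # GESIS Panel sample size reported above

# Hypothetical coding (an assumption, not the study's actual scheme):
#   standard question: 1 = names a party, 0 = no party
#   SISI-PID: agreement rating on a 1-7 scale, adherent if rating >= 5
standard = rng.binomial(1, 0.81, n)            # ~81% adherents, as reported
sisi_rating = rng.integers(1, 8, n)            # placeholder ratings
sisi_adherent = (sisi_rating >= 5).astype(int)

print("Share of adherents (standard):", standard.mean())
print("Share of adherents (SISI-PID):", sisi_adherent.mean())
print("Pearson r between measures:", np.corrcoef(standard, sisi_adherent)[0, 1])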
As PID is usually the strongest predictor of vote choice, it is pointless to propose a measure that puts the share of adherents below 50 percent when the standard question finds a much larger share. In this paper, I therefore aim to identify reasons for the difference in the share of adherents between the two surveys. One possible reason is that results from the SISI-PID are more easily affected by short-term factors such as election periods. As the results of the second wave will be released in December 2016, I will be able to tell whether these differences are consistent.