Tuesday 16th July       Wednesday 17th July       Thursday 18th July       Friday 19th July      

Wednesday 17th July 2013, 09:00 - 10:30, Room: Big hall

Assessing the Quality of Survey Data 3

Convenor Professor Jörg Blasius (University of Bonn)

Session Details

This session will present a series of original investigations into data quality in both national and international contexts. The starting premise is that all survey data contain a mixture of substantive and methodologically induced variation. Most current work focuses primarily on random measurement error, which is usually treated as normally distributed. However, there are many kinds of systematic measurement error; more precisely, there are many different sources of methodologically induced variation, and all of them may strongly influence the "substantive" solutions. These sources include response sets and response styles, misunderstandings of questions, translation and coding errors, uneven standards among the research institutes involved in data collection (especially in cross-national research), item and unit nonresponse, and faked interviews. We consider data to be of high quality when the methodologically induced variation is low, i.e. when differences in responses can be interpreted on the basis of theoretical assumptions in the given area of research. The aim of the session is to discuss the different sources of methodologically induced variation in survey research, how to detect them, and the effects they have on substantive findings.


Paper Details

1. Statistical methods for the detection of falsified interviews in surveys: experience from election polls in Ukraine

Mr Eugen Bolshov (Kyiv International Institute of Sociology)
Miss Marina Schpiker (Kyiv International Institute of Sociology)

Response rates in face-to-face surveys are constantly decreasing. This makes interviewers' work harder, so the temptation to falsify interviews becomes more widespread. The usual practice for controlling data quality is to check a random sample of respondents by phone call or personal visit. Although these methods can confirm or refute the fact that an interview took place, they are ineffective at detecting partial forgery: situations in which only some of the questions were asked, while the rest were filled in by the interviewer. We share the experience of statistical fraud detection that we gained during an election survey in the Kyiv region in the spring of 2012. Most of the procedures we tested have been mentioned in the literature, but as experimental approaches rather than as routine practice of research companies. We used the following methods to distinguish potential falsifiers from regular interviewers: 1) analysis of the variance of variables within each interviewer's subset of the data; 2) analysis of how frequently each interviewer used the options "hard to say" and "other"; 3) analysis of unlikely combinations of answers; and so on. This gave us a set of statistical indicators for each interviewer that allowed us to divide them into regular interviewers and falsifiers. In our case, interviewers whose work was labeled as suspicious by the statistical procedures had a higher share of faked data after field control. Statistical methods can thus make the control of interviewers' work more focused, more effective and less expensive.
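The abstract does not give the exact computations; as an illustrative sketch, the first two per-interviewer indicators it lists (within-interviewer answer variance and frequency of "hard to say" responses) might be computed as follows, with all column names, codes and thresholds hypothetical:

```python
import numpy as np
import pandas as pd

# Hypothetical data: one row per interview, with the interviewer's ID
# and two categorical answer columns (names and codes are illustrative).
rng = np.random.default_rng(0)
data = pd.DataFrame({
    "interviewer": rng.choice(["A", "B", "C"], size=300),
    "q1": rng.integers(1, 6, size=300),          # 5-point scale item
    "q2": rng.choice([1, 2, 3, 98], size=300),   # 98 = "hard to say"
})

# Indicator 1: variance of an item within each interviewer's interviews.
# Falsified data often show unusually low (or high) answer variance.
variance_index = data.groupby("interviewer")["q1"].var()

# Indicator 2: share of "hard to say" (code 98) answers per interviewer.
# Falsifiers tend to avoid non-substantive options such as "hard to say".
dk_share = data.groupby("interviewer")["q2"].apply(lambda s: (s == 98).mean())

indicators = pd.DataFrame({"var_q1": variance_index, "dk_share": dk_share})

# Flag interviewers whose indicators are extreme relative to their peers
# (here: more than 1.5 standard deviations below the mean on either score).
z = (indicators - indicators.mean()) / indicators.std()
indicators["suspicious"] = (z < -1.5).any(axis=1)
print(indicators)
```

In practice such flags would only focus the follow-up field control (call-backs, revisits) on suspicious interviewers, not replace it.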



2. Quality of cross-country data: Assessing through Interviewer Impressions (Case of the Caucasus Barometer survey)

Dr Tinatin Zurabishvili (CRRC-Georgia)
Dr Heghine Manasyan (CRRC-Armenia)

The Caucasus Barometer (CB) is an annual cross-country data collection project initiated by the Eurasia Partnership Foundation's Caucasus Research Resource Centers (CRRC). The CRRC teams in Armenia, Azerbaijan and Georgia have been collecting reliable household data in the region since 2004.

One of the ways we try to better understand the level of cross-country comparability of the data is to collect interviewers' impressions of the respondents. At the same time, to minimize the interviewers' possible impact on the CB survey, we take appropriate measures, including limiting the number of interviews conducted per interviewer (normally between 30 and 40).

In our paper, we analyze the relationship between the interviewers' assessments of respondents and the respondents' answers to some of the most sensitive questions of the 2011 and 2012 Caucasus Barometer. Based on several variables from the Interviewer Assessment Form, an "attitude index" will be created that measures how the respondents were classified by the interviewers. Special emphasis will be placed on assessing respondents' honesty.

We are interested in revealing systematic patterns in respondents' answers, and in ways of adjusting answers to the main questions, based on the correlation between the "attitude indexes" and the share of cases in which respondents try to avoid answering or give answers that are not sincere.

These findings will help us add a new dimension to CB data analysis, and we expect them to provide more insight into the interpretation of CB data.
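The abstract does not specify how the index is built; a minimal sketch, assuming an additive index over a few hypothetical Interviewer Assessment Form items correlated with a hypothetical answer-avoidance flag, could look like this:

```python
import numpy as np
import pandas as pd

# Hypothetical data: interviewer assessment items on a 1-5 scale and a
# flag for whether the respondent avoided a sensitive question.
rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "friendly": rng.integers(1, 6, size=n),
    "sincere": rng.integers(1, 6, size=n),
    "interested": rng.integers(1, 6, size=n),
    "avoided_answer": rng.integers(0, 2, size=n),  # 1 = refused / "don't know"
})

# A simple additive "attitude index": the mean of the assessment items.
df["attitude_index"] = df[["friendly", "sincere", "interested"]].mean(axis=1)

# Correlate the index with answer avoidance on the sensitive item.
r = df["attitude_index"].corr(df["avoided_answer"])
print(f"correlation between attitude index and avoidance: {r:.3f}")
```

With real CB data the interesting question is whether this correlation differs systematically across the three countries, which would signal a comparability problem.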



3. The Validity of Scenario-Techniques in the Analysis of Prosocial Behaviors - Effects of Vignette Form and Social Desirability

Professor Stefanie Eifler (University of Halle)

Scenario-techniques have frequently been applied to the analysis of so-called sensitive topics. However, it has so far remained an open question whether vignettes lead to more valid measures of sensitive topics. A variety of research strategies have been employed to assess aspects of the validity of scenario-techniques. In this paper, various forms of prosocial behavior are taken as an example. It is assumed that subjects tend to overreport the probability and frequency of prosocial behaviors, depending on their tendency to answer in a socially desirable way. The present study is based on Paulhus' concept of self- and other-deception. Abelson's Script Theory allows the assumption that the overreporting of prosocial behaviors depends not only on the subject's realisation of self- and other-deception but also on formal features of the vignettes. This assumption is analysed on the basis of an experimental study (2×3 design). Computer-assisted telephone interviews (CATI) were carried out with adult inhabitants of a German city aged 18 to 65 (n=648). Analyses of variance and regression analyses yielded the result that the probability and frequency of prosocial behaviors depended on formal features of the vignettes as well as on the subjects' tendency to answer in a socially desirable way. However, the patterns of these results varied with the forms of prosocial behavior analysed within the framework of the present study. The results are discussed with regard to the underlying theoretical assumptions.


4. Construct validation of competency assessment through 360º questionnaires (informant views) and behavioral observation from critical incident interviews

Professor Joan Manuel Batista (ESADE)
Professor Richard Boyatzis (Case Western Reserve University)
Mr Ricard Serlavos (ESADE)
Ms Basak Canboy (ESADE)

The 360º questionnaire Emotional and Social Competencies Inventory - University Edition (ESCI-U) and critical incident interviews (CII) are used in a leadership development course to measure competencies from a behavioral perspective. This study attempts to establish construct validity using these two instruments to measure the same competencies. We use the 360º questionnaires of 100 students who also participated in voluntary interviews, which were coded by two trained coders with an inter-coder reliability of > 0.7.
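The abstract reports an inter-coder reliability above 0.7 without naming the statistic; one common choice for two coders assigning nominal codes to the same items is Cohen's kappa, sketched here with purely hypothetical competency codes:

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa for two coders' nominal codes on the same items."""
    assert len(codes_a) == len(codes_b)
    n = len(codes_a)
    # Observed agreement: share of items both coders coded identically.
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # Chance agreement: expected overlap given each coder's marginal code use.
    freq_a = Counter(codes_a)
    freq_b = Counter(codes_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical example: two coders classifying 10 interview segments.
coder1 = ["lead", "team", "lead", "self", "team", "lead", "self", "team", "lead", "self"]
coder2 = ["lead", "team", "lead", "self", "lead", "lead", "self", "team", "lead", "self"]
print(round(cohens_kappa(coder1, coder2), 3))
```

Kappa corrects raw percent agreement for the agreement expected by chance, which is why it is often preferred to a simple match rate when reporting a threshold such as 0.7.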