ESRA 2017 Programme

Tuesday 18th July      Wednesday 19th July      Thursday 20th July      Friday 21st July


Thursday 20th July, 11:00 - 12:30 Room: Q2 AUD1 CGD

Assessing the Quality of Survey Data 4

Chair Professor Jörg Blasius (University of Bonn)

Session Details

This session will present a series of original investigations of data quality in both national and international contexts. The starting premise is that all survey data contain a mixture of substantive and methodologically-induced variation. Most current work focuses primarily on random measurement error, which is usually treated as normally distributed. However, there are many kinds of systematic measurement error, or more precisely, many different sources of methodologically-induced variation, all of which may strongly influence the “substantive” solutions. These sources include response sets and response styles, misunderstandings of questions, translation and coding errors, uneven standards between the research institutes involved in data collection (especially in cross-national research), item and unit non-response, and faked interviews. We consider data to be of high quality when the methodologically-induced variation is low, i.e. when differences in responses can be interpreted on the basis of theoretical assumptions in the given area of research. The aim of the session is to discuss the different sources of methodologically-induced variation in survey research, how to detect them, and the effects they have on substantive findings.

Paper Details

1. Same Question, Different Answer: Discrepancies in Spousal Reports on the Division of Labor
Mrs Miriam Truebner (Department of Sociology, University of Bonn)

The social sciences offer an extensive literature on processes among dyads, couples, or parents and their children. In recent years it has become common to base analyses on the reports of both social actors, especially when individually varying variables are used, e.g. affection or relationship satisfaction. However, discrepancies are still found even for information that should not differ between members of the dyad. Regarding the division of labor, the reports of women and men often differ from those of their partners. We analyze the mechanisms behind this reporting bias on the division of labor by applying generalized regression models.

2. Explaining the decline in subjective well-being over time in panel data
Dr Katia Iglesias (University of Neuchâtel)
Mrs Pascale Gazareth (University of Neuchâtel)
Professor Christian Suter (University of Neuchâtel)

Traditional income-based indicators of economic welfare no longer seem satisfactory for measuring happiness. Nowadays, subjective measures of well-being are often suggested as an alternative to traditional indicators. Indeed, individuals’ self-reports about their lives are increasingly considered relevant both for assessing quality of life and for informing policy decisions.
Subjective well-being (SWB) indicators are useful to compare different situations over time or different places on the individual and the societal levels. This explains why they are increasingly integrated into surveys allowing cross-country comparisons and comparisons over time (like the Swiss Household Panel (SHP) does).
Using SHP data and measuring SWB through a global question, we found a significant decline in SWB between 2000 and 2015. The aim of this contribution is to examine to what extent this decline reflects a genuine transition to lower levels of SWB or is instead caused by specific methodological artefacts. We identified four possible methodological issues: non-random attrition (NRA), panel conditioning (PC), sample refreshment, and aging of participants.
Because of its structure, SHP data are particularly appropriate for examining these issues, with special attention to panel conditioning on several measures of SWB (i.e. the global question versus questions by life domains). The SHP has been administered annually since 1999. A first sample was randomly selected in 1999, a second in 2004, and a third in 2013.
First, we found that attrition was selective in the predictors of SWB across all waves and that respondents leaving the panel were overrepresented in the categories of predictors associated with lower SWB. Second, panel conditioning affected the SWB measure in the first five waves for the global question, while no specific patterns were found for the questions by life domains. Third, we found higher mean SWB scores in the new samples than in the old ones. Fourth, aging modified the characteristics of the sample (for example, an increase in inactive persons or a decrease in persons with low education), which affected the levels of SWB. Thus, SWB and its determinants were affected by NRA, PC, refreshment, and aging. Moreover, it proved difficult or impossible to distinguish these methodological issues from one another (aging from PC, or refreshment from PC, for example), or to propose methodological “remedies” for them.
Finally, our research showed that once these methodological issues were controlled for, SWB no longer declined over the last fifteen years in Switzerland.

3. Move over Cronbach alpha – Here comes Culturometrics Q-Correlation!
Professor Beatrice Boufoy-Bastick (The University of the West Indies)

Background: It is not for nothing that Lee Cronbach’s C-alpha is perhaps the most cited statistic in the world, even though many researchers do not always heed his limiting provisions. It conveniently summarises the consistency of a whole dataset. However, it is very sensitive to common types of methodologically-induced variation, for example what we coin the ‘happiness vs. bad hair-day’ effect: if half of the respondents are happy and so increase their scores by one, while the other half are having a ‘bad hair day’ and so decrease their responses by one, C-alpha will increase greatly. C-alpha is also sensitive to what we coin here the ‘almost brain-dead response set’, where the effort from the respondent falls far below the cognitive load demanded by the questionnaire, resulting in unconsidered repetitive responses; the effect is to send C-alpha through the roof. However, a particularly obnoxious methodologically-induced variation that especially impacts our cross-cultural work is an effect we term ‘cultural insult’. This results when the survey questions and responses are filtered through the different but consistent cultural values of multi-cultural, conscientious, high-effort respondents. The resulting response variation reduces C-alpha and is insultingly interpreted as a lack of reliability. Yet if those same questions are given again to those same high-quality respondents, the results will be highly consistent, with a matching and misleading low C-alpha. It was this multi-cultural observation of cultural insult that inspired the creation of the Culturometrics Q-Correlation.
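The ‘happiness vs. bad hair-day’ effect described above can be illustrated with a small simulation (a sketch of ours, not the authors’ code; the sample size, item counts, and random data are invented for illustration): a uniform per-respondent shift adds between-person variance to every item, which inflates Cronbach’s alpha even when the items themselves are unrelated.

```python
import numpy as np

def cronbach_alpha(data):
    """Cronbach's alpha for a respondents x items matrix."""
    data = np.asarray(data, dtype=float)
    k = data.shape[1]
    item_vars = data.var(axis=0, ddof=1)          # variance of each item
    total_var = data.sum(axis=1).var(ddof=1)      # variance of the total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
# 200 respondents x 10 five-point items, deliberately uncorrelated:
# alpha should be near zero
base = rng.integers(1, 6, size=(200, 10)).astype(float)
alpha_before = cronbach_alpha(base)

# 'happiness vs. bad hair-day': half the respondents shift every answer
# up by one, the other half shift every answer down by one
shift = np.where(np.arange(200) < 100, 1.0, -1.0)[:, None]
alpha_after = cronbach_alpha(base + shift)

print(f"alpha before shift: {alpha_before:.2f}")
print(f"alpha after shift:  {alpha_after:.2f}")
```

The shift adds a common person-level component to all items, so inter-item covariances rise and alpha jumps sharply although no item became a better measure of anything.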
Method: Q-Correlation (Q-corr) is like an extended lie scale: it selects high-effort, conscientious respondents. The most important questions on the survey are abstracted, paraphrased, randomised, and reinserted at the end of the survey or questionnaire, where there is no access to the original questions. For each respondent, the answers to the first set of questions are correlated with the answers to the second set; this is the Q-Correlation. A higher Q-corr indicates more consistent responding. Respondents are sorted from most to least consistent, making the low-effort principled responders apparent; these are removed with successive filters for replicated responses. What remains is a ranked order of consistent, high-effort, conscientious respondents. At this stage it is possible to choose Q-corr cut-points to ensure a minimum required Q-corr, or to select a dataset with any desired level of genuine C-alpha, and to apply other Q-filter quality-selection filters.
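The per-respondent computation described above could be sketched as follows (an illustrative sketch under our own assumptions; the item counts, the noise model for the paraphrased repeats, and the 0.7 cut-point are ours, not part of the published method):

```python
import numpy as np

def q_correlations(original, repeated):
    """Per-respondent Pearson correlation between the original items and
    their paraphrased repeats (rows = respondents, cols = matched items).
    Rows with zero variance (flat-line responding) are assigned 0."""
    a = np.asarray(original, dtype=float)
    b = np.asarray(repeated, dtype=float)
    a_c = a - a.mean(axis=1, keepdims=True)
    b_c = b - b.mean(axis=1, keepdims=True)
    num = (a_c * b_c).sum(axis=1)
    den = np.sqrt((a_c ** 2).sum(axis=1) * (b_c ** 2).sum(axis=1))
    return np.divide(num, den, out=np.zeros_like(num), where=den > 0)

rng = np.random.default_rng(1)
n, k = 6, 8  # 6 respondents, 8 key items repeated in paraphrase
first = rng.integers(1, 6, size=(n, k)).astype(float)
# conscientious respondents answer the paraphrased repeats consistently
# (small noise); the last respondent is a flat-liner
# (the 'almost brain-dead response set')
second = np.clip(first + rng.normal(0, 0.5, size=(n, k)), 1, 5)
first[-1], second[-1] = 3.0, 3.0

q = q_correlations(first, second)
keep = q >= 0.7  # example cut-point; flat-liners get q = 0 here
```

In a real application the cut-point would be chosen against the desired minimum Q-corr, and further filters for replicated responses would be applied before analysis.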
Q-correlation is routinely used in Culturometric projects. It is demonstrated here with data from a Culturometric ‘Fear of Crime’ household survey in Trinidad (N=348 households). The households formed a random sample, stratified by population density and ethnicity from 10 major Trinidadian constituencies across the island. The interviews were conducted by 33 trained interviewers who read the questions, associated instructions and explanations to respondents.
The Q-Correlation facility is easily added to any survey.

4. Testing Measurements of Environmental Concern: Does a simple question outperform multi-item scales?
Professor Axel Franzen (University of Bern)
Mr Sebastian Mader (University of Bern)

International surveys like the European Values Study (EVS), the World Values Survey (WVS), and the International Social Survey Programme (ISSP) have measured environmental concern since the beginning of the 1990s. However, the measures employed in these international survey programs lack comparability due to differences in item composition, item wording, and answer categories. This severely complicates international comparative research in environmental sociology. In order to overcome these shortcomings, we search for a simple single-item measure of environmental concern that performs as well as conventional multi-item scales. Such an approach has been successfully applied in happiness research and public health, where single-item questions such as “all in all, how happy are you with your life” or “all in all, how healthy do you feel” are uniformly used and facilitate comparative and cumulative research. In our study, we suggest different single items to measure environmental concern and compare them to the multi-item scale used in the ISSP. We test both instruments with respect to test-retest reliability and construct validity using a multitrait-multimethod design. Furthermore, we investigate the predictive validity of both instruments by analysing their relation to donation behaviour. Lastly, we evaluate their sensitivity to social desirability.