Assessing the Quality of Survey Data 2
Convenor: Professor Joerg Blasius (University of Bonn)
The Household Finance and Consumption Survey is an initiative to collect micro-data on household income and wealth in the euro area countries. The sampling designs used may involve components such as stratification, clustering, oversampling, weighting adjustments for unit non-response, and calibration. In this presentation, we intend to measure the effect of these features on accuracy. We rely on the concept of the design effect, which measures the gain or loss in sampling precision caused by using a complex design. The design effect factor will be decomposed to account for the specific effects of unequal weights, imputation and other components.
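The abstract does not spell out the decomposition, but the component of the design effect due to unequal weights is commonly approximated by Kish's formula, deff_w = 1 + cv²(w) = n·Σw_i² / (Σw_i)². A minimal sketch, using hypothetical weights purely for illustration:

```python
def kish_deff(weights):
    """Kish approximation of the design effect from unequal weights:
    deff_w = n * sum(w_i^2) / (sum(w_i))^2 = 1 + cv^2(w)."""
    n = len(weights)
    s1 = sum(weights)
    s2 = sum(w * w for w in weights)
    return n * s2 / (s1 * s1)

# Hypothetical calibration weights (not HFCS data).
print(kish_deff([1.0, 1.0, 2.0, 4.0]))  # > 1: precision loss from weight variation
print(kish_deff([2.0, 2.0, 2.0]))       # = 1.0: equal weights, no loss
```

A deff_w of 1.4 would mean the weighted estimate has the sampling variance of an unweighted simple random sample roughly 1.4 times smaller; the effects of clustering and imputation enter the full decomposition as further multiplicative factors.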
In a preceding study, random route instructions were tested for their theoretical property of selecting respondents with equal probability. All routes failed to satisfy this necessary assumption. The present study replicates and extends that analysis to deviations in substantive variables. Registration office data are used to verify the negative impact of biased household selection on survey results. All tested random route instructions led to biased expected values in multiple variables. The strongest errors were found in variables related to the spatial location of a household. The talk closes with a proposal for how to improve random route samples.
Interviewers influence data quality in surveys, both unintentionally and intentionally. We analyse the influence of interviewers' characteristics and payment schemes on falsified and real data. The analysis is based on data from a large-scale experimental study that includes both real and falsified interviews. In this experimental study, the interviewers' payment was subject to two different conditions, namely payment per completed interview and payment per hour. We analyse the impact of payment scheme, gender and other interviewer characteristics. Empirical results are presented, and a conclusion is drawn regarding the impact of the payment scheme on survey data quality.
We consider two separate but equally problematic measurement concerns that may arise with the use of discontinuous online panel respondents: 1) whether repeatedly asking respondents their opinions changes those opinions; and 2) whether the high frequency of surveying the same individuals for extrinsic rewards reduces the overall quality of respondent data. We find evidence of panel conditioning in the form of decreased survey duration times, increased political sophistication and non-differentiation of responses. Furthermore, we provide evidence that online survey panel respondents are less likely to optimize their responses than respondents in traditional probability-based modes.
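The abstract does not say how non-differentiation was operationalised; one simple and widely used index is the share of the modal answer within a grid battery, where 1.0 indicates pure straightlining. A hypothetical sketch:

```python
from collections import Counter

def nondifferentiation(responses):
    """Share of the most frequent answer in a battery of grid items.
    1.0 = identical answers to every item (straightlining);
    values near 1/k (k response options) = fully differentiated."""
    counts = Counter(responses)
    return counts.most_common(1)[0][1] / len(responses)

print(nondifferentiation([3, 3, 3, 3, 3]))  # straightliner
print(nondifferentiation([1, 2, 3, 4, 5]))  # fully differentiated
```

More elaborate measures (e.g. the mean root of pairwise differences) exist; this index is shown only to make the concept concrete, not as the study's actual measure.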
Surveys among experts (e.g. Delphi studies) often report the level of agreement among respondents by means of Cohen's kappa as an indicator of data quality. Three reasons are commonly given for using kappa or its generalisations: 1) the approach respects the qualitative measurement scale of the variables involved, 2) it quantifies agreement rather than association, and 3) it adjusts for agreement that occurs solely by chance. In practice, the latter reason for using kappa yields counter-intuitive results. This paper investigates the issue of kappa being unable to distinguish object-specific characteristics from rater-specific characteristics.
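The counter-intuitive behaviour of the chance correction can be reproduced in a few lines: two pairs of raters with identical observed agreement (90%) receive very different kappa values once the marginal distributions are skewed, because skewed marginals inflate the expected chance agreement. A minimal sketch with made-up counts:

```python
def cohen_kappa(table):
    """Cohen's kappa for a 2x2 agreement table:
    table = [[both_yes, r1_yes_r2_no], [r1_no_r2_yes, both_no]].
    kappa = (p_o - p_e) / (1 - p_e), with p_e from the marginals."""
    n = sum(sum(row) for row in table)
    p_o = (table[0][0] + table[1][1]) / n          # observed agreement
    r1_yes = (table[0][0] + table[0][1]) / n       # rater 1 marginal
    r2_yes = (table[0][0] + table[1][0]) / n       # rater 2 marginal
    p_e = r1_yes * r2_yes + (1 - r1_yes) * (1 - r2_yes)
    return (p_o - p_e) / (1 - p_e)

# Both tables show 90% observed agreement:
print(cohen_kappa([[45, 5], [5, 45]]))  # balanced marginals: kappa = 0.80
print(cohen_kappa([[85, 5], [5, 5]]))   # skewed marginals:  kappa ≈ 0.44
```

Because kappa depends on the raters' marginal distributions, rater-specific response tendencies (e.g. a preference for one category) and object-specific difficulty are confounded in the coefficient, which is the issue the paper investigates.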