ESRA 2017 Programme

Tuesday 18th July      Wednesday 19th July      Thursday 20th July      Friday 21st July


Friday 21st July, 11:00 - 12:30 Room: F2 106

Survey Research in the Developing World: A Transferral of Western Methods or a Context-Driven Approach?

Chair Mrs Leila Demarest (KU Leuven)

Session Details

Development actors such as the UN and the World Bank have shown increasing interest in survey projects in developing countries. One large-scale initiative is the Demographic and Health Surveys (DHS) programme, which is funded by USAID and conducted in numerous developing countries. Many smaller-scale projects are also carried out on topics such as local food security and water provision. Besides this ever-increasing interest in surveys to acquire socio-economic information for project targeting, research attention is also directed at the political attitudes and behaviours of developing countries' populations. For example, developing countries are increasingly covered by the World Values Survey, and region-specific opinion surveys are conducted regularly (e.g. Afrobarometer, Asian Barometer).
This panel welcomes papers that address current challenges of survey research in the developing world and propose innovative ways of overcoming them. We specifically welcome contributions that challenge Western textbooks on survey methodology and creatively reshape their recommendations to fit specific country contexts. Contributions can focus on sampling and response, but also on questionnaire development. With regard to sampling, we are interested in classical problems such as the lack of a sampling frame (including access to difficult populations), non-response and methods to reduce it, (post-)stratification, etc. With regard to questionnaire development, we aim to challenge the practice of transferring items with a strong history in Western survey research to developing-country questionnaires. Such items may concern socio-demographic and economic variables. Well-known examples are the age item, which Western surveys typically measure by asking the year of birth, an approach far less applicable for some respondent groups in the developing world; and the blurred understanding of ‘family’ and ‘household’ in some cultural contexts. Important divergences can also be expected for questions concerning (political) opinions and attitudes. Contributions can also focus on problems of translation and measurement equivalence, question comprehension among local populations, and interviewer effects.

Paper Details

1. Survey Attitudes in the Middle East and Arab Gulf: Hindrance or Help?
Dr Justin Gengler (SESRI, Qatar University)

The past decade has witnessed a marked expansion in the number and scope of opinion surveys being conducted in the Arab world generally, and in the Arab Gulf region specifically. This burgeoning use of survey methods in the Gulf owes to a confluence of factors, including improved institutional capacity; a more permissive regulatory environment; the proliferation of commercial survey research firms; practical restrictions on conducting surveys elsewhere in the Arab world due to continuing political instability; and an increased desire among decision-makers and analysts to gauge popular attitudes and understand citizen and resident behavior related to important public policy matters.

Yet, even as their frequency and scope increase, questions remain about the impact of this quite recent introduction of social scientific, health, and other surveys into a new social and cultural environment. How do Arab Gulf citizens and residents perceive their own participation in survey research, as well as the practical results that stem from it? Are surveys viewed as a public benefit, or as burdensome and intrusive? Do these surveys serve as a reference point when Gulf publics think about popular opinion and trends in culture, health, and other domains, or are their findings largely unknown or ignored? Finally, which individual-level factors – demographic, socioeconomic, attitudinal, or experiential – help explain differences in individual orientations toward surveys?

To begin to answer these and related questions, we assess survey attitudes among citizens and residents of Qatar using the Survey Attitude Scale of De Leeuw et al. (2010). Preliminary results show that, contrary to stereotypes about insular and conservative Arab populations, it is in fact expatriates from Western countries who, among individuals living in Qatar, least enjoy and value opinion surveys. Likewise, East Asian and South Asian respondents express more worry over the potential privacy implications of surveys than other cultural groups in Qatar. These and other results suggest that Arab respondents are no less favorably oriented toward surveys than members of other cultural groups. To our knowledge, this study represents the first systematic assessment of survey attitudes in an Arab country.

2. "No opinion" responses in the NSS survey in Ghana: Satisficing or non-opinions?
Mr Maarten Schroyens (KU Leuven)
Professor Arnim Langer (KU Leuven)
Professor Bart Meuleman (KU Leuven)

When surveying attitudes towards sensitive topics, researchers face the dilemma of whether to include "no opinion" options and/or neutral middle points in answer scales. Advocates of neutral scale points and "no opinion" options argue that respondents do not necessarily have an outspoken opinion on every issue, and that forcing them to take a stand will only lead to the measurement of non-opinions. Opponents of such options draw on the satisficing literature to argue that some respondents will use them as an ‘easy out’, complying with the basic requirements of the question while minimizing the mental effort and time spent on the survey. In the literature, the selection of neutral middle points and straight-lining is mostly categorized as weak satisficing, given that these responses still provide a usable value on the survey item; selecting a no-opinion option is classified as strong satisficing because it does not.

In this paper we aim to evaluate to what extent satisficing occurs in a non-Western context where survey fatigue is less pronounced and respondents are less familiar with the survey format. We analyze the responses to sensitive questions on ethnic stereotypes in a large-scale online survey of Ghanaian students from three regionally spread public universities (n = 2975). Respondents are asked to rate a number of ethnic groups in Ghana on four personality characteristics: laziness, honesty, generosity and intelligence. Both no-opinion options and neutral scale points were included in the survey items. We use paradata (timer questions, user agent strings, height and conditionality of incentives) and substantive survey responses to gain insight into when and why a no-opinion option or neutral scale point is selected.
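The timer paradata described above can be used to flag likely satisficing: a no-opinion or neutral answer given unusually fast is more suspect than one given after deliberation. A minimal sketch of such a classification rule follows; the records, field names, and the two-second cutoff are all illustrative assumptions, not the authors' actual coding scheme (in practice a cutoff would be derived from the observed timer distribution).

```python
# Hypothetical sketch: flagging possible satisficing with item-level timers.
# Records, field names, and the FAST cutoff are invented for illustration.
records = [
    {"answer": "no_opinion", "seconds": 1.2},
    {"answer": "no_opinion", "seconds": 9.8},
    {"answer": "neutral",    "seconds": 1.0},
    {"answer": "agree",      "seconds": 4.3},
]

FAST = 2.0  # assumed cutoff (seconds); would normally come from the timer distribution

def classify(rec):
    """Label a response record by combining answer choice with response time."""
    if rec["answer"] in ("no_opinion", "neutral") and rec["seconds"] < FAST:
        return "possible satisficing"       # easy-out option chosen very quickly
    if rec["answer"] == "no_opinion":
        return "possible genuine non-opinion"  # no-opinion chosen after deliberation
    return "substantive response"

for rec in records:
    print(rec["answer"], "->", classify(rec))
```

A real analysis would of course combine this with the other paradata mentioned (user agent strings, incentive conditions) rather than relying on a single threshold.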

3. Evaluating SMS-Based Surveys in an African Context
Dr Stacey Giroux (Indiana University)
Dr Tom Evans (Indiana University)
Dr Kurt Waldman (Indiana University)
Mr Jacob Schumacher (Indiana University)

The development of practices for surveying African populations is advancing quickly. Surveys delivered via SMS are one of the more promising avenues for reaching many people on the continent with high-frequency data collection where remoteness makes frequent in-person interviews difficult. Increased internet access has spurred methodological work on web- and smartphone-based surveys, but SMS-based methods and their data quality have received less attention. This is a crucial area for research, given that, at present, SMS-based methods are often the most suitable option in Africa and other developing countries.

One of the overarching goals of our research program is to assess the adaptive capacity of smallholder agriculturists in Zambia and Kenya to hydrologic shocks such as drought, which seriously impact their levels of food security. To this end, our research team has been conducting weekly SMS surveys with smallholder farmers in parts of Zambia since 2013 and Kenya since 2015. In Zambia, we have also conducted annual in-person surveys since 2012; in Kenya, we conducted in-person surveys annually from 2012–2014 and have another field trip planned for 2017. Previous SMS-based data collection by the World Food Programme (WFP) and other development agencies has focused on food security and health applications, but these efforts have not been coupled with in-person interviews. Our research methods, combining high-frequency SMS surveys with in-person interviews, over years, in multiple field sites, allow for a fuller examination of SMS-based methods. Regular fieldwork campaigns have also helped attenuate some of the significant tradeoffs between in-person and SMS-based surveys, namely those of survey length and complexity. We gain a fuller understanding of the challenges smallholder farmers face when we combine this kind of high-frequency data collection with infrequent, but richer, in-person survey data.

Our work with local partners, coupled with regular in-person interviews with farmers, has helped us refine our SMS methodology at all stages, from survey design and dissemination to understanding nonresponse. We use TextIt, an SMS application, to build and administer the surveys. We employ targeted sampling strategies for our population of interest, using face-to-face recruitment by members of the research team and our local partners. We enjoy relatively high response rates, but also find troubling rates of respondents dropping in and out of the weekly surveys. We have begun conducting follow-up phone calls with nonrespondents in Kenya to better understand and quantify the reasons for lack of response, which range from the technical to respondents simply forgetting.

In this paper we describe our methodology and provide an assessment of the SMS survey data quality in terms of nonresponse and usability of the data. We contrast our work with the larger research efforts of organizations such as the WFP, and offer brief recommendations for those considering individual- or household-level surveys via SMS with specific populations in an African context.

4. An Assessment of Interviewer Error in the Afrobarometer Project
Miss Leila Demarest (KU Leuven)

This paper addresses interviewer errors in the Afrobarometer project, which conducts opinion surveys of citizens' political attitudes and behaviour in an increasing number of African countries. The survey project started in 1999 in 12 Sub-Saharan countries; the sixth round was completed in 2015 in over 35 countries, including some countries in North Africa. Afrobarometer data have been used in a growing number of academic publications over time. Given the expansion of the collection and use of Afrobarometer data, it is somewhat surprising that only a limited number of studies have focused on data quality within the project. This is in stark contrast with Western projects such as the European Social Survey, for which a rich methodological literature is available. It is also surprising because the project faces a number of challenges common to developing countries in general, such as the lack of a sampling frame and the use of the random walk method, multi-lingual and diverse societies, difficult-to-reach populations, and uneducated or illiterate respondents. All these challenges can have implications for survey errors. In this paper, I focus in particular on the role of the interviewer in the Afrobarometer survey design and the implications for nonresponse as well as measurement error. Empirical analyses rely on data for the 12 original Afrobarometer countries for Rounds 3, 4, and 5.
Response rates for Afrobarometer surveys are generally high to very high (80 to 100%). This could indicate that nonresponse is not generally an issue for Afrobarometer surveys. However, the random walk method does give the interviewer leeway in selecting respondents. I show this by analyzing the extent to which achieved samples differ from census results for age-gender groups. For all countries, samples differ significantly from census findings. Interestingly, the age-gender groups that are over- or under-represented are not the same for all countries. Nonetheless, there does seem to be some evidence of a trend in that younger age groups, who are perhaps busy at work, are underrepresented.

Turning to measurement error, I calculate intra-class correlations (ICCs) when clustering on the interviewer for factual as well as attitudinal items in the survey. Substantial interviewer effects are found, especially for attitudinal items. These results hold when controlling for sampling clustering by fitting cross-classified multilevel logit models (using two different estimation techniques).
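The sample-versus-census comparison described above is, in essence, a goodness-of-fit test on age-gender cell counts. A minimal sketch follows; the census shares, sample counts, and number of cells are invented for illustration and do not reflect any actual Afrobarometer country.

```python
# Hypothetical sketch: chi-square goodness-of-fit test of an achieved sample
# against census shares for age-gender groups. All numbers are invented.
# 6 age bands x 2 genders = 12 cells.
census_shares = [0.10, 0.11, 0.09, 0.10, 0.08, 0.07,
                 0.09, 0.10, 0.08, 0.08, 0.05, 0.05]   # sums to 1.0
sample_counts = [90, 95, 100, 115, 85, 80, 80, 95, 90, 95, 60, 63]

n = sum(sample_counts)
expected = [p * n for p in census_shares]
chi2 = sum((o - e) ** 2 / e for o, e in zip(sample_counts, expected))
df = len(sample_counts) - 1

# Critical value chi2(0.95, df=11) is about 19.675 (standard tables).
print(f"chi2 = {chi2:.2f} on {df} df; reject H0 at 5%: {chi2 > 19.675}")
```

With real data one would typically use a library routine (e.g. a chi-square goodness-of-fit function) rather than a hand-coded statistic, and account for the survey's clustered design.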
Results point to the need for increased monitoring and quality control in the Afrobarometer project, and several policy recommendations are made.
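The interviewer-clustered ICC used in the measurement-error analysis can be illustrated with the classic one-way ANOVA estimator, ICC(1) = (MSB − MSW) / (MSB + (k − 1)·MSW) for balanced groups. The sketch below simulates invented data with a built-in interviewer effect; the group sizes, effect magnitudes, and estimator choice are assumptions for illustration, not the paper's actual model (which fits cross-classified multilevel logits).

```python
# Hypothetical sketch: ANOVA-based intra-class correlation, ICC(1), for
# interviewer clustering. Simulated data and effect sizes are invented.
import random
import statistics

def icc1(groups):
    """One-way ANOVA estimate of ICC(1) for equal-sized groups."""
    k = len(groups[0])                    # respondents per interviewer
    n = len(groups)                       # number of interviewers
    grand = statistics.mean(x for g in groups for x in g)
    means = [statistics.mean(g) for g in groups]
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)
    msw = sum((x - m) ** 2
              for g, m in zip(groups, means) for x in g) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

random.seed(1)
# Simulate 50 interviewers x 20 respondents; each interviewer adds a
# systematic shift to an attitudinal item on a latent scale.
groups = []
for _ in range(50):
    shift = random.gauss(0, 0.5)          # interviewer effect (sd = 0.5)
    groups.append([shift + random.gauss(0, 1.0) for _ in range(20)])

print(f"ICC(1) = {icc1(groups):.3f}")     # clearly above zero here
```

A nonzero ICC of this kind is what the abstract reports for attitudinal items: answers cluster by interviewer more than random assignment would produce.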