ESRA 2019 Programme at a Glance


With or Without You - Standardised Metadata in Survey Data Management 1

Session Organisers Mr Knut Wenzig (German Institute for Economic Research - DIW Berlin)
Mr Daniel Bela (LIfBi – Leibniz Institute for Educational Trajectories)
Mr Arne Bethmann (Max Planck Institute for Social Law and Social Policy)
Time: Thursday 18th July, 09:00 - 10:30
Room D32

With evolving data sources, such as process-generated or user-generated content, meta- and paradata play an increasingly important role in many parts of the data management lifecycle. This is also true for surveys: as they become more complex, data management relies more on properly defined processes to ensure both data quality and maintainability. In turn, many studies, data providers and data archives have developed systems of structured metadata tailored to their specific data management needs. While some of these systems are (loosely) based on evolving metadata standards like DDI or SDMX, many are custom-made solutions. For the goal of making metadata comparable and shareable across studies and institutions, this is clearly a less-than-ideal situation.

In this session we want to discuss the issue from a practitioner's view, and to hear from people who are faced with the challenge of implementing structured metadata systems, or have done so in the past. In particular, we want to hear about the possible benefits, problems and drawbacks of implementing metadata systems that adhere closely to metadata standards like DDI or SDMX. Possible questions to be discussed include:

- Which processes would benefit from standardized metadata?
- Are there examples for metadata systems which cover multiple steps within the whole lifecycle?
- Are there sources for shared and reusable metadata?
- Are there tools to process standardized metadata?
- What could be incentives for sharing metadata and tools?

Keywords: metadata, DDI, SDMX

The Questionnaire Design and Documentation Tool (QDDT) - A Tool for Documenting the Questionnaire Design Process Using the Data Documentation Initiative (DDI)

Ms Hilde Orten (NSD - Norwegian Centre for Research Data) - Presenting Author
Mr Stig Norland (NSD - Norwegian Centre for Research Data)
Mr Benjamin Beuster (NSD - Norwegian Centre for Research Data)
Dr Sarah Butt (ESS HQ, City University London)

Based on the complex questionnaire design process of the European Social Survey (ESS), the QDDT has been developed to assist research teams in developing concepts and questions for topical questionnaire modules, as well as for tracking their development history over time.

Using the metadata standard DDI-Lifecycle as the basis for the tool's conceptual model makes it possible to maintain metadata components at a detailed level, which facilitates reuse. Care has been taken to structure components so that they are as reusable as possible, both within the same study and in the same or different surveys later on.

Before an ESS question is considered ready for the questionnaire, it goes through a series of discussion, control and testing steps. Versioning history is important not only for historical reasons, but also for knowing why one version of an item is considered more suitable for reuse in a specific context than another.

The DDI-Lifecycle basis of the tool facilitates exports to other DDI-based question banks and platforms, such as the Question Variable Database (QVDB).

The presentation focuses on the core features of the tool, and how DDI has been used in facilitating them.


Documenting the Questionnaire Design Process Using the Questionnaire Design and Documentation Tool (QDDT): Experiences from ESS Round 10

Mr Luca Salini (ESS HQ, City, University of London) - Presenting Author

Over the years, the European Social Survey has developed a carefully designed model for cross-national questionnaire design, employing a combination of qualitative and quantitative pre-testing strategies during the design of each rotating module to achieve optimal comparability across countries.
The design and development process lasts for almost two years – from the appointment of the successful question module design team through to the release of the source questionnaire for the round. It incorporates expert review from members of the ESS Core Scientific Team as well as the national teams, alongside coding item characteristics to predict their validity and reliability using the Survey Quality Predictor (SQP), cognitive interviewing, advance translation and quantitative testing on omnibus surveys and in a two-nation pilot survey.
In all rounds from ESS Round 4 (2008) to ESS Round 9 (2018), the design and development of rotating modules has been fully documented through a purpose-designed question module design template, which shows the concepts underpinning the design of the module as well as the wording of the questions included in the survey.
In ESS Round 10 (2020), for the first time, the design and documentation of rotating modules is taking place within a new Questionnaire Design and Documentation Tool (QDDT), which allows elements to be documented, versioned and reused following the Data Documentation Initiative (DDI) standard.
The presentation focuses on early findings, challenges and opportunities deriving from applying a structured metadata approach to the documentation of a complex multi-stakeholder questionnaire design process for an established cross-national social survey.


Creating a Redesigned Questionnaire for the CE Survey Using Colectica

Mr Brett McBride (U.S. Bureau of Labor Statistics) - Presenting Author
Mrs Parvati Krishnamurty (U.S. Bureau of Labor Statistics)

Changes to questionnaire items are routine in surveys. In addition, as surveys attempt to be more flexible to reduce respondent burden, skip patterns become more complex. The documentation of changes to survey items and of the interdependencies between items, and how easily this documentation can be accessed, significantly affects how efficiently survey metadata can be queried and how efficiently data processing systems can be updated to accurately process these survey changes. One of the key elements of the comprehensive redesign of the Consumer Expenditure (CE) Survey is creating an updated and streamlined version of the current CAPI questionnaire. This instrument redesign also provides an opportunity for the CE program to move towards more efficient and comprehensive documentation of survey metadata. Colectica Questionnaires software is being used to create specifications for the new CAPI instrument, which will later be programmed in Blaise. Colectica Questionnaires is part of the Colectica suite of software based on the Data Documentation Initiative (DDI) international standard for describing surveys, and generates various outputs, including source code for computer-assisted interviewing systems. We will discuss the important features of the CAPI instrument redesign, such as aggregation, question order, use of screeners and record use, as well as the CE program’s experience with using Colectica to generate specifications for the redesigned instrument.


Building a Workflow for Documenting Data and Questionnaires

Dr Hayley Mills (UCL) - Presenting Author
Mr Jon Johnson (UCL and UKDS)

CLOSER brings together eight world-leading UK longitudinal studies in order to maximise their use, value and impact. A major output of CLOSER is the search engine CLOSER Discovery (discovery.closer.ac.uk), which allows researchers to search and browse questionnaire and dataset metadata to discover what data are available.

Efficient data management of complex longitudinal studies is both desirable and increasingly essential to ensure that data reach researchers in a timely manner. Metadata standards are critical for straightforwardly maintaining information through the data lifecycle, from data collection to output for research.

When these metadata are fully documented, they can be utilised further to open up new data and metadata management possibilities. For example, captured computer-assisted interview metadata can be used to assist in the processing and validation of data. Auto-processing data to extract metadata and linking it to the origin questions, whilst reusing the concepts defined in survey design, allows the generation of high-quality documentation to accompany data sharing. The outputs can also be used to create reusable resources, such as question banks that preserve the provenance of questions for other studies to utilise.
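As a generic illustration of the auto-processing idea described above (a sketch only, not the CLOSER tooling; the dataset, variable names and question-bank mapping are invented for the example), a script can derive skeleton variable-level metadata directly from a dataset and record the provenance link back to an origin question as a simple lookup:

```python
# Sketch: derive skeleton variable-level metadata from a dataset, then link
# each variable back to a (hypothetical) question bank by name. Not an actual
# production pipeline; all identifiers here are illustrative.
import pandas as pd

data = pd.DataFrame({
    "ppltrst": [0, 5, 7, 10],           # trust item, 0-10 scale
    "cntry":   ["GB", "NO", "DE", "FR"] # country code
})

# Hypothetical mapping from variable names to origin question identifiers.
question_bank = {"ppltrst": "q-ppltrst-v1", "cntry": "q-cntry-v1"}

def extract_metadata(df: pd.DataFrame) -> list[dict]:
    """Build one metadata record per variable, with a provenance link."""
    records = []
    for name in df.columns:
        records.append({
            "variable": name,
            "dtype": str(df[name].dtype),
            "n_unique": int(df[name].nunique()),
            "origin_question": question_bank.get(name),  # provenance link
        })
    return records

for rec in extract_metadata(data):
    print(rec)
```

The point of the sketch is the separation of concerns: the dataset yields the technical description automatically, while the conceptual link to the question is reused from metadata captured at design time rather than re-entered by hand.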

CLOSER has been developing a suite of tools and software, using both in-house and commercially available solutions, that begin to tackle some of the obstacles involved in documenting and utilising longitudinal metadata. The presentation will report on the successes and problems faced in using the DDI-Lifecycle metadata standard to achieve these ambitions.


The Question Variable Database (QVDB) - A Portal for the ESS

Mr Benjamin Beuster (NSD - Norwegian Centre for Research Data)
Mrs Bodil Agasøster (NSD - Norwegian Centre for Research Data) - Presenting Author

The Question Variable Database (QVDB), with the Colectica platform as its technical backbone, is a system for storing and retrieving questions and variables and for facilitating reuse of their metadata and metadata components. The overall aim of the QVDB is to serve the ESS, and potentially other survey programmes, in their business processes of specifying, documenting, versioning and disseminating survey data.

The European Social Survey (ESS) is an academically driven cross-national survey that has been conducted across Europe since its establishment in 2001. Every two years, face-to-face interviews are conducted with newly selected, cross-sectional samples.

In particular, this presentation describes how the QVDB was populated with eight waves of ESS data, including the transformation from DDI-Codebook (NESSTAR) into DDI-Lifecycle (DDI 3.2), and gives examples of how variables from different points in time or different datasets correspond to each other (variable harmonization).
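For readers curious what a codebook-to-lifecycle transformation involves in principle, the following is a minimal, purely illustrative sketch (not the NSD pipeline, and using simplified element names rather than the real DDI schemas): a codebook-style record describes each variable as one self-contained block, whereas a lifecycle-style structure splits the question and the variable into separate, versionable items that reference each other, which is what makes reuse possible.

```python
# Illustrative sketch: convert a simplified "codebook-style" variable record
# into "lifecycle-style" items, where question and variable become separate,
# versionable objects that reference each other. Element names are simplified
# stand-ins, not the actual DDI-Codebook or DDI-Lifecycle schemas.
import xml.etree.ElementTree as ET

CODEBOOK = """
<codeBook>
  <var name="ppltrst">
    <labl>Most people can be trusted or you can't be too careful</labl>
    <qstn>Would you say that most people can be trusted?</qstn>
  </var>
</codeBook>
"""

def to_lifecycle(codebook_xml: str) -> ET.Element:
    root = ET.fromstring(codebook_xml)
    lifecycle = ET.Element("StudyUnit")
    questions = ET.SubElement(lifecycle, "QuestionScheme")
    variables = ET.SubElement(lifecycle, "VariableScheme")
    for var in root.findall("var"):
        name = var.get("name")
        # The question text becomes a reusable, versioned QuestionItem...
        q = ET.SubElement(questions, "QuestionItem", id=f"q-{name}", version="1")
        ET.SubElement(q, "QuestionText").text = var.findtext("qstn")
        # ...and the variable references it instead of embedding it.
        v = ET.SubElement(variables, "Variable", id=f"v-{name}", version="1")
        ET.SubElement(v, "VariableName").text = name
        ET.SubElement(v, "Label").text = var.findtext("labl")
        ET.SubElement(v, "QuestionReference").text = f"q-{name}"
    return lifecycle

unit = to_lifecycle(CODEBOOK)
print(ET.tostring(unit, encoding="unicode"))
```

In a real migration the mapping is far richer (response domains, universes, multilingual text, controlled vocabularies), but the structural move is the same: promote embedded descriptions to identified, versioned items so they can be referenced and reused.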

Other tools are compatible with the QVDB: the SERISS Questionnaire Design and Documentation Tool (QDDT), which documents and retrieves information on the process of designing a cross-national survey questionnaire, and the CESSDA Euro Question Bank, which provides a central search facility across all CESSDA survey questions in different languages. The aim is for these tools to exchange metadata, so that metadata collected at one stage of the survey lifecycle can be reused at another.