Short Courses
Please note that only short courses with at least 10 participants will take place.
Monday 14th July morning (exact time TBD)
Björn Rohr & Barbara Felderer: Representation Bias in Probability and Non-Probability Surveys – Theoretical Considerations and Practical Applications Using the sampcompR R-Package
Liam Wright, Richard Silverwood, Georgia Tomova: Methods for Handling Mode Effects in Analyses of Mixed-Mode Survey Data
Laura Fumagalli and Thomas Martin: Survey experiments: principles and applications
Caroline Roberts, Patrick Sturgis: Integrating Large Language Models in the Questionnaire Design Process
Monday 14th July afternoon (exact time TBD)
Lydia Repke & Christof Wolf: Collecting Data on Networks and Social Relationships with Social Surveys
Joris Frese: Quasi-Experiments with Surveys: The Unexpected Event during Survey Design
Joshua Claassen & Jan Karem Höhne: Web tracking: Augmenting web surveys with data on website visits, search terms, and app use
Matthias Roth & Daniil Lebedev: Survey response quality assessment: Conceptual approaches and practical indicators
Larissa Pople: Understanding Young Voices: Engagement, Ethics and Measurement in Surveys
Course Descriptions
Representation Bias in Probability and Non-Probability Surveys – Theoretical Considerations and Practical Applications Using the sampcompR R-Package
Instructors:
Björn Rohr and Barbara Felderer, GESIS
Time:
Morning
Room:
TBD
Course Description:
This short course discusses the emergence and analysis of representation bias in surveys. The first part of the course discusses the errors that can occur at every step of a (non-)probability survey and how these errors may lead to representation bias in analyses of survey data. A special focus will be on non-probability surveys and the question of when they are fit for purpose. The second part of the course covers bias analysis, introducing commonly used measures and their application in R. In the practical part of the course, bias analysis will be conducted at the univariate, bivariate, and multivariate levels using the sampcompR R-package, which was specifically written to study representation bias. A synthetic dataset will be provided for the exercises, but participants are welcome to bring their own datasets to conduct bias analysis. The course is at the beginner to intermediate level. Experience with R is helpful but not required.
Bio:
Björn Rohr is a member of the survey statistics team at GESIS – Leibniz Institute for the Social Sciences. His research focuses on survey methodology, more specifically the comparison of surveys with regard to bias, with a particular focus on comparing non-probability and probability surveys.
Dr Barbara Felderer is the head of the survey statistics team at GESIS – Leibniz Institute for the Social Sciences. The first focus of her research is survey methodology, especially nonresponse and nonresponse bias. The second is (survey) statistics, currently in particular causal machine learning methods and their application to improve survey quality.
Methods for Handling Mode Effects in Analyses of Mixed-Mode Survey Data
Instructors:
Liam Wright, Richard Silverwood, Georgia Tomova, CLS, UCL
Time:
Morning
Room:
TBD
Course Description:
Surveys are increasingly adopting mixed-mode methodologies. Due to differences in how items are presented, responses can differ systematically between modes, a phenomenon referred to as a mode effect. Unaccounted for, mode effects can introduce bias in analyses of mixed-mode survey data. Several methods for handling mode effects have been developed, but these have mainly appeared in the technical literature and vary in their ease of implementation. Further, the assumptions these methods make (typically, no unmodelled selection into mode) can be implausible. To improve adoption of methods for handling mode effects, in this interactive short course we will provide background on the problem of mode effects by placing it within a simple and intuitive framework based on causal Directed Acyclic Graphs (DAGs). Using this framework, we will then describe the main methods for handling mode effects (e.g., regression adjustment, instrumental variables, and multiple imputation) and introduce a promising but underutilised approach, sensitivity analysis, which uses simulation and does not assume no unmodelled selection into mode. Finally, we will show attendees how to implement sensitivity analysis with a hands-on R tutorial using real-world mixed-mode data from the Centre for Longitudinal Studies’ (CLS) birth cohort studies. By the end of the session, attendees will:
• Understand why mode effects can cause bias in analyses of mixed-mode data.
• Be able to draw DAGs that represent assumptions about mode effects.
• Use DAGs to design an analysis of mixed-mode data and to identify the biases that may appear in such an analysis.
• Understand methods for handling mode effects, including sensitivity analysis.
• Be able to implement sensitivity analysis within the software package R.
Activities will include:
1. Exercises drawing and interpreting DAGs that illustrate the issue of mode effects.
2. An R practical on implementing methods for handling mode effects using CLS cohort data.
Bio:
Liam Wright is a Lecturer in Statistics and Survey Methodology at the Centre for Longitudinal Studies (CLS), University College London. Liam is Principal Investigator on the Survey Futures project Assessing and Disseminating Methods for Handling Mode Effects. He has experience creating tutorials on methods for handling mode effects, as well as teaching programming skills. Most recently, he has co-authored user-friendly guidance (with Richard Silverwood) on accounting for mixed-mode data collection for users of CLS’ cohort data.
Richard Silverwood is Associate Professor in Statistics at CLS. In addition to researching and producing guidance on mode effects, Richard is Chief Statistician for CLS’ cohort studies. He has wide-ranging expertise across many aspects of survey methodology, most notably missing data. Richard leads training at CLS and oversees the production of methods guidance for CLS’s data users. He is also co-investigator on the Survey Futures project Assessing and Disseminating Methods for Handling Mode Effects.
Georgia Tomova is Research Fellow in Quantitative Social Science at the Centre for Longitudinal Studies, University College London, where she works on the Survey Futures project Assessing and Disseminating Methods for Handling Mode Effects. Georgia’s previous experience includes both methodological and applied research in the nutrition domain, with a particular focus on the theory and application of causal inference methods. She also has extensive teaching experience, including lecturing on the renowned Introduction to Causal Inference Course in Leeds.
Survey experiments: principles and applications
Instructors:
Laura Fumagalli and Thomas Martin, University of Essex and University of Warwick
Time:
Morning
Room:
TBD
Course Description:
Survey experiments are becoming increasingly popular in many disciplines, such as survey methodology, economics, sociology, and politics. This course aims to equip participants with the skills to independently design and conduct high-quality survey experiments in their fields of research or industry.
Learning Objectives:
By the end of the course, participants will:
1. learn the key principles of survey experiments, including how to use them to carry out causal inference.
2. learn how to elicit individuals’ subjective beliefs and analyse the role they play in decision making.
3. engage with key references from recent literature with a particular focus on information provision experiments.
4. learn how to practically implement a survey experiment from design and survey creation to data analysis and write-up of results.
Activities:
• Lecture: provide theoretical foundations and applications through existing examples of survey experiments across various fields in social sciences.
• Workshop: engage students in designing, implementing, and analysing their own survey experiments. Students will be introduced to survey platforms (e.g., Qualtrics) and statistical software (e.g., Stata) to analyse experimental data.
Level:
This is an intermediate-level course, appropriate for researchers with experience of introductory research methods. No prior experience with survey experiments is required, but participants should be familiar with statistical concepts such as regression analysis.
Bio:
Laura Fumagalli is a Research Fellow at ISER, where she has been part of the Understanding Society (the largest panel survey in the UK) team for over 10 years. She has taught courses in public economics, statistics, survey methods, and panel data using Understanding Society. She has publications in multi-disciplinary journals, including the Journal of the Royal Statistical Society Series A, The Economic Journal, the Journal of Economic Behavior and Organization, and Labour Economics.
Thomas Martin, Department of Economics, University of Warwick. Thomas is an Associate Professor (Teaching Track) and has worked at Warwick for over 10 years teaching Econometrics and Development Economics both at the Undergraduate and Postgraduate level. He has publications in multi-disciplinary journals such as World Development.
Integrating Large Language Models in the Questionnaire Design Process
Instructors:
Caroline Roberts, Institute of Social Sciences, University of Lausanne, Switzerland;
Patrick Sturgis, Department of Methodology, London School of Economics
Time:
Morning
Room:
TBD
Course Description:
Effective questionnaire design remains one of the greatest challenges in survey research, requiring a mix of scientific expertise and artistic skill, as well as evaluation and testing. Questionnaires – including how they are administered and how respondents interpret and respond to them – constitute a major source of survey error, but one that can be addressed at relatively low cost. An extensive literature on survey methodology provides guidance on the various pitfalls of poor question formulation, on optimal design choices to improve measurement quality, and on methods available for ensuring research objectives are met while the burden on respondents is minimised. Added to this, recent advances in the field of generative artificial intelligence (GenAI) – notably, increasingly powerful Large Language Models (LLMs) and chatbots – now provide a new, and ever-expanding, range of tools that can be integrated into different phases of questionnaire development. These not only offer researchers opportunities to save time, but also the potential to optimise the formulation of survey questions. However, as research to validate the effectiveness of such tools remains in its infancy, their integration in the questionnaire design process should be handled critically, based on background knowledge of both the scientific principles and the craft of effective social measurement. This course, aimed primarily at beginners, has two objectives: 1) to present an overview of principal questionnaire design challenges, best-practice guidelines for writing effective questions, and frameworks for evaluating potential sources of error; and 2) to introduce available AI tools and ways they can be integrated at different stages of questionnaire development. Participants will work on practical examples of different types of survey question, to evaluate question problems and identify ways to improve them.
Learning objectives:
At the end of the course, students should be able to:
1. Describe the major challenges of writing effective survey questions, based on theoretical frameworks for identifying potential sources of error;
2. Complete steps involved in writing and evaluating survey questions drawing on best-practice guidelines aimed at minimising measurement error.
3. Integrate AI tools at different stages of questionnaire development and evaluate outputs critically.
Bio:
Caroline Roberts is a senior lecturer in survey methodology and quantitative research methods in the Institute of Social Sciences at the University of Lausanne (UNIL, Switzerland), and an affiliated survey methodologist at FORS, the Swiss Centre of Expertise in the Social Sciences. At UNIL, she teaches courses on survey research methods, questionnaire design, public opinion formation and quantitative methods for the measurement of social attitudes. She has taught a number of summer school and short courses on survey methods, questionnaire design, survey nonresponse, mixed mode surveys, and data literacy. At FORS, she conducts methodological research in collaboration with the teams responsible for carrying out large-scale academic surveys in Switzerland. Her research interests relate to the measurement and reduction of survey error. Her most recent research focuses on attitudinal and behavioural barriers to participation in digital data collection in surveys, and ways to leverage generative AI in questionnaire design, evaluation and testing. Caroline is currently Chair of the Methods Advisory Board of the European Social Survey and was President of the European Survey Research Association from 2019-2021.
Patrick Sturgis is Professor of Quantitative Social Science and Head of Department in the Department of Methodology at the London School of Economics. He was previously Director of the ESRC National Centre for Research Methods at the University of Southampton from 2010 to 2019. His research focuses on applied quantitative and statistical methods, with a particular specialism in survey design and analysis. He was President of the European Survey Research Association from 2011 to 2015 and has published widely in leading methodology journals, including the Journal of the Royal Statistical Society, Public Opinion Quarterly, and the Journal of Survey Statistics and Methodology. He has served as Chair of the Methodological Advisory Board of the European Social Survey and the UK Household Longitudinal Survey. He is currently Principal Investigator of ‘Harnessing Generative AI for Questionnaire Design, Evaluation and Testing’, a research grant under the ESRC Survey Futures programme with Dr Caroline Roberts and Dr Tom Robinson.
Collecting Data on Networks and Social Relationships with Social Surveys
Instructors:
Lydia Repke and Christof Wolf, GESIS
Time:
Afternoon
Room:
TBD
Course Description:
This workshop introduces participants to essential concepts and methodologies for collecting data on social networks and relationships using social surveys. The course is structured into three parts.
1. Introduction.
Participants will first explore basic conceptual aspects of social networks, including egocentric versus sociocentric networks, network composition, and structure. In addition, common theoretical concepts, such as transitivity and the strength of weak ties, will be covered. This part also highlights potential research areas and questions where social networks play a central role.
2. Data collection for egocentric networks in surveys.
This part focuses on best practices for collecting egocentric network data and deriving analytical measures. First, we will discuss different name-generation approaches, their advantages, and limitations. Next, participants will learn about name and edge interpreter items and get practical advice on their selection and design. Then, we will demonstrate how to derive compositional measures, structural measures, and a combination of both and provide examples of how this data can be used in empirical social network research.
3. Further measures for social networks and relationships.
Collecting egocentric network data requires comparatively many items and a lot of questionnaire time, making it impractical for some studies to incorporate these instruments. Moreover, some research may focus on other aspects of social embeddedness, such as social support. Therefore, we will highlight some established scales for measuring these concepts, offering alternatives when egocentric network data collection is neither feasible nor necessary.
By the end of the workshop, participants will understand the theoretical, conceptual, and empirical aspects of collecting data on social networks and relationships. They will be equipped to critically assess the merits and limits of different methodological approaches and apply them in their research projects.
Bio:
Dr. Lydia Repke is a social scientist leading the Survey Quality Predictor (SQP) project and heading the Scale Development and Documentation team at GESIS. She is a member of the Young Academy of the Academy of Sciences and Literature | Mainz, Germany. Her research interests include data quality of survey questions, egocentric networks, and multiculturalism.
Dr. Christof Wolf majored in sociology at Hamburg University and obtained his doctorate in sociology from the University of Cologne. He is currently President of GESIS and Professor of Sociology at Mannheim University. His research interests include social networks and health.
Quasi-Experiments with Surveys: The Unexpected Event during Survey Design
Instructor:
Joris Frese, European University Institute
Time:
Afternoon
Room:
TBD
Course Description:
The Unexpected Event during Survey Design (UESD) has taken the social sciences by storm. The 2020 article by Muñoz, Falcó-Gimeno, and Hernández introducing the method is already one of the most cited articles in Political Analysis, and research based on this method has now been published in all the top political science journals (and in many other disciplines, such as economics and sociology). The basic premise of the UESD is simple: you analyse survey data that was fielded shortly before and after an unexpected and influential event (such as a terrorist attack). Under certain conditions, respondents interviewed right before and right after the event can be assumed to differ systematically only in their (exogenous) exposure to this event. If all the relevant assumptions are met, researchers can then estimate the causal effects of exposure to this event on relevant (political or social) attitudes. For example, many political scientists have used this method to demonstrate “rally-around-the-flag” effects following terrorist attacks. In this short course, I will walk the participants through the established workflow for UESD projects.
We start by discussing some high-profile UESD applications of recent years. Next, I showcase the basic assumptions of this method and how to test them. Finally, we conduct an original UESD analysis based on publicly available survey data to learn the basic empirics and the most common robustness checks step by step. I will showcase the analysis steps in R, but participants are also free to follow along with Stata or other software. After the course, all participants will be equipped to conduct their own UESD projects from start to finish. The course is aimed at beginners who have never used this method before and at intermediate users who want to broaden their knowledge of the state of the art for UESDs.
Bio:
Joris Frese is a PhD candidate in political science at the European University Institute. In his dissertation, he makes empirical and methodological contributions to the causal analysis of public opinion dynamics following political scandals and catastrophes. He frequently uses the Unexpected Event during Survey Design in his research and is also writing several methodological papers about this method. One of these papers has recently been published in Research & Politics, while another one has been conditionally accepted at Political Science Research and Methods.
Web tracking: Augmenting web surveys with data on website visits, search terms, and app use
Instructors:
Joshua Claassen and Jan Karem Höhne, Leibniz University Hannover, German Centre for Higher Education Research and Science Studies (DZHW), Department of Research Infrastructure and Methods
Time:
Afternoon
Room:
TBD
Course Description:
Web surveys frequently fall short of accurately measuring digital behavior because they are prone to recall error (i.e., biased recalling and reporting of past behavior), social desirability bias (i.e., misreporting of behavior to comply with social norms and values), and satisficing (i.e., providing non-optimal answers to reduce burden). New advances in the collection of digital trace (or web tracking) data make it possible to directly measure digital behavior in the form of browser logs (e.g., visited websites and search terms) and app logs (e.g., duration and frequency of app use). Building on these advances, we will introduce participants to web surveys augmented with web tracking data. In this course, we provide a thorough overview of the manifold new measurement opportunities introduced by web tracking. In addition, participants obtain comprehensive insights into the collection, processing, analysis, and error sources of web tracking data as well as its application to substantive research (e.g., determining online behavior and life circumstances). Importantly, the course includes applied web tracking data exercises in which participants learn how to …
1) … operationalize and collect web tracking data,
2) … work with and process web tracking data,
3) … analyze and extract information from web tracking data.
The course has three overarching learning objectives: Participants will learn to a) independently plan and conceptualize the collection of web tracking data, b) decide on best practices for handling and analyzing data on website visits, search terms, and app use, and c) critically reflect on the opportunities and challenges of web tracking data and its suitability for empirical research in the social and behavioral sciences. Previous knowledge of web tracking data and programming skills are not required (beginner level). Participants should bring a laptop for the data-driven exercises.
Bio:
Joshua Claassen is a doctoral candidate and research associate at Leibniz University Hannover in association with the German Centre for Higher Education Research and Science Studies (DZHW). His research focuses on computational survey and social science with an emphasis on digital trace data.
Dr. Jan Karem Höhne is a junior professor at Leibniz University Hannover in association with the German Centre for Higher Education Research and Science Studies (DZHW). He is head of the CS3 Lab for Computational Survey and Social Science. His research focuses on new data forms and types for measuring political and social attitudes.
Survey response quality assessment: Conceptual approaches and practical indicators
Instructors:
Matthias Roth and Daniil Lebedev, GESIS
Time:
Afternoon
Room:
TBD
Course Description:
This short course will introduce participants to conceptual and practical approaches to assessing survey response quality, focusing on commonly used response quality indicators such as response patterns, response styles, response times and others. The course covers approaches to assess interviewer behaviour, inattentive or careless responding, and satisficing in both face-to-face and self-completion surveys. These frameworks will be presented through different dimensions of survey data quality – accuracy, representativity, validity and reliability. We will explore the theoretical foundations for evaluating response quality using survey data, probing questions, and paradata, aligned with these frameworks.
In addition to understanding the theoretical underpinnings of response quality, participants will engage in practical exercises that include working with the R packages resquin, psych and others to calculate response quality indicators using real-world datasets. Participants will learn how to create graphical representations of the calculated response quality indicators and how to flag low-quality responses. Special attention will be given to the strengths and limitations of response quality indicators in different survey modes.
The workshop is designed for researchers and practitioners in survey methodology who seek to improve the accuracy, representativity, and overall quality of their data. By the end of the course, participants will have gained insights and skills to apply response quality assessment techniques in their own survey research.
This course is targeted at an intermediate level, ideal for those with a basic understanding of survey methodology and R.
Bio:
Matthias Roth is a doctoral researcher in the Scale Design and Documentation team at GESIS – Leibniz Institute for the Social Sciences in Mannheim, Germany. In his thesis, he focuses on psychometric approaches to survey data harmonization and measurement. Additionally, he develops the R package resquin, which provides survey researchers with convenient functions to calculate response quality indicators.
Dr. Daniil Lebedev is a Postdoctoral Researcher in the Cross-Cultural Survey Methods team at GESIS – Leibniz Institute for the Social Sciences in Mannheim, Germany. He works on quality reporting and fieldwork monitoring for the European Social Survey as part of the ESS Core Scientific Team. His research focuses on data quality in web surveys, response patterns, and the use of paradata to study respondent behavior during survey completion as well as on the effect of the mode of data collection on data quality. Daniil has been a member of the European Survey Research Association (ESRA) Board since 2021.
Understanding Young Voices: Engagement, Ethics and Measurement in Surveys
Instructor:
Larissa Pople, CLS, UCL
Time:
Afternoon
Room:
TBD
Course Description:
Background: With respect to children’s perspectives and experiences, it is increasingly recognised that self-reported data from children should be considered the ‘gold standard’. There is accumulating evidence that child and parental accounts do not always coincide, especially in relation to children’s thoughts and feelings, or risky behaviours that might be concealed from parents. Thus, direct surveys of children provide valuable insights into the reality of children’s lives.
Learning objectives: This course is an introduction to the fundamentals of designing surveys for children and young people. It focuses on three key aspects of the survey design process that require careful consideration in surveys involving children: questionnaire design and measurement, participant engagement, and ethics. Participants will be provided with an overview of key steps of the survey design process that differ when respondents are children as opposed to adults, including: using qualitative methods to explore the relevance and suitability of topics; formulating well-worded questions and response scales; considering question order and flow; evaluating sources of measurement error; selecting appropriate mode(s) for data collection; developing age-appropriate participant materials; engaging ‘hard-to-reach’ groups; considering the role of parents and other adults as gatekeepers; and developing ethical practice that enshrines key principles such as informed consent, confidentiality, safeguarding and participant well-being.
Activities: Interactive elements will enable course participants to reflect upon key issues inherent in surveying children and young people, such as how to collect high-quality data that will be used by researchers and policy analysts, and how to address the real-life ethical challenges that can arise when children are involved as survey participants.
The Millennium Cohort Study (age 7, 11, 14 and 17 sweeps) will be used as the main survey example – alongside other studies that have foregrounded children’s voices in data collection – to illustrate key considerations central to the design process.
Bio:
Dr Larissa Pople is a Senior Research Fellow / Survey Manager at the UCL Centre for Longitudinal Studies, where she is also a seminar leader on the Survey Design module within the MSc Social Research Methods programme. Previously she worked for over 10 years as a Senior Researcher at The Children’s Society, the Police Foundation and UNICEF, where she led research programmes on well-being and childhood poverty, and co-authored numerous policy-focused publications, including several Good Childhood Reports, a book on children’s experiences of problem debt and chapters on youth crime and antisocial behaviour. Her research expertise lies in survey and qualitative research with children, as well as child-reported indicators of socioeconomic disadvantage, family relationship quality and subjective well-being, which was the topic of her PhD, gained from the University of Essex.