Program
All speakers for the Winter School are confirmed. However, please note that the program schedule may be subject to minor changes prior to the event.
Day 1 (Monday, 17/02)
09:00 – 10:30 Registration and welcome breakfast
10:30 – 12:30 Official opening: Old and New Directions in Behavioral Research
- Carlo Umiltà – Beware: What Went Wrong Before is Likely to Go Wrong Again
- Dorothy Bishop – How our Brains Make it Difficult to Think Like a Scientist
- Daniël Lakens – Which Research is Valuable Enough to Do Well?
12:30 – 13:30 Light lunch
13:30 – 15:00 Daniël Lakens – Concerns about Replicability, Theorizing, Applicability, Generalizability, and Methodology across Two Crises in Psychology
15:00 – 15:30 Coffee break
15:30 – 16:00 Presentation of Psicostat and group activities
16:00 – 17:30 Filippo Gambarota – Open Science Starter Pack (R, Quarto, GitHub, OSF, and how to integrate them)
Day 2 (Tuesday, 18/02)
09:00 – 10:30 Daniël Lakens – The Benefits of Preregistration and Registered Reports (see related article)
10:30 – 11:00 Coffee/Tea break
11:00 – 12:30 Jessica Flake – Measurement, not Schmeasurement (lecture)
12:30 – 13:30 Light lunch
13:30 – 15:30 Jessica Flake – Measurement, not Schmeasurement (workshop)
15:30 – 16:00 Coffee/Tea break
16:00 – 18:00 Group activities supervised by experts
Day 3 (Wednesday, 19/02)
09:00 – 10:30 Ettore Ambrosini – From Noise to Insight: Improving Neuroimaging Measurement for Credible and Reproducible Neuroscience
10:30 – 11:00 Coffee/Tea break
11:00 – 12:30 Lisa DeBruine – Infrastructure for Multi-Lab Studies (lecture)
12:30 – 13:30 Light lunch
13:30 – 16:00 Lisa DeBruine – Infrastructure for Multi-Lab Studies (workshop)
17:30 – 18:45 Guided tour of Palazzo Bo
19:00+ Social dinner
Day 4 (Thursday, 20/02)
09:00 – 10:30 Ulrich Schimmack – A Tutorial on Hierarchical Factor Analysis (see related post)
10:30 – 11:00 Coffee/Tea break
11:00 – 12:30 Giulia Calignano & Livio Finos – Multi-Lab and Multiverse: From Preprocessing to Data Analysis
12:30 – 13:30 Light lunch
13:30 – 15:30 Livio Finos – Post-selection Inference in Multiverse Analysis (see related article, openly available here)
15:30 – 16:00 Coffee break
16:00 – 18:00 Group activities supervised by experts
Day 5 (Friday, 21/02)
09:00 – 10:30 Massimo Grassi & Filippo Gambarota – Music Ensemble: An Example of Multi-lab Investigating a Rare Population (Expert Musicians) (see related preregistration)
10:30 – 11:00 Coffee/Tea break
11:00 – 12:30 Presentation of group activities
12:30 – 13:30 Light lunch
13:30 – 16:00 Presentation of group activities
16:00+ Aperitivo
Day 6 (Saturday, 22/02)
10:00+ (time to be defined) – Walk in downtown Padova (optional and weather permitting)
Abstracts
This section is in progress; some abstracts will be added or revised over the next few weeks.
How our Brains Make it Difficult to Think Like a Scientist
Emeritus Professor of Developmental Psychology, University of Oxford
When scientific studies turn out not to be reproducible, we tend to blame it on lack of expertise or the pressure of perverse incentives to publish. While both these factors are real, it is important to also recognise the role of human cognitive biases that can make it hard to adopt the objective, self-critical mindset that is needed to do good science. I will focus particularly on confirmation bias, need for narrative and asymmetric moral reasoning as factors leading to p-hacking, publication bias and citation bias, and show how a cumulative science will only work if we take steps to counteract them.
Concerns about Replicability, Theorizing, Applicability, Generalizability, and Methodology across Two Crises in Psychology
Eindhoven University of Technology
Twice in the history of psychology there has been a crisis of confidence. The first started in the 1960s and lasted until the end of the 1970s, and the second crisis dominated the 2010s. In both these crises researchers discussed fundamental concerns about the replicability of findings, the strength of theories in the field, the societal relevance of research, the generalizability of effects, and problematic methodological and statistical practices. On the basis of extensive quotes drawn from articles published during both crises, I explore the similarities and differences between these crises in psychology. I review the changes in research practices that emerged in response to these concerns, as well as the concerns that have not led to noticeable improvements, and reflect on what it would take to prevent a third crisis a few decades from now.
Open Science Starter Pack
Department of Developmental Psychology and Socialization, University of Padua
I will present a comprehensive toolkit designed to enhance both the reproducibility and productivity of your research. We’ll start with R, exploring modern approaches and essential packages for data importation, manipulation, and visualization, enabling you to create insightful tables and figures. Next, I will introduce Quarto, a literate programming framework that seamlessly merges code with narrative to produce dynamic reports, papers, and presentations. To ensure transparency and efficiency, I will delve into version control systems with Git and GitHub. Finally, I will integrate all these elements using the Open Science Framework (OSF), which offers an online citable space (with a DOI), transparent version control, license management, and robust tool integrations. This powerful and scalable toolkit is crucial for advancing open science and addressing the reproducibility crisis.
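As a rough illustration of how these tools fit together, the sketch below strings a typical data import, manipulation, and visualization workflow together in R; the file name scores.csv and its columns are hypothetical, and the same chunk could live inside a Quarto document, be version-controlled with Git/GitHub, and be archived on the OSF.

```r
# Minimal sketch of the workflow described above. The file "scores.csv" and
# its columns `group` and `score` are hypothetical; package choices are
# illustrative, not prescriptive.
library(readr)    # data import
library(dplyr)    # data manipulation
library(ggplot2)  # data visualization

scores <- read_csv("scores.csv")

# Summarise the outcome by group
scores |>
  group_by(group) |>
  summarise(mean_score = mean(score), sd_score = sd(score), n = n())

# A simple figure; inside a Quarto (.qmd) document this chunk would render
# directly into the report next to the narrative text, and the whole project
# could be tracked with Git/GitHub and archived on the OSF.
ggplot(scores, aes(x = group, y = score)) +
  geom_boxplot() +
  theme_minimal()
```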
Measurement, not Schmeasurement
Department of Psychology, McGill University
We assume that psychological measures produce meaningful numbers: that higher satisfaction scores indeed represent more satisfaction. Measurement is a fundamental part of psychological research, and our scores require thorough and transparent evaluation of their validity. These sessions will cover the main sources of validity evidence and how to evaluate and plan validation studies for psychological measures, and will offer an opportunity to apply these concepts to participants’ own areas of research.
From Noise to Insight: Improving Neuroimaging Measurement for Credible and Reproducible Neuroscience
Department of Neuroscience, University of Padua
Accurate and precise measurement is the cornerstone of (neuro)scientific research, directly impacting the validity of theoretical models and data interpretation. I will explore the key challenges in measuring brain activity, particularly with EEG, including the influence of variability, noise, and artifacts on reliability and validity. We will discuss how recent advances in neuroimaging techniques and open science practices are addressing these challenges in an effort to improve the reproducibility and robustness of neuroimaging measurements. By highlighting practical tools and innovative approaches, this presentation aims to provide insights into overcoming measurement limitations, strengthening methodological rigor, and ultimately enhancing the credibility and trustworthiness of neuroimaging data in neuroscience research.
Infrastructure for Multi-Lab Studies
School of Psychology & Neuroscience, University of Glasgow
This session is aimed at people wanting to start a big team science group from the grassroots, with little or no financial or administrative support. It will cover the organisational aspects of how you recruit members, set up governance, communicate with the team, set up and use social media accounts, use tools for collaboration, and set up a website.
Music Ensemble: An Example of Multi-lab Investigating a Rare Population (Expert Musicians)
Massimo Grassi (a), Filippo Gambarota (b)
(a) Department of General Psychology, University of Padua; (b) Department of Developmental Psychology and Socialization, University of Padua
We will present our experience with the multilab study “Music Ensemble,” which investigates the working memory of musicians and non-musicians and now features more than 30 units/labs. Music Ensemble is an interesting case study because it explores a rare population (i.e., active expert musicians with more than 10 years of continuous musical training) using a format that could be applied to other rare populations (e.g., patients). We will discuss the origin of the idea, the initial steps of recruiting a core team of expert scientists, defining the protocol, and recruiting the data collection units. We will outline the various decisions and options considered throughout the process. Particular attention will be given to the power analysis, performed within a meta-analysis framework that accounts for the multilevel (i.e., multilab) data structure of the study.
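For a flavour of what a power analysis in a meta-analytic (multilab) framework can look like, here is a minimal simulation-based sketch; the number of labs, per-group sample size, effect size, and heterogeneity are placeholders, and the actual Music Ensemble analysis may differ in its details (see the preregistration).

```r
# Hedged sketch of a simulation-based power analysis for a multi-lab design
# analysed with a random-effects meta-analysis. All numbers (number of labs,
# per-group sample size, effect size, heterogeneity) are placeholders and do
# not reproduce the preregistered Music Ensemble analysis.
library(metafor)

power_multilab <- function(k_labs = 30, n_per_group = 25,
                           delta = 0.30, tau = 0.10,
                           n_sim = 1000, alpha = 0.05) {
  sig <- logical(n_sim)
  for (s in seq_len(n_sim)) {
    # Lab-specific true effects (between-lab heterogeneity tau)
    theta <- rnorm(k_labs, mean = delta, sd = tau)
    # Approximate sampling variance of a standardized mean difference
    vi <- 2 / n_per_group + theta^2 / (4 * n_per_group)
    # Observed per-lab effect sizes and random-effects meta-analysis
    yi <- rnorm(k_labs, mean = theta, sd = sqrt(vi))
    fit <- rma(yi = yi, vi = vi, method = "REML")
    sig[s] <- fit$pval < alpha
  }
  mean(sig)  # estimated power: proportion of significant pooled effects
}

set.seed(1)
power_multilab()
```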
PIMA: Post-selection Inference in Multiverse Analysis
Department of Statistical Sciences, University of Padua
The Post-selection Inference approach to Multiverse Analysis (PIMA) is a flexible and general inferential method that considers all possible models within a multiverse of reasonable analyses. While multiverse analysis describes the variability of results, PIMA introduces an inferential procedure to test whether a given predictor is associated with the outcome by combining information from all possible models. The approach incorporates selective inference techniques, ensuring that the null hypothesis can be confidently rejected for any specification that demonstrates a significant effect after multiplicity corrections. During the presentation, we will also include R code examples and a discussion of practical results.
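As a simplified conceptual illustration (not the actual PIMA procedure, which combines tests across the multiverse with dedicated selective inference techniques), the sketch below fits every model in a small multiverse, collects the p-values for a focal predictor, and applies a conservative Bonferroni correction; the data and variable names are hypothetical.

```r
# Simplified illustration of testing a focal predictor across a multiverse of
# models. This is a conceptual stand-in, not the actual PIMA procedure; the
# data, the variable names (`y`, `x`, `z1`, `z2`), and the Bonferroni
# correction used here are all illustrative choices.
set.seed(1)
n  <- 200
z1 <- rnorm(n); z2 <- rnorm(n); x <- rnorm(n)
y  <- 0.25 * x + 0.5 * z1 + rnorm(n)
dat <- data.frame(y, x, z1, z2)

# Multiverse: every combination of the two optional covariates
covariate_sets <- list(character(0), "z1", "z2", c("z1", "z2"))

# Fit each specification and collect the p-value of the focal predictor `x`
p_values <- sapply(covariate_sets, function(covs) {
  rhs <- paste(c("x", covs), collapse = " + ")
  fit <- lm(reformulate(rhs, response = "y"), data = dat)
  summary(fit)$coefficients["x", "Pr(>|t|)"]
})

# Correct for having examined the whole multiverse before claiming an effect
data.frame(specification = sapply(covariate_sets, paste, collapse = "+"),
           p_raw = p_values,
           p_adjusted = p.adjust(p_values, method = "bonferroni"))
```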
From Uncertainty to Insight: Multiverse Analysis in Multi-Lab Collaborative Research
Department of Developmental Psychology and Socialization, University of Padua
The multiverse approach, when applied to multi-lab studies, strengthens research by exploring multiple data analysis paths arising from different laboratories’ traditions. Instead of relying on a single pipeline, it embraces variability to investigate how different preprocessing choices affect outcomes. In multi-lab contexts, where variations in setups and procedures naturally occur, this approach makes results more robust and generalizable. By embracing methodological diversity, the multiverse approach shifts the focus from finding one “correct” method to understanding a range of plausible outcomes. It promotes transparency and reduces the likelihood of false positives. Combined with multi-lab collaboration, it enhances the reproducibility of findings and provides a clearer, more reliable, and shared understanding of the effect under investigation across diverse contexts.
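A purely illustrative sketch of this idea is given below: a grid of hypothetical preprocessing decisions defines the multiverse, the effect of interest is re-estimated under every pipeline, and the spread of estimates, rather than any single result, is what gets inspected.

```r
# Purely illustrative sketch of a descriptive multiverse over preprocessing
# choices in a simulated multi-lab dataset. The decisions in the grid
# (outlier cutoff, transformation) and all numbers are placeholders.
set.seed(2)
dat <- data.frame(
  lab       = factor(rep(1:5, each = 60)),
  condition = rep(c("A", "B"), times = 150),
  rt        = rlnorm(300, meanlog = 6, sdlog = 0.4)  # fake reaction times (ms)
)

# Grid of preprocessing decisions: each row is one analysis pipeline
grid <- expand.grid(
  outlier_cutoff = c(2, 2.5, 3),       # SD-based trimming of extreme RTs
  transform      = c("none", "log"),
  stringsAsFactors = FALSE
)

# Re-estimate the condition effect (adjusting for lab) under every pipeline
grid$estimate <- sapply(seq_len(nrow(grid)), function(i) {
  d <- dat
  d <- d[abs(as.vector(scale(d$rt))) < grid$outlier_cutoff[i], ]
  if (grid$transform[i] == "log") d$rt <- log(d$rt)
  coef(lm(rt ~ condition + lab, data = d))["conditionB"]
})

# Inspect the spread of estimates across the multiverse, not a single result
grid
```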