Programme
8.45 am - 9.00 am
Opening of the Colloquium.
9.00 am - 10.00 am
Abel Brodeur (University of Ottawa – Online Presentation).
Title: We Need to Talk about Mechanical Turk: What 22,989 Hypothesis Tests Tell Us about Publication Bias and p-Hacking in Online Experiments.
Abstract: Amazon Mechanical Turk is a very widely used tool in business and economics research, but how trustworthy are results from well-published studies that use it? Analyzing the universe of hypotheses tested on the platform and published in leading journals between 2010 and 2020, we find evidence of widespread p-hacking, publication bias and over-reliance on results from plausibly under-powered studies. Even ignoring questions arising from the characteristics and behaviors of study recruits, the conduct of the research community itself substantially erodes the credibility of these studies' conclusions. The extent of the problems varies across the business, economics, management and marketing research fields (with marketing especially afflicted). The problems are not getting better over time and are much more prevalent than in a comparison set of non-online experiments. We explore correlates of increased credibility.
10.00 am - 10.30 am
Coffee break.
10.30 am - 12.00 pm
Matt Page (Cochrane, Monash University, Australia).
Title: The REPRISE project: an evaluation of REProducibility and Replicability In Syntheses of Evidence.
Abstract: Investigations of transparency, reproducibility and replicability in science have been directed largely at individual studies. It is just as critical to explore these issues in systematic reviews, given their influence on decision-making and future research. In this talk, Dr Page presents data collected for the REPRISE (REProducibility and Replicability In Syntheses of Evidence) project. The objectives of the project were to evaluate, in a sample of systematic reviews of interventions: (1) how frequently methods are reported completely, and how often review data and other materials are shared publicly; (2) systematic reviewers' views on sharing review data and other materials, and their understanding of and opinions about replication of reviews; (3) the extent of variation in results when we independently reproduce meta-analyses; and (4) the extent of variation in results when we crowdsource teams to independently repeat the search, selection, data collection and analysis steps of a sample of original reviews.
12.00 pm - 1.00 pm
Lunch
1.00 pm - 2.30 pm
Tom Hardwicke (University of Melbourne)
Title: Improving science through meta-research
Abstract: From Mars rovers to vaccines, the fruits of the scientific endeavour are all around us. However, research is performed by fallible humans in an ecosystem that often allocates funding, awards, and publication prestige based on the aesthetics of research findings above the accuracy of research methods, creating opportunities for bias to pollute the scientific literature. In the shadow of scientific success lies a proliferation of exaggerated and spurious research findings that leads to waste, misinformation, and ineffective or even harmful interventions in applied settings. In this talk, I will present a series of studies that illustrate how meta-research ('research-on-research') can improve science by identifying problems and testing solutions. The scientific ecosystem is perpetually evolving; the discipline of meta-research presents an opportunity to use empirical evidence to guide its development and maximize its potential.
2.30 pm - 4.00 pm
Coffee and general discussion.