Abstract
This paper explores judgements about the replicability of social and behavioural sciences research and what drives those judgements. Using a mixed methods approach, it draws on qualitative and quantitative data elicited from groups using a structured approach called the IDEA protocol (‘investigate’, ‘discuss’, ‘estimate’ and ‘aggregate’). Five groups of five people with relevant domain expertise evaluated 25 research claims that were subject to at least one replication study.
Participants assessed the probability that each of the 25 research claims would replicate (i.e. that a replication study would find a statistically significant result in the same direction as the original study) and described the reasoning behind those judgements. We quantitatively analysed possible correlates of predictive accuracy, including self-rated expertise and updating of judgements after feedback and discussion. We qualitatively analysed the reasoning data to explore the cues, heuristics and patterns of reasoning used by participants. Participants achieved 84% classification accuracy in predicting replicability. Those who engaged in a greater breadth of reasoning provided more accurate replicability judgements. Some reasons were more commonly invoked by more accurate participants, such as ‘effect size’ and ‘reputation’ (e.g. of the field of research). There was also some evidence of a relationship between statistical literacy and accuracy.