Published in the journal Proceedings of the National Academy of Sciences (PNAS), the study uses a validated text-based machine-learning model to estimate the likelihood of successful replication for more than 14,100 psychology research articles published since 2000 in six top-tier journals.
Undertaken in partnership with the University of Notre Dame and Northwestern University, both in the US, the study identifies several factors that increase the likelihood of research replicability – that is, the likelihood that if a study were conducted a second time using the same methods, it would produce the same results.
Overall, the authors found that experimental studies were significantly less replicable than non-experimental studies across all subfields of psychology. Mean replication scores – the relative likelihood of replication success – were 0.50 for non-experimental papers, compared with 0.39 for experimental papers, meaning that non-experimental papers are around 1.3 times more likely to replicate successfully.
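To spell out the arithmetic behind that figure: dividing the reported mean scores gives 0.50 ÷ 0.39 ≈ 1.28, which rounds to the stated factor of roughly 1.3.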
Dr Youyou Wu and co-authors say that this finding is worrying, given that psychology’s strong scientific reputation is at least partly built on its proficiency with experiments.