Unlock the Lab: Your guide to reading science like a scientist


About this listen

🪄 Created using NotebookLM, with all the benefits and blind spots of human editing.

In this episode of Codex Mentis, we explore the underlying machinery of scientific truth to understand how research reaches the public and why a healthy dose of scepticism is vital for its evaluation. The conversation begins with an overview of Dr Pablo Bernabeu’s interactive web application, which uses a peer-anchored design to help users calibrate their judgements by predicting community standards across forty-eight fictional research scenarios. We discuss how this tool trains participants to identify critical red flags, such as predatory publishing models, underpowered sample sizes and overblown conclusions that often mask mundane data behind sensationalised media narratives.

Transitioning to real-world research integrity, the episode reviews a systematic meta-analysis quantifying the prevalence of misconduct, and explores the pervasive culture of silence revealed by the stark discrepancy between those admitting to questionable research practices and those witnessing them in colleagues. We examine granular behaviours such as hypothesising after results are known (HARKing) and salami publication, before explaining the randomised response technique, a mathematical method used in large-scale surveys to elicit honest answers about sensitive misconduct.

The discussion also addresses qualitative findings that characterise academia as a 'bad barrel', in which systemic 'publish or perish' pressures and an over-reliance on journal impact factors actively discourage the publication of valid negative results. Finally, we analyse a massive quantitative study of over forty-one million papers, which reveals a structural paradox: artificial intelligence tools accelerate individual careers and impact while simultaneously contracting the collective focus of science, automating established centres of knowledge rather than exploring unknown frontiers.
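The randomised response technique can be illustrated with a short simulation. Below is a minimal sketch of the forced-response variant, under assumed design probabilities chosen purely for illustration (two-thirds of respondents answer truthfully, one-sixth are forced to say "yes", one-sixth to say "no"); it is not the exact design used in the surveys discussed in the episode.

```python
import random

def simulate_rrt(true_prevalence, n, p_truth=2/3, p_forced_yes=1/6, seed=42):
    """Simulate a forced-response randomised response survey.

    Each respondent privately runs a randomiser: with probability
    p_truth they answer honestly, with probability p_forced_yes they
    must answer "yes", and otherwise they must answer "no". The
    researcher sees only the final answers, never the randomiser.
    """
    rng = random.Random(seed)
    yes_count = 0
    for _ in range(n):
        engaged_in_misconduct = rng.random() < true_prevalence
        r = rng.random()
        if r < p_truth:
            answer = engaged_in_misconduct      # honest answer
        elif r < p_truth + p_forced_yes:
            answer = True                       # forced "yes"
        else:
            answer = False                      # forced "no"
        yes_count += answer

    # Observed yes-rate:  lam = p_truth * prevalence + p_forced_yes,
    # so the prevalence can be recovered by inverting this equation.
    lam = yes_count / n
    return (lam - p_forced_yes) / p_truth

# A single "yes" proves nothing about an individual respondent (it may
# have been forced by the randomiser), yet the aggregate estimate
# recovers the true prevalence.
estimate = simulate_rrt(true_prevalence=0.10, n=200_000)
```

The design buys honesty with plausible deniability: no individual answer can be traced back to actual misconduct, which is why large-scale integrity surveys use variants of this method for sensitive questions.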

Further details on the app and the workshop are available at https://pablobernabeu.github.io/applications-and-dashboards/unlock-the-lab

References (in order of appearance)

Bernabeu, P. (2026). Unlock the Lab: Your guide to reading science like a scientist (Version 1.0.0) [Computer software]. Zenodo. https://doi.org/10.5281/zenodo.19153148

Xie, Y., Wang, K., & Kong, Y. (2021). Prevalence of research misconduct and questionable research practices: A systematic review and meta-analysis. Science and Engineering Ethics, 27(4), Article 41. https://doi.org/10.1007/s11948-021-00314-9

Larsson, T., Plonsky, L., Sterling, S., Kytö, M., Yaw, K., & Wood, M. (2023). On the frequency, prevalence, and perceived severity of questionable research practices. Research Methods in Applied Linguistics, 2(3), Article 100064. https://doi.org/10.1016/j.rmal.2023.100064

Gopalakrishna, G., ter Riet, G., Vink, G., Stoop, I., Wicherts, J. M., & Bouter, L. M. (2022). Prevalence of questionable research practices, research misconduct and their potential explanatory factors: A survey among academic researchers in The Netherlands. PLoS ONE, 17(2), Article e0263023. https://doi.org/10.1371/journal.pone.0263023

Bruton, S. V., Medlin, M., Brown, M., & Sacco, D. F. (2020). Personal motivations and systemic incentives: Scientists on questionable research practices. Science and Engineering Ethics, 26(3), 1531–1547. https://doi.org/10.1007/s11948-020-00182-9

Hao, Q., Xu, F., Li, Y., & Evans, J. (2026). Artificial intelligence tools expand scientists’ impact but contract science’s focus. Nature, 679, 1237–1243. https://doi.org/10.1038/s41586-025-09922-y
