Codex Mentis: Science and technology to study cognition

Written by: Pablo Bernabeu

About this listen

Codex Mentis, produced by Dr Pablo Bernabeu, offers an exploration into cognitive science and its technologies with the assistance of advanced artificial intelligence. This podcast delves deep into how we think, perceive and interact with the world, dissecting both the profound mysteries of the human mind and the cutting-edge science and technology that illuminate its inner workings. Each episode presents a fascinating journey through diverse aspects of cognition. Beyond the theoretical, Codex Mentis demystifies the methodologies driving cognitive research. Contact: pcbernabeu@gmail.com
Episodes
  • Unlock the Lab: Your guide to reading science like a scientist
    Mar 27 2026

    🪄 Created using NotebookLM, with all the benefits and blind spots of human editing.

    In this episode of Codex Mentis, we explore the underlying machinery of scientific truth to understand how research reaches the public and why a healthy dose of scepticism is vital for its evaluation. The conversation begins with an overview of Dr Pablo Bernabeu’s interactive web application, which uses a peer-anchored design to help users calibrate their judgements by predicting community standards across forty-eight fictional research scenarios. We discuss how this tool trains participants to identify critical red flags, such as predatory publishing models, underpowered sample sizes and overblown conclusions that mask mundane data behind sensationalised media narratives.

    Turning to real-world research integrity, the episode reviews a systematic meta-analysis quantifying the prevalence of misconduct and explores the pervasive culture of silence revealed by the stark discrepancy between those admitting to questionable research practices and those witnessing them in colleagues. We examine specific behaviours such as hypothesising after results are known (HARKing) and salami publication, before explaining the randomised response technique, a statistical method used in large-scale surveys to elicit honest answers about sensitive misconduct.

    The discussion also addresses qualitative findings that characterise academia as a 'bad barrel', where systemic 'publish or perish' pressures and an over-reliance on journal impact factors actively discourage the publication of valid negative results. Finally, we analyse a quantitative study of over forty-one million papers revealing a structural paradox: artificial intelligence tools accelerate individual careers and impact while contracting the collective focus of science, automating established centres of knowledge rather than exploring unknown frontiers.
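    The randomised response technique mentioned above can be sketched with a short simulation. This is an illustrative forced-response variant, not the exact survey design used in the studies cited below; the prevalence, sample size and honesty probability are made-up parameters.

    ```python
    import random

    def randomized_response_estimate(true_prevalence, n, p_honest=0.8, seed=0):
        # Forced-response variant: with probability p_honest a respondent
        # answers truthfully; otherwise a private coin flip forces a random
        # 'yes' or 'no'. No single answer reveals a respondent's true status.
        rng = random.Random(seed)
        yes_count = 0
        for _ in range(n):
            engaged_in_qrp = rng.random() < true_prevalence   # hidden truth
            if rng.random() < p_honest:
                answer = engaged_in_qrp                       # honest answer
            else:
                answer = rng.random() < 0.5                   # forced answer
            yes_count += answer
        observed = yes_count / n
        # Invert E[observed] = p_honest * prevalence + (1 - p_honest) / 2
        return (observed - (1 - p_honest) / 2) / p_honest

    estimate = randomized_response_estimate(true_prevalence=0.15, n=100_000)
    ```

    Because the researcher only ever sees the aggregate 'yes' rate, individual respondents retain plausible deniability, yet the population prevalence can still be recovered algebraically.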

    Further details on the app and the workshop are available at https://pablobernabeu.github.io/applications-and-dashboards/unlock-the-lab

    References (in order of appearance)

    Bernabeu, P. (2026). Unlock the Lab: Your guide to reading science like a scientist (Version 1.0.0) [Computer software]. Zenodo. https://doi.org/10.5281/zenodo.19153148

    Xie, Y., Wang, K., & Kong, Y. (2021). Prevalence of research misconduct and questionable research practices: A systematic review and meta-analysis. Science and Engineering Ethics, 27(4), Article 41. https://doi.org/10.1007/s11948-021-00314-9

    Larsson, T., Plonsky, L., Sterling, S., Kytö, M., Yaw, K., & Wood, M. (2023). On the frequency, prevalence, and perceived severity of questionable research practices. Research Methods in Applied Linguistics, 2(3), Article 100064. https://doi.org/10.1016/j.rmal.2023.100064

    Gopalakrishna, G., ter Riet, G., Vink, G., Stoop, I., Wicherts, J. M., & Bouter, L. M. (2022). Prevalence of questionable research practices, research misconduct and their potential explanatory factors: A survey among academic researchers in The Netherlands. PLoS ONE, 17(2), Article e0263023. https://doi.org/10.1371/journal.pone.0263023

    Bruton, S. V., Medlin, M., Brown, M., & Sacco, D. F. (2020). Personal motivations and systemic incentives: Scientists on questionable research practices. Science and Engineering Ethics, 26(3), 1531–1547. https://doi.org/10.1007/s11948-020-00182-9

    Hao, Q., Xu, F., Li, Y., & Evans, J. (2026). Artificial intelligence tools expand scientists’ impact but contract science’s focus. Nature, 679, 1237–1243. https://doi.org/10.1038/s41586-025-09922-y

    38 mins
  • Modality switch effects: The brain friction of switching senses
    Feb 13 2026

    🪄 Created using NotebookLM, with all the benefits and blind spots of human editing.

    This episode explores whether the human mind functions as an abstract symbol processor or as a physical simulator deeply rooted in bodily experience. We delve into the 'modality switch effect', a phenomenon where shifting from one sensory modality to another, such as from sound to sight, incurs a measurable cognitive penalty. Foundational research found that people are consistently slower when verifying properties of concepts across different senses, suggesting that the brain must physically reconfigure its neural resources to understand language. However, later studies proposed that our brains might be efficient rather than thorough, often relying on 'quick and fuzzy' linguistic shortcuts before booting up heavy sensory simulations.

    New evidence from event-related potential studies shows that this sensory activation occurs as early as 160 milliseconds after a word is presented, reinforcing the idea that grounding is a fundamental part of accessing meaning. We also discuss findings that even second languages, typically learned in abstract classroom settings, recruit the body's native sensory systems. Furthermore, the latest research indicates that these perceptual simulations are so automatic that they activate even during 'shallow' tasks where participants are not explicitly trying to process word meaning.

    Finally, we consider what this means for a world increasingly dominated by flat screens and artificial intelligence, asking whether a lack of physical interaction might lead to a shallowing of human thought.

    References (in order of appearance)

    Pecher, D., Zeelenberg, R., & Barsalou, L. W. (2003). Verifying different-modality properties for concepts produces switching costs. Psychological Science, 14(2), 119–124. https://doi.org/10.1111/1467-9280.t01-1-01429

    Louwerse, M., & Connell, L. (2011). A taste of words: Linguistic context and perceptual simulation predict the modality of words. Cognitive Science, 35(2), 381–398. https://doi.org/10.1111/j.1551-6709.2010.01157.x

    Collins, J., Pecher, D., Zeelenberg, R., & Coulson, S. (2011). Modality switching in a property verification task: An ERP study of what happens when candles flicker after high heels click. Frontiers in Psychology, 2, Article 10. https://doi.org/10.3389/fpsyg.2011.00010

    Hald, L. A., Marshall, J.-A., Janssen, D. P., & Garnham, A. (2011). Switching modalities in a sentence verification task: ERP evidence for embodied language processing. Frontiers in Psychology, 2, Article 45. https://doi.org/10.3389/fpsyg.2011.00045

    Bernabeu, P., Willems, R. M., & Louwerse, M. M. (2017). Modality switch effects emerge early and increase throughout conceptual processing: Evidence from ERPs. In G. Gunzelmann, A. Howes, T. Tenbrink, & E. J. Davelaar (Eds.), Proceedings of the 39th Annual Conference of the Cognitive Science Society (pp. 1629–1634). Cognitive Science Society. https://doi.org/10.31234/osf.io/a5pcz

    Platonova, O., & Miklashevsky, A. (2025). Warm and fuzzy: Perceptual semantics can be activated even during shallow lexical processing. Journal of Experimental Psychology: Learning, Memory, and Cognition, 51(9), 1471–1496. https://doi.org/10.1037/xlm0001429

    Wentura, D., Shi, E., & Degner, J. (2024). Examining modal and amodal language processing in proficient bilinguals: Evidence from the modality-switch paradigm. Frontiers in Human Neuroscience, 18, Article 1426093. https://doi.org/10.3389/fnhum.2024.1426093

    30 mins
  • The dead salmon problem: Multiple tests, minimality and data-driven alternatives
    Jan 30 2026

    🪄 Created using NotebookLM, with all the benefits and blind spots of human editing.

    In 2009, a deceased Atlantic salmon was placed inside a functional magnetic resonance imaging scanner to test its calibration parameters. Although the subject was undeniably dead, the standard statistical software produced results suggesting the fish was actively contemplating human emotions. This bizarre outcome highlights a systemic fragility in modern science known as the multiple tests trap, where conducting thousands of tests without adjustment guarantees that random noise will eventually look like a discovery. Just as flipping a coin enough times will inevitably produce a streak of ten heads, asking too many questions of a large dataset ensures that a researcher will find significant results purely by luck.
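    The coin-flip intuition can be made concrete with a small simulation, my own illustration rather than an analysis from the episode. The sketch below runs many two-sample t-tests on pure noise, using the normal approximation to the .05 critical value; roughly five percent come out 'significant' even though no real effect exists anywhere in the data.

    ```python
    import math
    import random
    import statistics

    def false_positive_rate(n_tests=2000, n=50, t_crit=1.96, seed=1):
        # Run many two-sample t-tests on pure noise (both groups drawn
        # from the same distribution) and count how often |t| exceeds the
        # normal-approximation critical value for alpha = .05.
        rng = random.Random(seed)
        hits = 0
        for _ in range(n_tests):
            a = [rng.gauss(0, 1) for _ in range(n)]
            b = [rng.gauss(0, 1) for _ in range(n)]
            se = math.sqrt(statistics.variance(a) / n + statistics.variance(b) / n)
            t = (statistics.mean(a) - statistics.mean(b)) / se
            hits += abs(t) > t_crit
        return hits / n_tests

    rate = false_positive_rate()   # hovers around the nominal 5%
    ```

    Scale this up to the tens of thousands of voxels in a brain scan and some 'dead salmon' findings become a statistical certainty rather than a fluke.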

    Escaping this trap requires rigorous pre-planning and methodological self-restraint to avoid the statistical cheating known as hypothesising after the results are known. While the classical Bonferroni correction acts as a 'sledgehammer' by dividing the significance threshold by the total number of tests, sequential procedures like the Holm-Bonferroni method offer a more refined alternative that sacrifices less statistical power. Modern researchers often prefer data-driven strategies such as permutation testing, which shuffles experimental labels thousands of times to build a custom noise map specific to the dataset rather than relying on broad theoretical assumptions.
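    The two procedures named above can be sketched directly. Below is an illustrative implementation of the Holm step-down correction and of a label-shuffling permutation test on a difference of means; the function names and parameters are my own, not taken from the episode's sources.

    ```python
    import random
    import statistics

    def holm_bonferroni(pvalues, alpha=0.05):
        # Step-down Holm procedure: compare the k-th smallest p-value to
        # alpha / (m - k) and stop at the first non-rejection.
        m = len(pvalues)
        order = sorted(range(m), key=lambda i: pvalues[i])
        rejected = [False] * m
        for k, i in enumerate(order):
            if pvalues[i] <= alpha / (m - k):
                rejected[i] = True
            else:
                break
        return rejected

    def permutation_pvalue(group_a, group_b, n_perm=5000, seed=0):
        # Build a null distribution by shuffling the pooled labels, then
        # count how often a shuffled mean difference is at least as extreme
        # as the observed one (two-sided, with a +1 smoothing correction).
        rng = random.Random(seed)
        observed = abs(statistics.mean(group_a) - statistics.mean(group_b))
        pooled = list(group_a) + list(group_b)
        n_a = len(group_a)
        extreme = 0
        for _ in range(n_perm):
            rng.shuffle(pooled)
            diff = abs(statistics.mean(pooled[:n_a]) - statistics.mean(pooled[n_a:]))
            extreme += diff >= observed
        return (extreme + 1) / (n_perm + 1)
    ```

    For instance, with p-values [0.001, 0.04, 0.03] and alpha = 0.05, Holm rejects only the first: 0.001 clears 0.05/3, but the next smallest, 0.03, fails 0.05/2, so testing stops there.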

    Choosing between the precise spatial localisation of maximum t-statistic testing and the sensitive yet fuzzy cluster-based methods reveals that statistical truth is often a philosophical judgement call. Ultimately, the decision of how to define a family of tests depends on the logical structure of a scientific claim and the intent of the investigator. By embracing the principle of test minimality, researchers can move beyond mere p-value adjustments and toward a more robust, transparent and honest scientific practice.

    References

    Benjamini, Y., & Hochberg, Y. (1995). Controlling the false discovery rate: A practical and powerful approach to multiple testing. Journal of the Royal Statistical Society: Series B (Methodological), 57(1), 289–300. https://doi.org/10.1111/j.2517-6161.1995.tb02031.x

    Bennett, C. M., Miller, M. B., & Wolford, G. L. (2009). Neural correlates of interspecies perspective taking in the post-mortem Atlantic Salmon: An argument for multiple comparisons correction. Neuroimage, 47(Suppl 1), S125. https://doi.org/10.1016/S1053-8119(09)71202-9

    Cumming, G. (2014). The new statistics: Why and how. Psychological Science, 25(1), 7–29. https://doi.org/10.1177/0956797613504966

    Frane, A. V. (2021). Experiment-wise Type I error control: A focus on 2 × 2 designs. Advances in Methods and Practices in Psychological Science, 4(1), Article 2515245920985137. https://doi.org/10.1177/2515245920985137

    García-Pérez, M. A. (2023). Use and misuse of corrections for multiple testing. Methods in Psychology, 8, Article 100120. https://doi.org/10.1016/j.metip.2023.100120

    Groppe, D. M., Urbach, T. P., & Kutas, M. (2011). Mass univariate analysis of event-related brain potentials/fields I: A critical tutorial review. Psychophysiology, 48(12), 1711–1725. https://doi.org/10.1111/j.1469-8986.2011.01273.x

    Holm, S. (1979). A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics, 6(2), 65–70. https://www.jstor.org/stable/4615733

    Rubin, M. (2021). When to adjust alpha during multiple testing: A consideration of disjunction, conjunction, and individual testing. Synthese, 199(3-4), 10969–11000. https://doi.org/10.1007/s11229-021-03276-4

    41 mins