
The TELSIG Podcast

Written by: Phil Martin

About this listen

Does technology help or hinder learning? How can we make better use of digital tools in teaching? Phil Martin from the University of York dives into the neon-lit underworld of technology enhanced learning through conversations with experts in teaching and learning design. Each episode looks at how educators can stay current with their use of learning tech in this ever-changing landscape.

Copyright 2024 All rights reserved.
Episodes
  • The festive roundtable update of fun. With James Lamont and Deanne Cobb-Zygadlo
    Dec 18 2025

    Deanne, James and I gather around a virtual Yuletide fireplace, roast chestnuts and perform that time-honoured festive tradition of chewing over key moments in learning tech and EAP from the year gone by. Much as the shepherds probably did.

    Is a full in-class digital detox a good idea, and is this a weird thing to suggest in a technology enhanced learning podcast? Did we ever figure out whether students real-time subtitling us is a problem? Would any of us pay for AI-generated music? Did we get carried away with flipped learning after COVID?

    As we look back on the debates that have lit up 2025, we'd like to wish all our listeners an awesome holiday and a happy new year.

    Further reading

    Listen to Klaus Mundt and Michael Groves on TELSIG

    Eaton, S. E. (2025). Global Trends in Education: Artificial Intelligence, Postplagiarism, and Future‑focused Learning for 2025 and Beyond – 2024–2025 Werklund Distinguished Research Lecture. International Journal for Educational Integrity, 21(12). https://link.springer.com/content/pdf/10.1007/s40979-025-00187-6.pdf

    Flenady, G., & Sparrow, R. (2025). Cut the bullshit: why GenAI systems are neither collaborators nor tutors. Teaching in Higher Education, 1–10. https://doi.org/10.1080/13562517.2025.2497263

    Kirschner, P. (2025). When phones go out the window, learning comes in the door. Kirschnered. Available at: http://www.kirschnered.nl/2025/11/01/when-phones-go-out-the-window-learning-comes-in-the-door/

    Plate, D., & Hutson, J. (2025). The intellectual bankruptcy of anti-AI academic alarmism: a rebuttal. Teaching in Higher Education, 1–12. https://doi.org/10.1080/13562517.2025.2562594

    Timecodes

    00:00 Intro to the guests
    02:41 James’ new paper on student use of translation
    10:24 The case for digital detox
    14:03 Pedagogy leads
    16:41 Phil’s phones away experiment
    19:55 Has flipped learning failed?
    26:03 Do students still need English?
    29:31 Do unsupervised assessments provide evidence of learning?
    34:50 The AI bullshit paper
    38:04 Plug for the TELSIG symposium
    39:54 Would you pay for AI music?
    46:47 Reverting to what makes for good learning
    51:35 TELSIG’s Christmas message

    Guest bios

    James Lamont is an Associate Lecturer in Skills Development, Department of Education, University of York in the United Kingdom. His research interests include the effects of generative AI on student thought processes and outputs, and how universities can adapt to this new environment.

    Deanne Cobb-Zygadlo has been an EAP tutor at Nazarbayev University since 2015. She is the co-coordinator of the Technology-Enhanced Learning Special Interest Group (TELSIG) within BALEAP, the organization that accredits the NU Foundation Year Program. She is also a member of the ENAI (European Network for Academic Integrity) Policies Working Group.

    52 mins
  • The AI Assessment Scale reloaded. With Mike Perkins
    Nov 25 2025
    I’m joined today by Mike Perkins to talk about the AI Assessment Scale, following the publication of the latest version of the scale in the Journal of University Teaching and Learning Practice in September. The AI Assessment Scale has been used by more than 350 institutions globally, has been translated into 30 languages, and is recognised by regulators such as TEQSA (Tertiary Education Quality and Standards Agency) in Australia. Mike Perkins and co-authors Jasper Roe, Leon Furze and Jason MacVaugh have been recognised as guiding lights for educators around the world responding to the widespread availability of Gen AI tools.

    Mike and I talk about how the team’s thinking has changed on some of the topics related to AI and assessment, their responses to some of the critiques of the original scale, comparisons with other models of AI integration, the international response to the AIAS, and other topics.

    References

    Perkins, M., Roe, J., & Furze, L. (2025). Reimagining the Artificial Intelligence Assessment Scale (AIAS): A refined framework for educational assessment. Journal of University Teaching and Learning Practice, 22(7). https://doi.org/10.53761/rrm4y757

    Perkins, M., Roe, J., & Furze, L. (2025). How (not) to use the AI Assessment Scale. Journal of Applied Learning and Teaching, 8(2). https://doi.org/10.37074/jalt.2025.8.2.15

    Guest bio

    Assoc. Prof. Dr. Mike Perkins serves as Head of the Centre for Research & Innovation at British University Vietnam (BUV). With a PhD in Management from the University of York, his research journey has evolved from studying performance management in local policing to becoming a leading voice in the integration of Generative AI (GenAI) in higher education. Dr. Perkins is renowned for developing the AI Assessment Scale (AIAS), translated into 30 languages and implemented across more than 350 schools and universities worldwide. His work addresses the critical intersection of technology, academic integrity, and the ethical implementation of AI in educational settings. He leads research on the equitable application of GenAI and provides guidance to educators and policymakers responding to the challenges of the new GenAI landscape.

    Dr. Perkins' expertise has established him as a sought-after advisor to educational institutions globally, supporting them in ethically integrating Generative AI to enhance student learning while preserving academic integrity. Beyond his work with AI, Dr. Perkins has conducted significant research on broader academic integrity issues, including investigations into diploma mills and student behavior during the COVID-19 pandemic's shift to online learning. His expertise spans performance management, academic integrity, and the strategic integration of emerging technologies in educational settings.

    Check out the AI Assessment Scale website for the most up-to-date information and resources: https://aiassessmentscale.com/

    Follow Mike on LinkedIn: https://www.linkedin.com/in/mgperkins/

    Further reading

    Corbin, T., Dawson, P., & Liu, D. (2025). Talk is cheap: why structural assessment changes are needed for a time of GenAI. Assessment & Evaluation in Higher Education, 1–11. https://doi.org/10.1080/02602938.2025.2503964

    Newton, P. M., & Draper, M. J. (2025). Widespread use of summative online unsupervised remote (SOUR) examinations in UK higher education: ethical and quality assurance implications. Quality in Higher Education, 31(1), 127–141. https://doi.org/10.1080/13538322.2025.2521174

    Perkins, M., Furze, L., Roe, J., & MacVaugh, J. (2024). The Artificial Intelligence Assessment Scale (AIAS): A Framework for Ethical Integration of Generative AI in Educational Assessment. Journal of University Teaching and Learning Practice, 21(06), Article 06. https://doi.org/10.53761/q3azde36

    Timecodes

    00:00 Introduction
    03:42 Mike’s background in AI and assessment
    06:24 Links to EAP
    08:12 Differences in the Australian and UK post-COVID responses to assessment
    12:03 How the thinking behind the new AIAS has changed
    15:20 What are we learning with gen AI?
    17:44 Examples of AI in teaching and assessment
    21:00 Assessment for and of learning
    26:57 AIAS and the two-lane approach
    29:57 Discursive versus structural changes
    36:00 Should training be mandatory?
    38:52 Future directions
    44:48 What makes a successful writing team?
    48 mins
  • How Gen AI is disrupting academic publishing. With Samantha Curle
    Oct 21 2025

    Today I’m talking to Samantha Curle from the University of Bath about her recent article, Generative AI and the future of writing for publication: insights from applied linguistics journal editors.

    The peer review process is under increasing strain. With the explosion of submissions to academic journals since ChatGPT became available to all, editorial boards are struggling to keep pace. Peer reviewers are in short supply, and this has prompted (pardon the pun) an increased use of AI in the review process itself, leading to concerns that some articles may be making it to print without having been subjected to the appropriate level of scrutiny.

    Samantha and I dig into the data from her study of journal editors and discuss the cracks that are appearing in the system. We also talk about pressure to publish, questionable research practices, the replication crisis, opaque data sets, the future of publishing and more. Samantha also offers advice to teacher researchers looking to publish, and her plans for future projects.

    Guest bio

    Samantha Curle is a Reader in Applied Linguistics at the University of Bath and Adjunct Professor at Khazar University, Azerbaijan. She is co-founder of the Cambridge ReachSci Mini-PhD on Multilingual Education and a Fellow of the Higher Education Academy and the Royal Society of Arts. She read for her DPhil in Education (Applied Linguistics) at the University of Oxford, having previously read for two MSc degrees there. Her research focuses on English Medium Instruction (EMI) in higher education, examining factors that influence academic achievement, such as English proficiency and psychological constructs. Her research spans four continents (Africa, Asia, Europe, South America) and she has published in journals such as Language Teaching and Journal of Engineering Education.

    References

    Moorhouse, B., Consoli, S., & Curle, S. (2025). Generative AI and the future of writing for publication: insights from applied linguistics journal editors. Applied Linguistics Review. https://doi.org/10.1515/applirev-2025-0021

    Samantha’s ResearchGate profile: https://www.researchgate.net/profile/Samantha-Curle

    Follow Samantha on LinkedIn: https://www.linkedin.com/in/samanthacurle/

    Further reading

    Hinz, A. (2025). Navigating Generative AI in Academic Publishing: An Interview With Benjamin Luke Moorhouse. De Gruyter Conversations. Available at: https://blog.degruyter.com/navigating-generative-ai-in-academic-publishing-an-interview-with-benjamin-luke-moorhouse/

    Gibney, E. (2025). Scientists hide messages in papers to game AI peer review. Nature. Available at: https://doi.org/10.1038/d41586-025-02172-y

    Kurzgesagt - In a nutshell. (2025). AI Slop is destroying the internet. [Video]. Available at: https://www.youtube.com/watch?v=_zfN9wnPvU0 [Accessed 16th October 2025].

    Simons, J. (2024) Harvard’s Gino Report Reveals How A Dataset Was Altered. Data Colada. Available at: https://datacolada.org/118 [Accessed 11th August 2025]

    Timecodes

    00:00 Introduction
    01:49 Samantha Curle
    06:22 The spike in submissions
    11:05 Why the peer review process was already struggling
    13:09 AI generated reviews
    15:50 The importance of rigorous peer review
    24:31 Rethinking the process
    29:03 Questionable research practices
    34:05 What has changed in the wake of the replication crisis?
    35:34 The difficulty of accessing data sets
    40:35 Who can instigate change?
    44:07 Advice for teachers looking to publish
    48:39 Samantha’s future projects

    51 mins