In episode 118 of This Week in Quality, co-hosts Ben Dowen and Simon Tomes are joined by community members Gary Hawkes, Maithilee Chunduri and Richard Adams for the first live session of 2026. Recorded on Friday 9 January, the episode opens with New Year energy, MoT goals, badges and “fill up your MoT profile” prompts, plus a reminder about the MoT Ambassadors programme and all the ways people can get involved in events this year.
From there, the conversation quickly anchors on a powerful article about AI, testing and getting “back to basics.” The group explore over-reliance on AI, shallow understanding and the blind spots that appear when tools drive the work instead of human analysis, collaboration and shared understanding. Simon and Ben keep returning to essentials like critical thinking, systems thinking, communication and risk focus, picking up key lines from the article such as “AI is most valuable once humans have already done the thinking” and “AI helps us move faster, but humans still decide where to run and why.”
Across the episode, the panel share real examples of using AI in practice. Ben talks through his Playwright work, using AI-powered tooling to add data-test-ids, only to catch a subtle but important mistake later during testing. Richard describes using AI agents with Jira, root cause analysis and Confluence to surface risky areas and guide exploratory testing, highlighting how the right context makes AI genuinely helpful. Gary walks through how his team tried AI coding tools, what happened when the initial push was “faster and cheaper,” and how the developers themselves became more cautious and selective over time. Maithilee shares how AI is now a core part of how she learns, stressing the need for clear goals, good prompts and not taking outputs at face value.
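As an aside rather than material from the episode, here is a minimal sketch of the kind of data-test-id-driven Playwright check that Ben’s AI-assisted tooling targets; the URL, test ids and flow below are hypothetical, and the config comment assumes you want Playwright to read `data-test-id` rather than its default `data-testid` attribute.

```typescript
// playwright.config.ts (excerpt), if your app uses data-test-id attributes:
// import { defineConfig } from '@playwright/test';
// export default defineConfig({ use: { testIdAttribute: 'data-test-id' } });

import { test, expect } from '@playwright/test';

// Hypothetical flow: element ids and URL are illustrative, not from the episode.
test('submits the checkout form', async ({ page }) => {
  await page.goto('https://example.com/checkout');

  // Stable selectors via test-id attributes instead of brittle CSS or XPath.
  await page.getByTestId('email-input').fill('user@example.com');
  await page.getByTestId('submit-order').click();

  // The human-owned part: a tool can suggest ids, but a person decides what
  // "correct" means and which assertions actually matter.
  await expect(page.getByTestId('order-confirmation')).toBeVisible();
});
```

The point the panel keep making applies here too: the tooling can generate the ids and even the script, but it takes a person reviewing the result to spot the subtle mistake Ben describes.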
Threaded through it all are themes of accountability, risk appetite and keeping humans in the loop of AI-assisted quality work. The group discuss exploratory testing supported by AI, where tools help with ideas, heuristics and note taking, but humans still own the charters, decisions and debriefs. They return several times to the idea that AI is a tool, not a solution for quality work, and that testers add value when they question, validate and refuse to outsource judgement. By the end of the hour, one message is clear: AI might run fast, but meaningful quality still depends on people who ask good questions, understand context and are willing to stay accountable for the outcomes.
#ThisWeekInQuality
#AIandTesting
#ExploratoryTesting
#HumanInTheLoop
#QualityEngineering