• #12 - Part 1 - 2025 Smol Der Year in Review with Jim Duffy
    Jan 8 2026

    Jim Duffy and host James Wagenheim ring in the New Year with a two-part series covering great moments from the Smol Der podcast in 2025. We discuss AI use in everything from developing apps and robotics to creative writing, cybersecurity, magic, education, and music.

    If you like this episode, please hit the like button or subscribe. Or, if you want to support the podcast, you can find us on Patreon.

    https://patreon.com/SmolDerPodcast?utm_medium=unknown&utm_source=join_link&utm_campaign=creatorshare_fan&utm_content=youtube

    00:01:38 Welcome + New Year setup (James)

    00:05:29 Clip roundup framing: what’s worth revisiting from 2025

    00:10:02 SEO, discovery, and why “generic AI content” won’t last

    00:12:10 Security lens: pentesting, “script kiddies,” and lowered barriers

    00:18:01 Jailbreaks + agent risk: prompt injection, MCP, and tool access

    00:21:49 Creators + copyright: training data, royalties, and looming precedent

    00:28:37 Taste as moat: “this vs that,” craft, and human judgment

    00:37:39 Wonder drills + kid-safe AI browsing (education meets product)

    00:52:29 AI in schools: empathy, relationships, and real learning incentives

    01:00:41 AI personalities & teens: attachment, manipulation risk, and guardrails

    LINKS AND REFERENCES

    Model Context Protocol (Anthropic announcement)

    https://www.anthropic.com/news/model-context-protocol

    Model Context Protocol specification (official site)

    https://modelcontextprotocol.io/specification/2025-11-25

    Pliny “L1B3RT4S” jailbreak prompt repository (GitHub)

    https://github.com/elder-plinius/L1B3RT4S

    OpenAI browser agents and prompt injection risk (CyberScoop)

    https://cyberscoop.com/openai-chatgpt-atlas-prompt-injection-browser-agent-security-update-head-of-preparedness/

    Google Gemini (official)

    https://gemini.google.com/

    Character.AI (official)

    https://character.ai/

    Hello Wonder — kid-safe AI companion and browser

    https://w.hellowonder.ai/archive/home-nov

    Global Online Academy — Lucas Ames author page

    https://globalonlineacademy.org/insights/authors/lucas-ames

    Penn & Teller: Fool Us (official CW page)

    https://www.cwtvpr.com/the-cw/shows/penn-teller-fool-us/about/

    The Rest Is History podcast (official site)

    https://therestishistory.com/

    1 hr and 9 mins
  • #11 - AI Music, Human Taste, and the Future of SEO — with Digital Entrepreneur & Musician Will Mason
    Dec 29 2025
    In this episode of Smol Der – the professional’s applied AI podcast – host James Wagenheim sits down with Will Mason (digital entrepreneur and musician) for a practical conversation on what generative AI is actually changing and what it still cannot replace. Will argues that iteration is the root of creativity, explains why humans still win on taste and originality, and shares a hard-earned lesson: AI is great at novelty, but terrible at knowing when something is “finished.” From there, the discussion moves into the builder’s toolkit—vibe coding, state management, queuing, and the modern stack Will uses to ship products—plus the realities of security, outages, and shipping in imperfect systems. We also go into SEO in an AI-saturated internet: why “generic AI blog posts” won’t last, how to compete with higher-effort content and tools, and why Will believes most founders should start with content and audience-building, not paid ads.

    00:00 Intro + the “iteration” thesis

    02:20 From guitarist → marketer → entrepreneur

    08:34 Why all music is iteration (and where AI differs)

    14:17 AI isn’t “AI all the way down” + what humans still contribute

    15:28 “AI is bad at understanding a finished product”

    17:45 Will’s stack (Next.js, Vercel, Supabase, Modal) + CLI workflows

    24:31 Vibe coding, security, and real-world risk

    29:58 Music Made Pro + Change Lyric: products, demand, and automation

    43:15 Copyright, moderation, and why restrictions drive tool migration

    49:03 Patents, value, and why “novelty goes to zero” (theory)

    1:01:18 SEO in an AI-first world: tools, long-tail, and differentiation

    1:13:00 What’s next: orchestration and chaining existing libraries

    KEY TAKEAWAYS

    * Iteration—not inspiration—is still the real engine of great music; AI mainly accelerates loops of “try, listen, tweak” rather than replacing human direction.

    * Music Made Pro shows how AI can power high-touch, paid creative services (like lyric changes with original vocals) while still keeping a human in the loop for quality and nuance.

    * ChangeLyric is intentionally “rough but fast,” giving experienced producers a way to do bulk lyric swaps and then refine the results in their DAW.

    * Modern AI dev stacks (Next.js, Vercel, Supabase, Modal, Claude Code, etc.) let solo founders ship complex apps quickly—but they also raise new security, cost, and product-durability questions.

    * As tools like Sora and viral AI memes like Bird Game 3 spread, the big unresolved issue is who owns value and how individual creators get paid when models are trained on their work.

    LINKS & RESOURCES

    * Smol Der – The Professional’s Applied AI Podcast (show page) — https://www.stringtheoryaccelerator.com/podcast

    * String Theory Accelerator, Inc. (James’s AI consulting company) — https://www.stringtheoryaccelerator.com

    * Music Made Pro – Custom lyric change services — https://musicmadepro.com

    * ChangeLyric – Web app for bulk lyric swaps — https://www.changelyric.com

    REFERENCED MENTIONS (LOOKED UP)

    * Modal – Batch & serverless GPU platform (Modal) — https://modal.com/products/batch

    * Bird Game 3 explainer (Polygon) — https://www.polygon.com/is-bird-game-3-real-ai-pigeon-hummingbird-what-is-tiktok-gameplay/

    * V0 – AI web UI builder (Vercel) — https://v0.dev/

    * Claude Code (Anthropic) — https://www.anthropic.com/news/claude-code

    * Supabase – Open source Firebase alternative — https://supabase.com/

    * Sora – OpenAI text-to-video model — https://openai.com/index/sora/

    WATCH NEXT

    * Episode 09 – AI Agents, Failure, and Creative Risk with filmmaker Ian Pullens — https://youtu.be/KFL_TPtdhFc

    * Episode 10 – Penetration Testing and Fiction Writing in a World of LLMs with Cyber Security Expert and Author Alex Fox — https://youtu.be/C6KFldMqxe8

    * Episode 08 – Bringing Wonder Into the World Through AI with Seth Raphael — https://youtu.be/KTnClhQdEDo

    If this conversation helped you think differently about AI music and creative work, hit subscribe and share the episode with someone who’s experimenting with these tools. Drop a comment with the AI music workflows or stacks you’re testing so James can feature more real-world examples in future episodes.

    #aimusic #generativeai #songwriting

    [generated in part using ChatGPT]
    1 hr and 16 mins
  • #10 – Penetration Testing, Fiction Writing, and LLMs With Cybersecurity Expert and Author Alex Fox
    Dec 16 2025

    Penetration testing is changing fast—but not always in the ways the hype suggests. James Wagenheim sits down with fiction author and pentest lead Alex Fox to unpack modern pentesting, real-world escalation paths, and what LLMs mean for both attackers and defenders. We also discuss how LLMs are a poor substitute for human creativity.

    In this episode, James and Alex zoom in on what “penetration testing” really looks like in practice—from scoped engagements and vulnerability research (CVEs) to the messy human reality of misconfigurations, credentials, and internal privilege escalation. They discuss why attackers often choose the easiest path (and why that still works), then pivot into how generative AI changes the landscape: more volume, lower barriers, new failure modes—especially prompt injection when LLMs are connected to tools and workflows.

    You’ll also hear a pragmatic walkthrough of common internal assessment patterns (including attack-path mapping in Active Directory) and a candid conversation about Alex’s parallel work as a fiction writer—traditional publishing, querying agents, and where tools like Claude/ChatGPT help versus where they dilute craft.

    Topics include:

    Pentesting scope, CVEs, and “what’s actually exploitable”

    Prompt injection and why “more context” can increase risk

    AD attack paths: mapping, escalation, and defensive hygiene

    Writing and publishing in an era of LLMs

    References:

    “48 Hours Without AI” (NYT): https://www.nytimes.com/2025/10/28/style/48-hours-without-ai.html?smid=url-share — referenced as an example of cultural pushback / experimentation around AI.

    Mindscape #336 (Sean Carroll): https://www.youtube.com/watch?v=S31zEgHVkoA — referenced in the AI fundamentals/history discussion.

    CVE Program: https://www.cve.org/ — canonical vulnerability identifier referenced in pentest triage.

    OWASP GenAI LLM01 Prompt Injection: https://genai.owasp.org/llmrisk/llm01-prompt-injection/ — a practical framing aligned with the episode’s prompt-injection segment.

    BloodHound (SpecterOps): https://github.com/SpecterOps/BloodHound — referenced for AD attack-path mapping.

    SharpHound CE docs: https://bloodhound.specterops.io/collect-data/ce-collection/sharphound — the official data-collection guidance tied to BloodHound.

    MyChart: https://www.mychart.org/ — referenced as a real-world system where security posture matters.

    Please like or subscribe if you enjoyed this episode!

    1 hr and 6 mins
  • #09 - Being a Creative and Using AI with Filmmaker and Art Director Ian Pullens
    Nov 25 2025

    Today’s guest is Art Director and Filmmaker Ian Pullens, who also happens to be my brother-in-law. It wasn’t nepotism that drove me to invite Ian; it’s the fact that he is a hugely inspiring creative with a keen eye for quality and a sharp mind for process.

    Ian has produced some of the coolest and most thoughtful films and creative pieces for companies, deploying a wide range of skills to bring his vision to life and empower brands to deliver their message. He really knows his industry, and I enjoyed hearing his thoughts on the state of AI in creative fields and learning what he thinks the future holds.

    If you like this conversation, please hit the like button, subscribe, or find us on Patreon.

    1 hr and 18 mins
  • #08 - Bringing Wonder Into the World Through AI with Seth Raphael
    Nov 15 2025

    Inventor, magician, and former Google prototyper Seth Raphael joins James to explore how AI can spark real wonder—not slop. Seth cofounded Hello Wonder, an AI platform that reshapes the internet for kids in real time based on parent values and each child’s interests. After two and a half years building the product, Hello Wonder was acquired by Noggin (home to Blue’s Clues and Bill Nye the Science Guy). From redirecting sensitive queries into age-appropriate learning to designing agents that discover high-quality content on the fly, Seth shares how to make technology safe, joyful, and curiosity-driven for families.

    What we cover:

    Hello Wonder - An AI agent that learns each child, respects parent preferences, and re-routes “blocked” topics into constructive explorations (e.g., from human reproduction to plant germination).

    Safety in the open web: Real-time retrieval and classification for accuracy, alignment with parent values, and match to a child’s academic goals—while keeping the experience playful.

    Designing for wonder: Seth’s MIT thesis roots; why breaking expectations creates dopamine that fuels deeper learning (not doom-scrolling). “Wonder drills” to train attention to awe in daily life.

    From magician to engineer: Using magical thinking to question assumptions, prototype “the impossible,” and then engineer it—first principles for product teams.

    Five-years-ahead building: Time-machine prototyping, fits & starts beyond neat Moore’s-Law curves, and why early edges on cost/quality can flip quickly for AI products.

    “Quantum UX”: Parallel AI agents exploring many options at once (e.g., trips with multiple itineraries), then collapsing into one choice—plus the transactional infrastructure this future will need.

    Personalized software on demand: The inverse of mass production—software generated for one person’s needs in minutes. Examples: spinning up micro-apps, a hardware side project (“Tiny Tarot”) from idea to storefront, and a lightweight time-tracking app (“Greedy Badger”).

    Human connection first: Avoiding parasocial traps; using AI to convene people in the real world (e.g., an interactive festival poster that curates sets, previews music, and pairs attendees).

    Character & pedagogy: A gentle guide character (an axolotl) that grows with kids; “pause to think” instead of lockouts; turning a kid’s interests (even a MrBeast video) into fraction lessons.

    Parenting in practice: Give yourself grace—controls break, kids outsmart us. Model phone etiquette, build trust, and use safe walled options for younger kids while coaching judgment as they grow.

    Schools & startups: Early pilots showed promise, but K-12 buying cycles are slow; hard trade-offs for founders. Candid lessons on when to raise VC, when to bootstrap, and how to define success on your own terms.

    Where to start with AI: Pick a tool, tell it who you are and what you care about, and ask what it can do for your life. Start small; iterate.

    Why watch:

    If you’re a parent, educator, builder, or founder, this episode offers concrete patterns for safe, curiosity-led AI, actionable product tactics, and a blueprint for creating tech that nudges people back into the real world.

    Key topics:

    AI for kids, online safety, AI tutors, retrieval + classification, product prototyping, Agentic workflows, parallel planning (“Quantum UX”), humane UX, K-12 pilots, startup/VC trade-offs, parenting with tech, curiosity and learning.

    Important Disclaimer:

    This episode includes discussion of suicide and related mental health topics. Viewer discretion is advised. If you or someone you know is struggling or thinking about self-harm, please reach out for help immediately.

    In the United States:

    Call 988 to reach the Suicide & Crisis Lifeline, or dial 911 in an emergency.

    In Canada:

    Call 1-833-456-4566 or text 45645.

    If you are outside the U.S. or Canada:

    Visit https://www.opentohope.com/suicide-hotlines/ or your local health authority for crisis numbers in your region.

    You are not alone, and help is available right now.

    Hashtags:

    #AIForKids #EdTech #Parenting #ProductDesign #AgenticAI #QuantumUX #StartupLessons #OnlineSafety #SmolDerPodcast #SethRaphael #HelloWonder #Noggin

    Less than 1 minute
  • #07 - Educating with AI: Lucas Ames (GOA) on Curiosity, Cheating, and What Schools Must Do Next
    Nov 7 2025

    How does AI transform learning without killing curiosity? In this episode, I sit down with Lucas Ames of Global Online Academy (GOA) to unpack what actually works—and what doesn’t—when bringing AI into K-12 classrooms.

    *Core themes*

    Curiosity first: Tech should increase student curiosity. If tools switch off active thinking, you’ve gone too far.

    From rote to higher-order: Like calculators, AI shifts what we assess—less memorization, more evidence, analysis, and synthesis.

    Cheating isn’t the real problem: The goal isn’t “gotcha” detection; it’s ensuring students still learn the skills that matter.

    Detectors and watermarking: Current AI detectors are unreliable; watermarking text isn’t a silver bullet.

    Policies that breathe: One size won’t fit all. Department-level “green / yellow / red” guidance beats blanket bans. GOA de-identifies student data before AI use and avoids over-policing while the field evolves.

    AP vs. IB signals: Early AI stances differed; the deeper story is how content-heavy vs. application-heavy models adapt.

    Teachers need empathy—and time: Start AI as a teacher’s “intern” (draft emails, map curriculum, find gaps) to free time and improve practice.

    Personalized tutoring that works: Tools like Khanmigo can level access when paired with a human teacher who sets the course and culture.

    Wellness with care: Use early signals (attendance, notes, behavior) to start human conversations, not trigger automated punishment.

    Micro-schools and iteration: Expect faster learning from smaller, experimental models that can adapt quickly.

    Humans are the differentiator: Relationships—teacher↔student and student↔student—predict good experiences more than any tool. Teachers who adopt AI thoughtfully will outpace those who don’t.

    Why watch:

    If you’re a teacher, school leader, parent, or edtech builder, this conversation gives practical, classroom-level ways to use AI that protect curiosity, reduce busywork, and keep the human center intact.

    Key topics:

    AI in education, personalized learning, cheating and assessment, standardized testing, policy design, teacher workload, AI tutors, student wellness signals, micro-schools, GOA practices.

    Hashtags:

    #AIinEducation #EdTech #PersonalizedLearning #K12 #GlobalOnlineAcademy #llm #chatgpt

    1 hr and 8 mins
  • #06 - Programming and Philosophy, Technofeudalism w/ Adam Hanson and Jim Duffy Computer Science Pros
    Nov 3 2025

    When setting out to produce Smol Der, my goal was to create at least 12 episodes in my first gambit. This episode marks the halfway point, so today I have a special treat for everyone listening: I’ve asked Jim Duffy from Episode 1 to come back as my co-host.

    Jim, as you may know, is a delightful and intelligent developer with HubSpot and a personal friend. He and I talk with Adam Hanson, an automation expert with a background in philosophy and computer science.

    In today’s episode Adam shares keen insight into how we should be responsibly deploying our technology and arranging society in a way that reduces exploitation and increases prosperity.

    We also explore practical applications of AI and get Jim and Adam’s reactions to recent announcements by OpenAI and researchers.

    And if you like this conversation, please hit the like button, subscribe, or find us on Patreon.

    1 hr and 20 mins
  • #05 - AI in High Precision Manufacturing with John Draper of Hexagon
    Oct 25 2025

    Today’s guest is John Draper, a CAM automation software developer with Hexagon.

    He is a talented machinist with deep experience in computational geometry and precision manufacturing, and a close personal friend of mine.

    Automation has been a cornerstone of manufacturing since the Industrial Revolution. In today’s discussion, John and I explore how AI, the successor to early robotics and automation, is showing up today and what the future might hold.

    And if you like this conversation, please hit the like button, subscribe, or find us on Patreon.

    1 hr