Smol Der - The Professional's Applied AI Podcast

Written by: James Wagenheim

About this listen

Smol Der explores how AI is burning through work—leaving just enough embers to build something better. Practical chats with pros on AI unlocks, stacks, prompts, and more.

Copyright 2025. All rights reserved.
Episodes
  • #12 - Part 1 - 2025 Smol Der Year in Review with Jim Duffy
    Jan 8 2026

    Jim Duffy and host James Wagenheim ring in the New Year with a two-part series covering great moments from the Smol Der podcast in 2025. We cover AI use in everything from developing apps, robotics, creative writing, cybersecurity, and magic to education and music.

    If you like this episode, please hit the like button or subscribe. If you want to support the podcast, you can find us on Patreon.

    https://patreon.com/SmolDerPodcast?utm_medium=unknown&utm_source=join_link&utm_campaign=creatorshare_fan&utm_content=youtube

    00:01:38 Welcome + New Year setup (James)

    00:05:29 Clip roundup framing: what’s worth revisiting from 2025

    00:10:02 SEO, discovery, and why “generic AI content” won’t last

    00:12:10 Security lens: pentesting, “script kiddies,” and lowered barriers

    00:18:01 Jailbreaks + agent risk: prompt injection, MCP, and tool access

    00:21:49 Creators + copyright: training data, royalties, and looming precedent

    00:28:37 Taste as moat: “this vs that,” craft, and human judgment

    00:37:39 Wonder drills + kid-safe AI browsing (education meets product)

    00:52:29 AI in schools: empathy, relationships, and real learning incentives

    01:00:41 AI personalities & teens: attachment, manipulation risk, and guardrails

    LINKS AND REFERENCES

    Model Context Protocol (Anthropic announcement)

    https://www.anthropic.com/news/model-context-protocol

    Model Context Protocol specification (official site)

    https://modelcontextprotocol.io/specification/2025-11-25

    Pliny “L1B3RT4S” jailbreak prompt repository (GitHub)

    https://github.com/elder-plinius/L1B3RT4S

    OpenAI browser agents and prompt injection risk (CyberScoop)

    https://cyberscoop.com/openai-chatgpt-atlas-prompt-injection-browser-agent-security-update-head-of-preparedness/

    Google Gemini (official)

    https://gemini.google.com/

    Character.AI (official)

    https://character.ai/

    Hello Wonder — kid-safe AI companion and browser

    https://w.hellowonder.ai/archive/home-nov

    Global Online Academy — Lucas Ames author page

    https://globalonlineacademy.org/insights/authors/lucas-ames

    Penn & Teller: Fool Us (official CW page)

    https://www.cwtvpr.com/the-cw/shows/penn-teller-fool-us/about/

    The Rest Is History podcast (official site)

    https://therestishistory.com/

    1 hr and 9 mins
  • #11 - AI Music, Human Taste, and the Future of SEO — with Digital Entrepreneur & Musician Will Mason
    Dec 29 2025
    In this episode of Smol Der – the professional’s applied AI podcast, host James Wagenheim sits down with Will Mason (digital entrepreneur and musician) for a practical conversation on what generative AI is actually changing and what it still cannot replace. Will argues that iteration is the root of creativity, explains why humans still win on taste and originality, and shares a hard-earned lesson: AI is great at novelty, but terrible at knowing when something is “finished.”

    From there, the discussion moves into the builder’s toolkit—vibe coding, state management, queuing, and the modern stack Will uses to ship products—plus the realities of security, outages, and shipping in imperfect systems. We also dig into SEO in an AI-saturated internet: why “generic AI blog posts” won’t last, how to compete with higher-effort content and tools, and why Will believes most founders should start with content and audience-building, not paid ads.

    00:00 Intro + the “iteration” thesis

    02:20 From guitarist → marketer → entrepreneur

    08:34 Why all music is iteration (and where AI differs)

    14:17 AI isn’t “AI all the way down” + what humans still contribute

    15:28 “AI is bad at understanding a finished product”

    17:45 Will’s stack (Next.js, Vercel, Supabase, Modal) + CLI workflows

    24:31 Vibe coding, security, and real-world risk

    29:58 Music Made Pro + ChangeLyric: products, demand, and automation

    43:15 Copyright, moderation, and why restrictions drive tool migration

    49:03 Patents, value, and why “novelty goes to zero” (theory)

    1:01:18 SEO in an AI-first world: tools, long-tail, and differentiation

    1:13:00 What’s next: orchestration and chaining existing libraries

    KEY TAKEAWAYS

    * Iteration—not inspiration—is still the real engine of great music; AI mainly accelerates loops of “try, listen, tweak” rather than replacing human direction.

    * Music Made Pro shows how AI can power high-touch, paid creative services (like lyric changes with original vocals) while still keeping a human in the loop for quality and nuance.

    * ChangeLyric is intentionally “rough but fast,” giving experienced producers a way to do bulk lyric swaps and then refine the results in their DAW.

    * Modern AI dev stacks (Next.js, Vercel, Supabase, Modal, Claude Code, etc.) let solo founders ship complex apps quickly—but they also raise new security, cost, and product-durability questions.

    * As tools like Sora and viral AI memes like Bird Game 3 spread, the big unresolved issue is who owns value and how individual creators get paid when models are trained on their work.

    LINKS & RESOURCES

    * Smol Der – The Professional’s Applied AI Podcast (show page) — https://www.stringtheoryaccelerator.com/podcast

    * String Theory Accelerator, Inc. (James’s AI consulting company) — https://www.stringtheoryaccelerator.com

    * Music Made Pro – Custom lyric change services — https://musicmadepro.com

    * ChangeLyric – Web app for bulk lyric swaps — https://www.changelyric.com

    REFERENCED MENTIONS (LOOKED UP)

    * Modal – Batch & serverless GPU platform (Modal) — https://modal.com/products/batch

    * Bird Game 3 explainer (Polygon) — https://www.polygon.com/is-bird-game-3-real-ai-pigeon-hummingbird-what-is-tiktok-gameplay/

    * V0 – AI web UI builder (Vercel) — https://v0.dev/

    * Claude Code (Anthropic) — https://www.anthropic.com/news/claude-code

    * Supabase – Open source Firebase alternative — https://supabase.com/

    * Sora – OpenAI text-to-video model — https://openai.com/index/sora/

    WATCH NEXT

    * Episode 09 – AI Agents, Failure, and Creative Risk with filmmaker Ian Pullens — https://youtu.be/KFL_TPtdhFc

    * Episode 10 – Penetration Testing and Fiction Writing in a World of LLMs with Cyber Security Expert and Author Alex Fox — https://youtu.be/C6KFldMqxe8

    * Episode 08 – Bringing Wonder Into the World Through AI with Seth Raphael — https://youtu.be/KTnClhQdEDo

    If this conversation helped you think differently about AI music and creative work, hit subscribe and share the episode with someone who’s experimenting with these tools. Drop a comment with the AI music workflows or stacks you’re testing so James can feature more real-world examples in future episodes.

    #aimusic #generativeai #songwriting [generated in part using ChatGPT]
    1 hr and 16 mins
  • #10 – Penetration Testing, Fiction Writing, and LLMs With Cybersecurity Expert and Author Alex Fox
    Dec 16 2025

    Penetration testing is changing fast—but not always in the ways the hype suggests. James Wagenheim sits down with fiction author and pentest lead Alex Fox to unpack modern pentesting, real-world escalation paths, and what LLMs mean for both attackers and defenders. We also discuss how LLMs are a poor substitute for human creativity.

    In this episode, James and Alex zoom in on what “penetration testing” really looks like in practice—from scoped engagements and vulnerability research (CVEs) to the messy human reality of misconfigurations, credentials, and internal privilege escalation. They discuss why attackers often choose the easiest path (and why that still works), then pivot into how generative AI changes the landscape: more volume, lower barriers, new failure modes—especially prompt injection when LLMs are connected to tools and workflows.

    You’ll also hear a pragmatic walkthrough of common internal assessment patterns (including attack-path mapping in Active Directory) and a candid conversation about Alex’s parallel work as a fiction writer—traditional publishing, querying agents, and where tools like Claude/ChatGPT help versus where they dilute craft.

    Topics include:

    Pentesting scope, CVEs, and “what’s actually exploitable”

    Prompt injection and why “more context” can increase risk

    AD attack paths: mapping, escalation, and defensive hygiene

    Writing and publishing in an era of LLMs

    References:

    “48 Hours Without AI” (NYT): https://www.nytimes.com/2025/10/28/style/48-hours-without-ai.html?smid=url-share — referenced as an example of cultural pushback / experimentation around AI.

    Mindscape #336 (Sean Carroll): https://www.youtube.com/watch?v=S31zEgHVkoA — referenced in the AI fundamentals/history discussion.

    CVE Program: https://www.cve.org/ — canonical vulnerability identifier referenced in pentest triage.

    OWASP GenAI LLM01 Prompt Injection: https://genai.owasp.org/llmrisk/llm01-prompt-injection/ — a practical framing aligned with the episode’s prompt-injection segment.

    BloodHound (SpecterOps): https://github.com/SpecterOps/BloodHound — referenced for AD attack-path mapping.

    SharpHound CE docs: https://bloodhound.specterops.io/collect-data/ce-collection/sharphound — the official data-collection guidance tied to BloodHound.

    MyChart: https://www.mychart.org/ — referenced as a real-world system where security posture matters.

    Please like or subscribe if you enjoyed this episode!

    1 hr and 6 mins