Episodes

  • How Attackers Use AI And Why Your Defenses Might Still Fail with Adriel Desautels
    Feb 22 2026

    Episode # 183

    Today's Guest: Adriel Desautels, Founder & CEO, Netragard

    Adriel is a leader in cybersecurity with over 20 years of experience. He founded Secure Network Operations and the SNOsoft Research Team, whose vulnerability research helped shape modern responsible disclosure practices. He later launched Netragard, pioneering Realistic Threat Penetration Testing, which he now calls Red Teaming, and has expanded into a broad range of security services.

    • Website: Netragard
    • X/Twitter: Netragard

    What Listeners Will Learn:

    • Why "AI penetration testing" is often closer to automated scanning than real offensive testing
    • How AI changes security risk mainly through volume and speed, not necessarily sophistication
    • Where organizations get misled into a false sense of security
    • Why "preventing breach" is unrealistic and why limiting damage paths matters more
    • What cybersecurity professionals should focus on to stay relevant in the LLM era
    • How AI may influence vulnerability research, but still struggles with novel exploitation thinking

    Resources:
    • Netragard
    25 mins
  • Why 95% of AI Pilots Fail and How to Be in the 5% with Mindaugas Maciulis
    Feb 7 2026

    Welcome to Open Tech Talks.

    Quick note before we start, thank you.

    The messages, the feedback, the "keep this practical" reminders… they've been incredibly helpful. Open Tech Talks has always been a weekly sandbox for technology insights, experimentation, and inspiration—with one objective: learn, test, and share what's real.

    Now, a personal moment from this week.

    A few days ago, I sat with a business owner who said something that stuck with me:

    "AI is everywhere… but I don't know where to start without breaking my business."

    And that's the truth for most companies, especially small businesses.

    Because "start with AI" sounds simple… until it touches real operations:

    • leads that go cold,

    • follow-ups that don't happen,

    • teams that feel overwhelmed,

    • tools that multiply,

    • processes that nobody can explain clearly.

    Most AI projects don't fail because the model is weak.

    They fail because the process is unclear, the team is overloaded, and the strategy is missing.

    Let's begin.

    Episode # 182

    Today's Guest: Mindaugas (Min) Maciulis, Founder & CEO of Strategic AI Advisors

    He works with CEOs, COOs, and operating partners in the $20M–$250M range who are ready to go beyond pilots and turn AI into real EBITDA growth. His proven 90-day sprint framework, AImpact OS, delivers measurable lifts across productivity, customer service, and sales.

    • Website: Strategic AI Advisors

    What Listeners Will Learn:

    • Identify the best "starting point" for AI using business pain, not hype
    • Understand why AI pilots fail mostly due to adoption (not technology)
    • Learn a practical approach to simplify workflows before adding automation
    • See how SMBs can move faster than enterprises in the AI era
    • Understand the difference between augmentation and transformation with AI
    • Learn how to avoid tool overload and focus on measurable outcomes
    Resources:
    • Strategic AI Advisors
    30 mins
  • AI Is Creating Technical Debt Faster Than You Think with Maxim Silaev
    Jan 30 2026

    This week, I've been thinking about something slightly uncomfortable.

    Last weekend, I was reviewing one of my older architecture diagrams from five years ago. A cloud-native migration plan I was deeply proud of at the time. It was clean. Structured. Scalable.

    And then I asked myself:

    If I were to rebuild this today in the era of generative AI…

    Would I build it the same way?

    The honest answer?

    No.

    Not because it was wrong.

    But because our assumptions have changed.

    Two years ago, AI was a feature.

    Today, AI is shaping architecture decisions.

    We're not just designing systems anymore.

    We're designing systems that design, generate, predict, and automate.

    And here's the tension I keep seeing in enterprise conversations:

    Everyone wants AI.

    But very few are asking:

    "What technical debt are we creating while chasing it?"

    That's why today's conversation matters.

    Today, I'm joined by Maxim Silaev, based in Australia, someone who works deeply in enterprise architecture and technical debt remediation.

    And this episode is not about hype.

    It's about responsibility.

    Because AI doesn't remove architectural complexity.

    In many cases, it amplifies it.

    Let's get into it.

    Chapters

    00:00 Introduction to Technical Debt and Architecture
    01:34 The Impact of AI on Technical Debt
    04:12 Generative AI and Architectural Challenges
    08:40 Adopting AI in Organizations
    12:26 Building AI Strategies and Governance
    17:33 Data Quality and AI Integration
    22:43 Guardrails for AI Adoption

    Episode # 181

    Today's Guest: Maxim Silaev, Technology Advisor and Enterprise Architect

    He is a technology advisor and enterprise architect with more than two decades of experience working with high-growth companies, complex systems, and business-critical platforms.

    • Website: Arch-Experts

    What Listeners Will Learn:

    • What technical debt really means in the AI era
    • How generative AI can unintentionally increase hidden system risk
    • Why architecture remains critical despite AI coding tools
    • The importance of governance and verification layers in AI systems
    • How large enterprises are cautiously integrating AI
    • Why strategy must precede AI deployment
    • The evolving role of enterprise architects in AI-native environments
    Resources:
    • Arch-Experts
    33 mins
  • Simplify Your Tech Stack and Scale Faster with Kara Williams
    Jan 25 2026

    Chapters

    00:00 Introduction to Kara Williams
    01:53 Kara's Coaching Journey and Entrepreneurial Background
    03:20 The Importance of a Simplified Tech Stack
    05:51 Common Mistakes in Tech Selection
    07:09 Exploring AI in Business
    08:16 Creating the Proof First GPT
    10:47 Learning and Executing with AI
    12:04 Common Challenges Faced by Entrepreneurs
    13:50 Guiding New Entrepreneurs
    14:59 Misconceptions About Low Ticket Offers
    16:18 Refining Messaging and Offers
    17:29 The Role of Automation in Business
    18:34 Understanding Automation Needs
    19:36 Testing Freebies and Building Relationships
    20:29 Lessons Learned in Business
    21:20 Future Plans and Refinements
    22:31 Final Tips for Entrepreneurs

    Episode # 180

    Today's Guest: Kara Williams, Founder, GHL Mastery Academy

    She is the founder of GHL Mastery Academy, where she helps CEOs stop being the bottleneck in their business by turning their VA, OBM, or EA into a trained backend powerhouse.

    • Website: Kara Williams
    • YouTube: GHL Mastery Academy

    What Listeners Will Learn:

    • Why "cheap tool stacking" quietly becomes expensive (money + time + broken trust)
    • How to think about systems like a real business owner (not a hobbyist)
    • Why reliability matters more than feature-count in early-stage tech stacks
    • How entrepreneurs can use AI to validate offers before building full courses or funnels
    • What automation is actually for: visibility, testing, and removing blind spots
    • How to simplify business operations without losing flexibility or creativity
    Resources:
    • Website: Kara Williams
    24 mins
  • Building Startups in the AI Era Lessons from 30 Years of Venture Capital with Scott Kelly
    Jan 18 2026

    Welcome back to Open Tech Talks, and thank you, genuinely, for the continued support, messages, and thoughtful feedback. This show has been running for years now, and what keeps it meaningful is the shared curiosity of this community.

    We're in a very different phase of the AI journey.

    The conversation has clearly moved past "Can we build this?"

    Now it's about "Should we build this?", "Is this sustainable?", and "Does this actually create value?"

    Over the last year, I've personally noticed something interesting while working with enterprises, founders, and investors: AI has lowered the cost of building but raised the cost of judgment.

    It's easier than ever to create products, prototypes, and even companies. But deciding what's worth building, when to raise capital, and how to scale responsibly has become harder, not easier.

    That's why today's conversation matters.

    This episode is not about chasing trends or predicting the next AI unicorn.

    It's about long-term thinking, founder discipline, and understanding capital, timing, and execution in an AI-driven world.

    Today's guest has spent decades working across venture capital, startup growth, and exits through multiple technology cycles and brings a grounded perspective that's especially valuable right now.

    Let's welcome Scott Kelly to Open Tech Talks.

    Chapters

    00:00 Introduction to Scott Kelly and His Ventures
    02:00 The Transformative Impact of AI
    04:03 Successful Investments and Entrepreneurial Journeys
    05:53 Lessons for Entrepreneurs and Pitching Tips
    10:06 Navigating the AI Landscape in Startups
    11:52 Industry Applications of AI
    14:54 Pitch Events and Investor Engagement
    17:03 Investor Perspectives on New Technologies
    19:52 Advice for Aspiring Entrepreneurs

    Episode # 179

    Today's Guest: Scott Kelly, Founder & CEO, Black Dog Venture Partners

    He has been working on both sides, with entrepreneurs and investors alike, for more than three decades, harnessing his innovative skills, his vast experience training thousands of salespeople, and his broad network of investors.

    • Website: Black Dog Venture Partners
    • YouTube: VC FastPitch

    What Listeners Will Learn:

    • How AI is changing the economics of building and scaling startups
    • Why many founders may not need venture capital as early as they think
    • Lessons from past technology cycles that still apply in the GenAI era
    • How investors evaluate AI-driven businesses beyond surface-level hype
    • Why timing, discipline, and execution matter more than tools
    • What founders often misunderstand about pitching, capital, and exits
    • How AI lowers build costs but raises the importance of strategic judgment
    Resources:
    • Website: Black Dog Venture Partners
    • YouTube: VC FastPitch
    29 mins
  • Building AI Products That Users Actually Trust, Lessons from Angshuman Rudra
    Jan 11 2026

    January has a very particular energy.

    The holidays are behind us. The inbox is slowly filling up again. Calendars are waking up. And there's always this short window, just a few quiet days, where it feels like everything could still go in a different direction.

    I've been thinking a lot during this pause.

    Over the last couple of years, AI and large language models have gone from experiments to expectations. What used to feel optional is now part of daily work, whether someone asked for it or not. And the biggest shift I've personally noticed isn't technical.

    It's psychological.

    People aren't asking "What can AI do?" anymore.

    They're asking "What should we actually build?", "What do we trust?", and "What's worth shipping versus waiting?"

    That question shows up everywhere, especially in product teams.

    Because as exciting as LLMs are, shipping the wrong AI feature is worse than shipping none at all.

    And that's exactly why today's conversation matters.

    This episode is not about hype.

    It's about judgment, timing, and responsibility in product leadership.

    Chapters:

    00:00 Introduction to Angshuman Rudra
    01:06 The Impact of Large Language Models on Product Management
    03:14 Balancing Innovation and User Needs
    04:37 Navigating Generative AI in Product Development
    06:46 Driving Adoption of New Features
    09:34 Challenges and Lessons in Generative AI Products
    11:15 Evolving Roles of Product Leaders with AI
    12:39 The Future of Multi-Agent Systems
    14:36 Translating User Requirements into Product Features
    17:31 Finding the Next Big Feature
    19:56 Adopting AI in Development Cycles
    21:24 Tips for Job Seekers in Tech
    23:10 Market Shifts in Marketing Technology
    25:01 Exciting Use Cases in Marketing Technology
    26:52 Concluding Thoughts and Future Outlook

    Episode # 178

    Today's Guest: Angshuman Rudra, AI Product Leader, building Martech platforms, AI Agents, and data workflows for 500+ agencies.

    Angshuman Rudra is a senior product executive at TapClicks, where he leads a portfolio of data, analytics, and AI products for a market-leading martech platform.

    • Website: Angshuman Rudra

    What Listeners Will Learn:

    • How to evaluate real user demand for AI features (not hype)
    • When AI adds value and when it creates unnecessary complexity
    • How product leaders should think about LLMs as tools, not magic
    • Why many AI features fail after launch
    • How to balance innovation with resource constraints
    • What "AI adoption" actually looks like inside real companies
    • Why multi-agent systems are promising but not ready to be fully autonomous
    • How PMs can use AI for research, specs, and design without losing judgment
    • What skills will matter most for product leaders over the next 3–5 years

    Resources:
    • Angshuman Rudra
    34 mins
  • How Generative AI Is Reshaping Fraud, Security, and Abuse Detection with Bobbie Chen
    Jan 4 2026

    In this episode of Open Tech Talks, host Kashif Manzoor sits down with Bobbie Chen, a product manager working at the intersection of fraud prevention, cybersecurity, and AI agent identification in Silicon Valley.

    As generative AI and large language models rapidly move from experimentation into real products, organizations are discovering a new reality. The same tools that make building software easier also make abuse, fraud, and attacks easier. Vibe coding, AI agents, and LLM-powered workflows are accelerating innovation, but they are also lowering the barrier for bad actors.

    This conversation breaks down why security, identity, and access control matter more than ever in the age of LLMs, especially as AI systems begin to touch authentication, customer data, financial workflows, and enterprise knowledge. Bobbie shares practical insights from real-world security and fraud scenarios, explaining why many AI risks are not entirely new but become more dangerous when speed, automation, and scale increase.

    The episode explores how organizations can adopt AI responsibly without bypassing decades of hard-earned security lessons. From bot abuse and credit farming to identity-aware AI systems and OAuth-based access control, this discussion helps listeners understand where AI changes the threat model and where it doesn't.

    This is not a hype-driven episode. It is a grounded, experience-backed conversation for professionals who want to build, deploy, and scale AI systems without creating invisible security debt.

    Episode # 177

    Today's Guest: Bobbie Chen, Product Manager, Fraud and Security at Stytch

    Bobbie is a product manager at Stytch, where he helps organizations like Calendly and Replit fight against fraud and abuse.

    • LinkedIn: Bobbie Chen

    What Listeners Will Learn:

    • How LLMs and AI agents change the economics of fraud and abuse, making attacks cheaper, faster, and more customized
    • Why vibe coding is powerful for experimentation, but risky when used without security review in production systems
    • The difference between exploring AI ideas and asking users to trust you with sensitive data
    • Standard security blind spots in AI-powered apps, especially around authentication, parsing, and edge cases
    • Why organizations should not give AI systems blanket access to enterprise data
    • How identity-aware AI systems using OAuth and scoped access reduce risk in RAG and enterprise search
    • Why many AI security failures are process and organizational problems, not tooling problems
    • How fraud patterns like AI credit farming and automated abuse are emerging at scale
    • Why security teams must shift from being gatekeepers to continuous partners in AI adoption
    • How professionals in security, product, and engineering can stay current as AI threats evolve
    Resources:
    • Bobbie Chen
    • The two blogs I mentioned:
    • Simon Willison: https://simonwillison.net
    • Drew Breunig: https://www.dbreunig.com
    32 mins
  • How Dyslexic Brains Can Supercharge AI Thinking with Prof. Russell Van Brocklen
    Dec 6 2025

    In this episode of Open Tech Talks, I sit down with Professor Russell Van Brocklen, a New York State Senate-funded researcher known as "The Dyslexia Professor," to unpack a very different way of thinking about AI, problem-solving, and dyslexia.

    Russell's work sits at the intersection of cognitive enhancement and AI integration.

    He shows how an "overactive" front part of the dyslexic brain (word analysis and articulation) can be turned into a superpower not just for dyslexic learners, but for professionals and businesses working with AI.

    We talk about how his program took dyslexic high-school students who were writing like 12-year-olds and, in one school year, moved them up 7–8 grade levels in writing… at a fraction of the cost of traditional dyslexia programs.

    From there, he connects it to AI collaboration: how the same mental models (context → problem → solution) can make anyone dramatically more effective when working with LLMs like ChatGPT.

    Episode # 176

    Today's Guest: Russell Van Brocklen, Dyslexia Professor

    Russell Van Brocklen, the Dyslexia Professor, turns daily reading frustrations into confident academic wins for students facing dyslexia.

    • YouTube: RussellVan

    What Listeners Will Learn:

    • How dyslexic thinking becomes a competitive advantage in the age of AI
    • Why the dyslexic brain processes information differently, and how that translates into deeper reasoning
    • A practical framework for working with AI: context → problem → solution
    • How to use "hero, universal theme, and villain" to sharpen thinking and guide AI more effectively
    • How to perform word analysis with AI (action words, synonyms, key concepts) to get more focused outputs
    • A step-by-step way to compress long AI responses into clear, structured insights
    • How to generate business solutions by running context through a "universal theme lens"
    • Why AI is exceptional for first drafts and why humans must still lead the final edits
    • How dyslexic learners can use deep reading and repetition for breakthroughs in comprehension
    • Practical strategies for teachers in the AI era: how to allow AI but still ensure authentic student work
    • How non-technical users can collaborate with AI to write books, solve problems, and accelerate learning
    • Real stories of professionals and students transforming their work through structured AI thinking
    Resources:
    • RussellVan
    30 mins