Episodes

  • Legaltech Civil War: Talbot West CEO Jacob Andra & Advisor Adam Wardel Discuss AI Adoption in Law
    Dec 20 2025

    Law firms face a civil war over AI adoption. On one side, a model that's worked for decades, generating revenue and establishing power structures. On the other, an intelligence revolution that won't disappear in ten years.

    In this episode, host Jacob Andra sits down with Adam Wardel, an attorney with 12+ years of experience spanning in-house and law firm roles. Adam sits on Talbot West's advisory board, where he brings legal and compliance expertise to the firm's AI transformation work. He advises his clients and Talbot West on navigating AI adoption in regulated environments.

    Jacob Andra is CEO of Talbot West, an AI advisory and implementation firm, and host of The Applied AI Podcast.

    Adam makes the case that AI should be thought of as an actual intelligence working alongside you. Not a dashboard you log into. Not another SaaS product adding to your tech sprawl. An intelligence that reviews contracts before you wake up, surfaces only what needs your attention, and handles the routine so you can do the deep thinking that actually requires a human brain. He describes waking up to find that an AI has already reviewed a contract, prepared a brief, and drafted an edited version. All he needs to do is put on his "deep thinking hat" and apply strategic judgment. The routine work is done. The intelligence responds to emails, sets up follow-up appointments, and works around the clock so the attorney can focus on what actually requires human expertise.

    The conversation turns to the trap of solving narrow problems. You find a tool that does one thing well (calendaring, discovery review, whatever) and you adopt it. Then another tool for another problem. Before long, you've got a dozen dashboards, fragmented workflows, and you've introduced as much inefficiency as you've eliminated. Jacob points out that even good platforms like Harvey, which handle a basket of related tasks, still create integration challenges with other parts of your workflow. You end up with less tech sprawl than the point-solution approach, but sprawl nonetheless.

    The alternative: architect the whole system. Map your workflows end-to-end. Understand where AI can handle 90% of the work versus where humans need to stay heavily involved. Build toward organizational intelligence rather than collecting point solutions. This requires understanding the full landscape of what a firm needs, then designing a set of trade-offs optimized for that specific context. Not a one-size-fits-all platform. Not a collection of tools that don't talk to each other. A coherent architecture that evolves as capabilities improve.

    Adam emphasizes that law firm leaders need to bring in people smarter than themselves on this topic. Partners who've reached senior positions are used to knowing the answers. But AI implementation requires different expertise. The best approach is to surround yourself with people who understand the technology deeply, then provide oversight based on your experience with the practice of law.

    Jacob stresses that this outside expertise must be vendor-neutral. If your technology advisor represents specific platforms, they'll recommend those platforms whether they fit or not.

    The paradigm of the future decouples functionality from interface. Jacob calls this "invisible AI." Intelligence runs in the background. It surfaces touchpoints only when needed. The old model of managing multiple tools gives way to something more integrated and seamless. You don't log into AI. AI is simply embedded in how work gets done.

    Jacob makes a crucial point about competitive advantage. If a solution is easy, everyone will adopt it. It becomes table stakes. The firms that pull ahead are the ones doing the harder work of architecting comprehensive systems and understanding the dependencies between them.

    31 mins
  • The Best Machine Learning Model, Lumawarp, Rocks the TabArena Test: Jacob Andra & Dr. Alexandra Pasi
    Dec 18 2025

    Lumawarp delivers 7% higher accuracy than leading ML models while running 300+ times faster. On the TabArena HELOC default prediction benchmark, it topped the accuracy leaderboard while training on a gaming laptop in about an hour. Competing methods required hundreds of hours on large compute clusters to achieve worse results.

    This breakthrough shatters the accuracy/speed tradeoff that has constrained machine learning for decades.

    In this episode, Talbot West CEO Jacob Andra sits down with Dr. Alexandra Pasi, CEO of Lucidity Sciences, to explore how Lumawarp achieves these results and what it means for enterprises building AI systems where precision is non-negotiable and milliseconds matter.

    The technology employs a novel mathematical framework grounded in partial differential equations and geometric manifold regularization. Rather than relying on deep learning or tree-based methods that struggle with sparse or imbalanced data, Lumawarp constructs optimal kernels directly from training data. The result: superior pattern recognition with microsecond inference times, deployable on edge devices without sacrificing accuracy.
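
    Lumawarp's actual mathematics is proprietary, but the broader family it belongs to, kernels constructed from the training data rather than fixed in advance, can be sketched. Below is a purely illustrative example (not Lumawarp's method) in which the kernel bandwidth is derived from the data via the median heuristic; note that inference reduces to a single matrix product, the kind of structure that makes kernel methods extremely fast at prediction time.

    ```python
    # Illustrative only: a kernel shaped by the training data itself.
    # Lumawarp's PDE/manifold machinery is proprietary and not shown here.
    import numpy as np

    def median_bandwidth(X):
        """Derive the kernel bandwidth from pairwise training distances."""
        d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
        return np.median(d[d > 0])

    def rbf_kernel(A, B, sigma):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))

    def fit_kernel_ridge(X, y, lam=1e-3):
        sigma = median_bandwidth(X)          # data-derived, not hand-tuned
        K = rbf_kernel(X, X, sigma)
        alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
        return sigma, alpha

    def predict(Xtrain, sigma, alpha, Xnew):
        return rbf_kernel(Xnew, Xtrain, sigma) @ alpha   # one matrix product

    # Toy tabular "default prediction" usage
    rng = np.random.default_rng(0)
    X, y = rng.normal(size=(200, 5)), rng.integers(0, 2, 200).astype(float)
    sigma, alpha = fit_kernel_ridge(X, y)
    print(predict(X, sigma, alpha, X[:3]))
    ```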

    In this conversation, we cover:

    Benchmark results showing Lumawarp outperforming XGBoost, MNCA, and other leading models on structured data tasks

    Why a few percentage points of accuracy improvement translate to millions of dollars in fraud detection, clinical decision support, and risk modeling

    Microsecond inference enabling real-time applications in high-frequency trading, robotics, and predictive maintenance

    Edge deployment capabilities for wearables, industrial sensors, and environments where cloud connectivity isn't reliable

    The critical difference between models optimized for linguistic plausibility (LLMs) versus mathematical precision (Lumawarp)

    How the Talbot West and Lucidity Sciences partnership works: Lumawarp solves the prediction problem, Talbot West solves the deployment problem

    As Dr. Pasi explains, traditional ML forces you to choose: fast models sacrifice accuracy, accurate models require massive compute. Lumawarp sits completely outside that tradeoff curve, delivering both simultaneously.

    For high-stakes applications where 90% accuracy means a 1-in-10 failure rate, and 99% accuracy means 1-in-100, that difference determines whether you can deploy ML at all.

    This episode is essential viewing for executives evaluating AI investments, data scientists looking beyond the LLM hype cycle, and anyone building systems where accuracy and latency both matter.

    About the Guest:
    Dr. Alexandra Pasi is CEO and co-founder of Lucidity Sciences. A PhD mathematician, she spent over a decade advancing the mathematical foundations of machine learning before pioneering the GPU-parallelizable geometric manifold regularization techniques that became Lumawarp. Her work has demonstrated real-world impact across healthcare (predicting hospital-acquired conditions), finance (high-frequency trading), and scientific research (particle physics detection).

    About Talbot West:
    Talbot West is an AI enablement firm specializing in enterprise digital transformation. The firm combines full-spectrum AI expertise with Fortune 500 systems architecture methodology, helping organizations deploy the right AI technologies for the right problems. Learn more at talbotwest.com

    About Lucidity Sciences:
    Lucidity Sciences develops advanced machine learning technologies for pattern identification and prediction in structured data. Their research-driven approach addresses fundamental limitations in existing ML methods, delivering breakthrough improvements in model accuracy, generalizability, and computational efficiency. Learn more at luciditysciences.com

    13 mins
  • Constitutional AI With Bennett Borden and Jacob Andra
    Nov 11 2025

    Talbot West CEO Jacob Andra interviews Clarion AI CEO Bennett Borden on ensemble AI approaches.

    Bennett Borden served eight years as a CIA data scientist identifying patterns in digital trails, then went to Georgetown Law and specialized in automated decision systems. Now, as CEO of Clarion AI, he runs the only law firm that operates as both legal counsel and development shop, building AI systems that drive business value while maintaining legal compliance.

    This episode explores multi-agent AI architectures. Borden explains constitutional AI, developed by Anthropic, which programs AI behavior through plain language directives rather than thousands of lines of code. Building with generative AI resembles shaping a psychology rather than writing deterministic algorithms.
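
    A rough sketch of that critique-and-revise pattern, assuming a generic `model` callable (this approximates the idea, not Anthropic's implementation):

    ```python
    # Hedged sketch: behavior specified as plain-language directives that are
    # applied and self-checked, rather than hard-coded logic. `model` is a stub.
    CONSTITUTION = [
        "Never give legal advice without citing the governing jurisdiction.",
        "Flag anything touching personally identifiable information for review.",
    ]

    def constitutional_call(model, user_prompt: str) -> str:
        directives = "\n".join(f"- {d}" for d in CONSTITUTION)
        draft = model(f"Follow these directives:\n{directives}\n\nTask: {user_prompt}")
        critique = model(f"Does this violate any directive? Answer yes/no.\n"
                         f"{directives}\n\nResponse: {draft}")
        if "yes" in critique.lower():   # one revision pass on a flagged draft
            draft = model(f"Revise to comply:\n{directives}\n\nResponse: {draft}")
        return draft

    def model(prompt: str) -> str:      # trivial stand-in for a real model call
        return "no" if "violate" in prompt else "Draft prepared under the directives."

    print(constitutional_call(model, "Summarize this NDA."))
    ```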

    Jacob pushes on the practical challenges of large context windows, where language models become unreliable when processing massive amounts of information. He describes the wobbliness that emerges when models forget what's over here when they're processing over there, and discusses neurosymbolic approaches that use ontological skeletons to help LLMs maintain context. This leads to a deeper discussion of ensemble architectures where specialized agents handle bounded contexts rather than expecting single models to manage everything.

    Real implementations combine retrieval augmented generation with constitutional AI and adversarial oversight modules that audit primary agent behavior. These patterns, where modules challenge each other's findings rather than simply cooperating, create robust outcomes that monolithic systems cannot match.
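
    In outline, that adversarial pattern might look like the sketch below, where `primary` and `auditor` stand in for whatever model endpoints a real implementation uses:

    ```python
    # Minimal sketch of an adversarial oversight loop (hypothetical interfaces).
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Verdict:
        approved: bool
        objection: str = ""

    def run_with_oversight(task: str,
                           primary: Callable[[str], str],
                           auditor: Callable[[str, str], Verdict],
                           max_rounds: int = 3) -> str:
        draft = primary(task)
        for _ in range(max_rounds):
            verdict = auditor(task, draft)   # the auditor challenges, not cooperates
            if verdict.approved:
                return draft
            # fold the objection back in so the primary must address it
            draft = primary(f"{task}\n\nAddress this objection: {verdict.objection}")
        raise RuntimeError("No draft survived review; escalate to a human.")

    # Usage with trivial stand-ins:
    print(run_with_oversight("Summarize the indemnification clause.",
                             primary=lambda t: f"Summary: {t[:40]}",
                             auditor=lambda t, d: Verdict(approved=True)))
    ```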

    The conversation covers practical enterprise applications. Back-office automation handles repetitive, data-centric tasks where companies apply the same judgments repeatedly. Knowledge worker augmentation transforms how lawyers, consultants, and accountants work. Borden estimates 80% of legal work can be better handled by AI, freeing professionals to focus on the quintessentially human 20% that requires judgment and strategic thinking.

    Jacob probes the definition of agentic AI, noting that almost no one knows what they mean when they use the term. He identifies at least four or five common but conflicting connotations. Borden clarifies that agentic AI is fundamentally a recommendation engine on steroids, where an AI subcomponent makes decisions based on parameters it's given as part of a larger orchestration. This aligns with Talbot West's emphasis on coordinated systems rather than autonomous agents making high-stakes decisions without oversight.

    Data value extraction emerges as a critical theme. Companies sit on information locked in emails and file systems. Properly curated knowledge bases combined with constitutionalized AI surface insights that distinguish products and services. A retail client's app pulls weather and event data to adjust operations dynamically, increasing cookie production before predicted afternoon rushes. Borden describes predictive compliance systems that monitor for behavior patterns correlating with fraud.

    The discussion addresses ensemble architectures that scale from individual modules to nested systems of systems. Specialized modules handle discrete tasks, feeding into domain ensembles that synthesize insights. Higher level meta-ensembles correlate patterns across domains, identifying coordinated activities invisible when viewing any single domain alone. Both speakers emphasize explainability and human oversight, with clear audit trails for every decision.

    Talbot West delivers Fortune 500 AI consulting to midmarket and enterprise organizations through its APEX framework and Cognitive Hive AI architecture.

    Visit talbotwest.com

    39 mins
  • Will AI Take All the Jobs? Jacob Andra and Stephen Karafiath Say No
    Oct 31 2025

    While people fear wholesale workforce replacement, the actual transformation is far more complex, and the outlook is ultimately more optimistic for organizations willing to adapt strategically.

    This episode cuts through the hype to examine three distinct zones of AI capability. First, tasks where AI excels at things humans could never do well, like fraud detection algorithms or protein folding analysis. Second, uniquely human domains like relationship building and creative problem solving across diverse contexts. And third, the contested middle ground where AI augments but doesn't replace human workers.

    Jacob and Stephen share real insights from Talbot West's consulting work, including an aerospace manufacturer case where their top recommendation wasn't an AI solution at all. It was hiring a human to orchestrate digital transformation across departments. This reveals a fundamental truth: the future isn't humans versus AI. It's humans working with AI as force multipliers.

    Large language models get conflated with AI itself, but they represent one narrow slice of available technology. They excel within certain domains but fail catastrophically when pushed beyond those boundaries. That's why Talbot West pursues two complementary approaches to expand AI capabilities beyond current LLM limitations.

    Neurosymbolic AI combines neural networks with symbolic logic structures. Think of AlphaGo, which paired a neural network exploring game possibilities with a mathematical language enforcing the rules. The neural component provides creativity and pattern matching. The symbolic structure keeps everything grounded in reality and prevents hallucinations.
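
    A toy sketch of that division of labor, with a stubbed "neural" scorer and a symbolic rules layer that no amount of model confidence can override:

    ```python
    # Toy sketch of the neural-proposes / symbolic-disposes pattern behind the
    # AlphaGo analogy. The "neural" scorer is a random stub; the symbolic layer
    # is a hard rule check.
    import random

    def legal_moves(board):              # symbolic layer: the rules of the game
        return [i for i, cell in enumerate(board) if cell == " "]

    def neural_score(board, move):       # stub for a learned policy network
        return random.random()

    def pick_move(board):
        candidates = legal_moves(board)  # hard constraint applied first
        if not candidates:
            raise ValueError("no legal moves")
        return max(candidates, key=lambda m: neural_score(board, m))

    board = ["X", " ", "O", " ", "X", " ", " ", "O", " "]
    print(pick_move(board))              # always legal, however the scorer misbehaves
    ```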

    Cognitive Hive AI takes a different approach by orchestrating multiple specialized AI modules into coordinated systems. A single large language model might serve as just one small component, perhaps handling translation between machine language and human users. Other modules handle specific tasks like sentiment analysis, predictive analytics, or compliance monitoring. Together, they create business capabilities no single AI could achieve alone.
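
    A minimal sketch of that orchestration pattern (module names invented for illustration; CHAI itself is not public code):

    ```python
    # Each module does one bounded job; the hive coordinates them into a capability.
    from typing import Callable, Dict

    class Hive:
        def __init__(self):
            self.modules: Dict[str, Callable[[dict], dict]] = {}

        def register(self, name: str, module: Callable[[dict], dict]):
            self.modules[name] = module

        def run(self, pipeline: list[str], payload: dict) -> dict:
            for name in pipeline:
                payload = self.modules[name](payload)
            return payload

    hive = Hive()
    hive.register("sentiment", lambda p: {**p, "sentiment": "negative"})
    hive.register("compliance", lambda p: {**p, "flagged": "refund" in p["text"]})
    hive.register("summarize", lambda p: {**p, "summary": p["text"][:40] + "..."})

    print(hive.run(["sentiment", "compliance", "summarize"],
                   {"text": "Customer demands a refund after repeated outages."}))
    ```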

    The MIT study claiming 95% of AI projects fail to see ROI likely reflects implementations that lacked this level of strategic thinking. When you bring proper analysis and architecture to AI deployment, returns become inevitable. Talbot West's customer feedback suggests near-universal satisfaction when projects are scoped correctly from the start.

    Organizations face a choice in how to handle this productivity multiplier. The short-term approach fires people and maintains current output with fewer workers. The strategic approach keeps the workforce intact and uses AI augmentation to scale operations dramatically without proportional headcount increases. Companies taking the second path position themselves for massive competitive advantage.

    This gets incredibly nuanced when you consider all the variables at play. Different job types face different displacement risks. Various AI technologies have different strengths and limitations. Neurosymbolic systems excel at different tasks than ensemble architectures. Single machine learning algorithms solve different problems than large language models. Understanding these distinctions matters enormously when planning organizational transformation.

    You absolutely need humans in your company, but the nature of their work will shift. AI involvement will vary dramatically across roles from 1% to 100% depending on the specific tasks and available technology. Success requires bringing rigorous analysis to determine exactly where and how AI augments your workforce.

    Learn more about Talbot West's approach to AI implementation: https://talbotwest.com

    19 mins
  • Agentic AI and Neurosymbolic AI: Jacob Andra Interviews Dr. Alexandra Pasi of Lucidity Sciences
    Oct 27 2025

    Two major ideas are shaping the next era of artificial intelligence: agentic AI and neurosymbolic AI. Talbot West CEO Jacob Andra and Lucidity Sciences CEO Dr. Alexandra Pasi bring together their complementary perspectives.

    They unpack the confusion surrounding the term “agentic.” The most common misuses fall into three categories.

    1. Digital employee. This use assumes an AI can fully replace a human role. In practice, jobs consist of overlapping tasks that depend on judgment, context, and social understanding. Substituting a human one-to-one with an AI system oversimplifies work and introduces risk.

    2. AI interacting with humans. Many products describe themselves as agentic simply because they interact with people. Yet a chatbot or outbound assistant is not necessarily intelligent or autonomous. Interface does not equal agency.

    3. Autonomous executor. Another common assumption is that an AI that performs tasks independently qualifies as agentic. Yet there are non-AI autonomous systems.

    Jacob proposes a definition that is specific enough for real-world planning: an AI function able to complete a task as part of a larger ensemble or capability. This definition treats agentic systems as modular and composable. Each agent performs a defined function within a coordinated network of systems. This approach moves the conversation from vague marketing language to measurable performance outcomes.
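
    Made concrete, that definition might look like the following hypothetical sketch: each agent is a named function with a declared task, and the capability emerges from composing them.

    ```python
    # Hypothetical sketch: an agent is a named function with a declared task,
    # composable into a larger capability. Names and stubs are invented.
    from typing import Protocol

    class Agent(Protocol):
        task: str
        def run(self, payload: str) -> str: ...

    class ExtractDeadline:
        task = "extract_deadline"
        def run(self, payload: str) -> str:
            return "2026-01-15"              # stub for a real extraction model

    class DraftReminder:
        task = "draft_reminder"
        def run(self, payload: str) -> str:
            return f"Reminder: filing deadline {payload}"

    def capability(agents: list[Agent], payload: str) -> str:
        for agent in agents:   # the ensemble, not any single agent, owns the outcome
            payload = agent.run(payload)
        return payload

    print(capability([ExtractDeadline(), DraftReminder()], "contract text ..."))
    ```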

    From there, the discussion turns to large language models. Both Jacob and Alexandra acknowledge their extraordinary power but also their limitations. LLMs have made AI accessible to everyone through natural language, allowing rapid knowledge retrieval, summarization, and idea generation. At the same time, language itself is a constraint. Human language was not built for exact quantitative reasoning or precise logical relationships. LLMs lose reliability when they are asked to maintain long context or handle tightly coupled data. The guests agree that these models should be viewed primarily as interface layers that help people and organizations communicate with structured information systems.

    The conversation then transitions to neurosymbolic AI, which combines neural networks and symbolic reasoning into a single architecture. The neural components are probabilistic and pattern-oriented. They generalize and infer. The symbolic components operate on defined rules and logical constraints. They ensure structure, coherence, and traceability. When combined, you get an intelligent system that is both adaptive and verifiable.

    Dr. Pasi explains how this concept has deep roots in earlier AI research. In some early mathematics experiments, language models were paired with formal systems like Lean to verify every logical step. In modern enterprise applications, this same hybrid pattern provides a way to reconcile innovation with control. It creates a bridge between the flexibility of learning models and the accountability required by governance and compliance.

    Jacob shares two Talbot West use cases that illustrate these ideas. The first involves enterprise evaluation and roadmapping. Many organizations have complex, organically grown processes and data flows that are difficult to map or optimize.

    The second example is BizForesight, a platform to help business owners understand and improve company value. It combines document ingestion, interviews, and machine learning within a defined symbolic framework. The symbolic layer enforces valuation logic and methodological integrity, while the neural layer interprets unstructured data and provides adaptive recommendations.
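
    A hedged sketch of that split, with invented names (the platform's real logic is not public): a stubbed neural component proposes a valuation multiple, and an explicit symbolic rule layer constrains it.

    ```python
    # Invented example of a symbolic guardrail over a neural suggestion.
    def neural_suggest_multiple(docs: str) -> float:
        return 11.2        # stub: a model reads filings, proposes an EBITDA multiple

    VALUATION_RULES = {    # symbolic layer: explicit, auditable constraints
        "min_multiple": 2.0,
        "max_multiple": 9.0,
    }

    def value_company(ebitda: float, docs: str) -> float:
        m = neural_suggest_multiple(docs)
        clamped = min(max(m, VALUATION_RULES["min_multiple"]),
                      VALUATION_RULES["max_multiple"])
        if clamped != m:
            print(f"Symbolic layer overrode neural multiple {m} -> {clamped}")
        return ebitda * clamped

    print(value_company(ebitda=3.5e6, docs="ingested documents ..."))
    ```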


    42 mins
  • Neurosymbolic AI and the Shortcomings of LLMs: Jacob Andra and Stephen Karafiath
    Oct 17 2025

    Large language models have captured headlines, but they represent only a fraction of what AI can accomplish. Talbot West co-founders Jacob Andra and Stephen Karafiath explore the fundamental limitations of LLMs and why neurosymbolic AI offers a more robust path forward for enterprise applications.

    LLMs sometimes display remarkable contextual awareness, like when ChatGPT proactively noticed specific tile flooring in a photo's background and offered unsolicited cleaning advice. These moments suggest genuine intelligence. But as Jacob and Stephen explain, push these systems harder and the cracks appear.

    The hosts examine specific failure modes that emerge when deploying LLMs at scale. Jacob documents persistent formatting errors where models swing between extremes—overusing lists, then refusing to use them at all, even when instructions explicitly define appropriate use cases. These aren't random glitches. They reveal systematic overcorrection behaviors where LLMs bounce off guardrails rather than operating within defined bounds.

    More troubling are the logical inconsistencies. When working with large corpuses of information, LLMs demonstrate what Jacob calls cognitive fallacies—errors that mirror human reasoning failures but stem from different causes. The models cannot maintain complex instructions across extended tasks. They hallucinate citations, fabricate data, and contradict themselves when context windows stretch too far. Even the latest reasoning models cannot eliminate certain habits, like the infamous em-dash overuse, no matter how explicitly you prompt against it.

    Stephen introduces the deny-affirm construction as another persistent pattern: "It's not X, it's Y" formulations that plague AI-generated content. Tell the model to avoid this construction and watch it appear anyway, sometimes in the very next paragraph. These aren't bugs to be patched. They're symptoms of fundamental architectural limitations.
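
    As a rough illustration, the construction is regular enough to catch with a crude pattern match (the regex below is illustrative, not an exhaustive inventory of the construction's variants):

    ```python
    # Crude detector for the deny-affirm pattern Stephen describes.
    import re

    DENY_AFFIRM = re.compile(
        r"\b(?:it'?s|this is|that'?s)\s+not\s+[^.,;]+[,;]\s*(?:it'?s|this is|but)\b",
        re.IGNORECASE,
    )

    text = "It's not a bug, it's a symptom of the architecture."
    print(bool(DENY_AFFIRM.search(text)))   # True
    ```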

    The solution lies in neurosymbolic AI, which combines neural networks with symbolic reasoning systems. Jacob and Stephen use an extended biological analogy: LLMs are like organisms without skeletons. A paramecium works fine at microscopic scale, but try to build something elephant-sized from the same squishy architecture and it collapses under its own weight. The skeleton—knowledge graphs, structured data, formal logic—provides the rigid structure necessary for complex reasoning at scale.

    Learn more about neurosymbolic approaches: https://talbotwest.com/ai-insights/what-is-neurosymbolic-ai

    About the hosts:

    Jacob Andra is CEO of Talbot West and serves on the board of 47G, a Utah-based public-private aerospace and defense consortium. He pushes the limits of what AI can accomplish in high-stakes use cases and publishes extensively on AI, enterprise transformation, and policy, covering topics including explainability, responsible AI, and systems integration.

    Stephen Karafiath is co-founder of Talbot West, where he architects and deploys AI solutions that bridge the gap between theoretical capabilities and practical business outcomes. His work focuses on identifying the specific failure modes of AI systems and developing robust approaches to enterprise implementation.

    About Talbot West:

    Talbot West delivers Fortune 500-level AI consulting and implementation to midmarket and enterprise organizations. The company specializes in practical AI deployment through its proprietary APEX (AI Prioritization and Execution) framework and Cognitive Hive AI (CHAI) architecture, which emphasizes modular, explainable AI systems over monolithic black-box models.

    Visit talbotwest.com to learn how we help organizations cut through AI hype and implement solutions that deliver measurable results.

    35 mins
  • Data Security in AI: Talbot West CEO Jacob Andra Interviews Scott Peiffer of i4Ops
    Oct 7 2025

    Most enterprise AI projects fail because companies hold back their data. They spend hundreds of thousands of dollars training models on sanitized datasets, afraid to expose sensitive information. They get generic answers that create no competitive advantage.

    In this episode, Scott Peiffer from i4Ops cuts through the AI hype to address the real challenge facing enterprises: how to deploy AI systems that actually create value while keeping proprietary data secure.

    What you'll learn

    Scott Peiffer brings 35 years of data storage and security experience from Intel, NetApp, and now i4Ops. He explains why the current approach to enterprise AI deployment produces disappointing results and what companies should do instead.

    The FOMO problem

    Companies receive mandates from the C-suite to "do AI" without clear objectives or strategy. Research shows 90% of these models fail to deliver value because organizations train them on limited data subsets, withholding their most valuable information out of security concerns. Employee data, sales conversations, customer support transcripts, and strategic documents remain locked away, resulting in AI systems that cannot deliver insights specific to the business.

    The challenge compounds when companies lack a systematic approach. They bolt new AI tools onto poorly designed foundations without addressing underlying digital infrastructure issues.

    Why digital transformation comes first

    Successful AI deployment requires a foundation in broader digital transformation strategy. Companies need to start with a clear end vision, map current systems and processes, and create a stepwise progression rather than bolting new tools onto poorly designed foundations. This means defining where you want to go (higher efficiency, preparing for acquisition, competitive advantage), understanding your current state through systems mapping, and identifying a practical path forward that does not break the bank or disrupt operations.

    Knowledge management as competitive advantage

    The future requires every competitive organization to maintain an in-house fine-tuned RAG system trained on company-specific knowledge. This means addressing fundamental questions about documentation, data quality, and information flow before implementing AI solutions. Scott emphasizes that approximately 75% of companies now use local data models rather than cloud solutions when dealing with sensitive information. The security wrapper stays in private data centers where organizations maintain complete control.
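
    As a toy illustration of the retrieval half of such a system, the sketch below builds a purely local TF-IDF index so nothing leaves the premises; a production deployment would swap in a locally hosted embedding model and vector store:

    ```python
    # Toy local retrieval: no data egress, no external API calls.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    docs = [
        "Q3 sales playbook: discount approvals above 15% require VP sign-off.",
        "Support transcript: customer churned after repeated billing errors.",
        "HR policy: contractors may not access the production database.",
    ]
    vec = TfidfVectorizer().fit(docs)
    index = vec.transform(docs)

    def retrieve(query: str, k: int = 1):
        scores = cosine_similarity(vec.transform([query]), index)[0]
        return [docs[i] for i in scores.argsort()[::-1][:k]]

    print(retrieve("who can approve a 20% discount?"))
    # retrieved passages are then handed to a locally hosted model for generation
    ```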

    The data security gap

    While data at rest and data in transit receive encryption protection, data in use remains vulnerable. When you download an Excel file to analyze it, that data sits unencrypted on your machine. You can copy it, manipulate it, or send it to competitors, whether by accident or malice. When employees ask public AI models to summarize files, that unencrypted data gets ingested into public language models.

    i4Ops' approach

    Rather than plugging holes after they appear, i4Ops uses a patented virtual machine approach that starts with a default of zero data egress. Data cannot leave the protected environment unless explicitly whitelisted, regardless of credentials or authentication methods.
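
    The enforcement sits at the virtual machine layer rather than in application code, but the policy logic reduces to a default-deny check along these lines (hostnames invented):

    ```python
    # Default-deny egress: everything is blocked unless explicitly whitelisted.
    ALLOWED_DESTINATIONS = {"reports.internal.example.com"}

    def egress_allowed(destination: str) -> bool:
        return destination in ALLOWED_DESTINATIONS

    for host in ("reports.internal.example.com", "api.public-llm.example.com"):
        print(host, "->", "ALLOW" if egress_allowed(host) else "DENY")
    ```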

    Where AI creates the most value

    Beyond the obvious cost savings in customer support and repetitive tasks, AI delivers transformational value when companies train models on their complete proprietary datasets to solve specific business problems. Scott describes how his team solved a weeks-long coding problem in hours by training a model exclusively on their kernel code. They asked two questions and had their answer.

    Produced by Talbot West, a digital transformation and AI consultancy.

    32 mins
  • AI is Much More Than LLMs: Jacob Andra Interviews Dr. Alexandra Pasi on Cutting-Edge Machine Learning
    Sep 29 2025

    Dr. Alexandra Pasi (Lucidity Sciences) joins Talbot West CEO Jacob Andra to explore why conflating AI with large language models creates blind spots in enterprise technology strategy. With 15 years in machine learning, Dr. Pasi brings mathematical rigor to practical AI deployment.

    Key discussion topics

    Jacob identifies the linguistic synecdoche in AI discourse: taking LLM characteristics like hallucination and incorrectly applying them to all AI. Dr. Pasi expands on this, explaining that LLMs are just one application of AI to language data. The broader landscape includes supervised learning, computer vision, anomaly detection, and time series forecasting that operate on different principles.

    When Jacob presents real-world scenarios, Dr. Pasi demonstrates technology selection. For supply chain optimization, she recommends supervised structured learning over LLMs. These problems need historical data analysis and forecasting under new conditions. LLMs lack organizational context and carry irrelevant noise. For structured data in spreadsheets or databases, specialized models outperform language models.
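
    For illustration, the kind of supervised structured model she describes might look like this sketch on invented supply chain features:

    ```python
    # Illustrative supervised model on structured data; features are invented.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(42)
    n = 500
    X = np.column_stack([
        rng.integers(1, 53, n),     # week of year
        rng.uniform(0, 30, n),      # days of inventory on hand
        rng.uniform(0.8, 1.2, n),   # supplier lead-time index
    ])
    y = 100 + 5 * np.sin(X[:, 0] / 8) - 2 * X[:, 1] + 30 * X[:, 2] + rng.normal(0, 3, n)

    model = GradientBoostingRegressor().fit(X[:400], y[:400])
    print("holdout R^2:", round(model.score(X[400:], y[400:]), 3))
    ```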

    The generalizability problem

    Dr. Pasi reveals why machine learning often fails: models excel on training data but collapse in production. AutoML combines multiple models for good initial fit but poor generalization. Her company's AF1 technology addresses this through new mathematical frameworks that find non-linear patterns traditional algorithms miss.

    Three implementations demonstrate this approach. In clinical care, AF1 predicts ICU pressure injuries better than 80 AutoML models combined. Financial trading applications find actual market dynamics rather than historical coincidences. Particle physics implementations detect rare events without losing signal in noise.

    Digital transformation insights

    Organizations miss opportunities by automating tasks without questioning why they exist. Dr. Pasi explains how companies created siloed roles that now reveal workflow gaps when automated. The real value comes from reorganizing information flow, not just automating existing processes.

    For problems without historical data, she describes using directed acyclic graphs to map causality, then generating synthetic data with controlled variations. This enables simulation and optimization without costly real-world experiments.
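
    A minimal sketch of that idea: encode the assumed causal DAG as structural equations, then sample synthetic datasets under controlled variation (all relationships invented for illustration):

    ```python
    # DAG-driven synthetic data with invented structural equations.
    import numpy as np

    rng = np.random.default_rng(7)

    def sample(n, promo_boost=1.0):
        # DAG: season -> demand; promo -> demand; demand -> revenue
        season = rng.uniform(0, 1, n)
        promo = rng.integers(0, 2, n)
        demand = 50 + 40 * season + promo_boost * 20 * promo + rng.normal(0, 5, n)
        revenue = 9.5 * demand + rng.normal(0, 20, n)
        return demand, revenue

    # Controlled variation: simulate a world where promotions work twice as well
    base = sample(10_000)[1].mean()
    boosted = sample(10_000, promo_boost=2.0)[1].mean()
    print(f"mean revenue: baseline {base:.0f}, stronger promos {boosted:.0f}")
    ```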

    Practical implementation guidance

    Both experts emphasize starting with business problems, not technology. Many challenges need basic algebra, not complex AI. Dr. Pasi advocates explicit modeling for understood problems, adding machine learning only where external variables create uncertainty.

    She addresses risk concerns, noting hallucination affects only certain AI types, not supervised learning on structured data. Warning against compute-heavy solutions with surprise cloud bills, she recommends lightweight alternatives that maintain accuracy while enabling edge deployment on mobile devices and wearables.

    Success requires identifying where AI impacts the P&L. The best-executed project means nothing without clear financial outcomes or defined steps in a roadmap showing ROI.

    About The Applied AI Podcast

    The Applied AI Podcast delivers practical AI implementation guidance for enterprise, government, and defense sectors. Produced by Talbot West, episodes feature practitioners deploying AI in production, translating capabilities into business outcomes.

    Subscribe at https://appliedaipod.com
    Learn more at talbotwest.com

    #AppliedAI #MachineLearning #EnterpriseAI #StructuredData #SupervisedLearning #ComposableAI #ModularAI #AIStrategy #DigitalTransformation #PredictiveAnalytics


    39 mins