• Ep 102: Enabling an Intelligent, Efficient, and Human-Centered Hiring Experience with Adam Gordon
    Jan 23 2026
    In this insightful and forward-looking conversation, Bob Pulver speaks with Adam Gordon, co-founder and CEO of Poetry, about the rise of hiring enablement and how AI can be used to create consistency, speed, and scalability in talent acquisition. Adam reflects on his entrepreneurial journey from Candidate.ID to Poetry, unpacks the MOLT framework (Marketing, Operations, Learning, Tools), and explains how Poetry integrates AI to support recruiters and hiring managers with streamlined processes and guardrails to ensure quality and compliance. They also explore deeper workforce challenges like trust, burnout, and AI’s societal impact—especially in the context of shrinking employee tenure and the future of work.

    Keywords: Adam Gordon, Poetry, hiring enablement, recruiter enablement, AI agents, MOLT framework, Candidate.ID, talent acquisition, recruiter productivity, ATS integration, AI guardrails, employer brand, candidate experience, AI governance, trust in leadership, DEI, burnout, workforce automation, staffing industry, responsible AI, talent intelligence

    Takeaways:
    Adam Gordon’s journey from recruiting to tech entrepreneurship has been shaped by the need to empower recruiters with better tools and processes.
    Poetry was created as a hiring enablement workspace to reduce reliance on fragmented point solutions and to streamline recruiter workflows.
    The MOLT framework (Marketing, Operations, Learning, Tools) organizes recruiter needs in a way that supports end-to-end hiring activity.
    Poetry emphasizes product design simplicity and consistency, integrating AI without exposing users to the risks of hallucination or inconsistent prompts.
    Recruiters using Poetry can save up to 25% of their time per day, but there’s concern about how organizations reinvest those gains.
    Guardrails are built into Poetry to ensure a consistent employer brand, tone, and candidate experience—especially important given drops in organizational trust.
    The move from “recruiter enablement” to “hiring enablement” reflects how recruiters and hiring managers must work together in today’s TA ecosystems.
    A new Poetry workspace tailored for staffing companies is set to launch in Q2 2026, signaling the platform’s evolution and market expansion.

    Quotes:
    “Recruiting is a team sport.”
    “We’ve put such strong guardrails in place, it’s not possible for Poetry to hallucinate.”
    “We wanted to eliminate recruiters having to log into 30 different tools to do their job.”
    “I’ve described it as an age of employment brutality—CEOs don’t want more people on payroll.”
    “The trust barometer is dropping, and without trust, the candidate experience and employer brand collapse.”
    “Just because you can build something doesn’t mean you’ve built a technology company.”

    Chapters:
    00:00 - Introduction and Adam’s Background
    01:17 - From Social Media Search to Candidate.ID
    05:32 - The Vision Behind Poetry
    07:27 - Simplicity, Product Design, and AI Agents
    09:16 - MOLT: Marketing, Operations, Learning, Tools
    11:16 - ATS Integration and 25% Time Savings
    14:05 - The Reinvestment Dilemma
    18:34 - Talent Intelligence and Bite-Sized Research
    22:01 - Guardrails Over Free Prompting
    24:51 - Mitigating Risk and Ensuring Consistency
    29:58 - From Recruiter to Hiring Enablement
    33:40 - Empowering Employer Brand and Talent Attraction
    37:50 - The Importance of Trust and Communication
    43:25 - Turnover, Tenure, and the Workforce Equation
    49:22 - Responsible AI and Societal Impact
    54:35 - Creative AI Tools and Industry Disruption
    56:44 - Building a Scalable Tech Company
    59:46 - 2026 Preview: Poetry for Staffing Companies

    Adam Gordon: https://www.linkedin.com/in/adamwgordon/
    Poetry: https://www.poetryhr.com/

    For advisory work and marketing inquiries:
    Bob Pulver: https://linkedin.com/in/bobpulver
    Elevate Your AIQ: https://elevateyouraiq.com
    Substack: https://elevateyouraiq.substack.com
    59 mins
  • Ep 101: Reshaping the Workforce Through Sensemaking and Trusted Talent Intelligence with Vijay Swami
    Jan 16 2026
    Bob Pulver talks with Vijay Swami, Co-Founder and CEO of Draup, a global leader in AI-powered talent intelligence. Vijay shares his journey from early roles in call center forecasting to founding a management consultancy and then TalentNeuron, later acquired by CEB. With deep roots in data science and a vision for empowering internal analytics teams, Vijay built Draup to tackle labor market complexity using advanced AI, unstructured data, and rich taxonomies. Vijay and Bob discuss building trusted, AI-powered talent intelligence platforms that bridge data complexity and business decision-making, and how human-centric, explainable AI is reshaping strategic workforce planning. They cover the growing importance of verification skills, ethical AI practices, the future of people analytics, the architecture of trusted and explainable AI systems, and the evolving role of humans and agents in enterprise workflows.

    Keywords: Vijay Swami, Draup, AI in HR, People Analytics, Strategic Workforce Planning, verification skills, ethical AI, talent intelligence, agentic AI, skills-based hiring, cloud data, explainability, trust, synthetic data, digital twins, ETTER, Curie, job displacement, augmented intelligence, transparency

    Takeaways:
    AI’s value in HR lies in sense-making from complex and unstructured data, not just simplifying workflows.
    Verification skills—like content and narrative validation—are emerging as critical in a world flooded with AI-generated data.
    Draup’s AI agent Curie supports HR and analytics professionals with leadership-ready narratives and scenario planning.
    The platform’s ETTER model goes beyond job descriptions to assess real work through contracts, SLAs, and KPIs.
    Transparency and traceability are foundational to building trust in AI systems; Draup compares its models against industry benchmarks.
    Ethical AI practices include open documentation, interpretability, and empowering analysts to correct or clarify information.
    AI should not be viewed solely as a job killer; clear, specific skills definitions in job postings can increase hiring and help target investments.
    True transformation requires shifting from jobs to workflows and task orchestration, blending human effort, AI agents, and automation.

    Quotes:
    “We want to tell the story—not just show the data—to help people analytics become a leadership engine.”
    “Verification skills are the next battery of capabilities organizations must build for a trustworthy enterprise.”
    “Transparency is about giving customers the right to know—even if they don’t ask.”
    “HR has the opportunity to become heroes in this AI wave by unlocking the true nature of work.”
    “We should be therapists for data anxiety—helping organizations see what’s real versus what’s a myth.”
    “I’m a net AI job creator guy—because there’s no shortage of work, just a need to match skills and workflows more intelligently.”

    Chapters:
    00:05 - Introduction and Vijay’s background
    00:57 - From forecasting analyst to AI-powered platforms
    03:18 - Rethinking labor intelligence beyond job descriptions
    05:39 - Building a sense-making engine from complex data
    07:42 - Storytelling, context, and executive alignment
    11:15 - The rise of verification skills
    14:04 - Creating a trusted and transparent AI ecosystem
    19:31 - Unlocking the true nature of work through ETTER
    22:44 - Ethical AI and human-centric design
    32:19 - How data becomes a therapeutic tool
    35:14 - AI’s real impact on jobs and skills demand
    45:25 - Strategic work planning beyond job roles
    49:19 - Optimism, augmentation, and future-proofing teams
    50:34 - Closing thoughts and appreciation

    Vijay Swami: https://www.linkedin.com/in/vijay-swaminathan-a44101/
    Draup: https://draup.com/

    For advisory work and marketing inquiries:
    Bob Pulver: https://linkedin.com/in/bobpulver
    Elevate Your AIQ: https://elevateyouraiq.com
    Substack: https://elevateyouraiq.substack.com
    50 mins
  • Ep 100: Pulverizing the Journey to Human-Centric AI Readiness with Bob Pulver
    Jan 15 2026
    In this milestone 100th episode, host Bob Pulver reflects on the journey of Elevate Your AIQ, sharing why he started the podcast, what he’s learned from nearly 100 conversations, and what’s ahead for the show and its community. He revisits recurring themes such as AI literacy, responsible innovation, and human-centric transformation—connecting them to his personal experiences, professional background, and passion for empowering others. This solo conversation is both a look back and a call to action for individuals and organizations to embrace AI thoughtfully and elevate their AIQ together.

    Keywords: AIQ, AI literacy, responsible AI, human-centric design, talent transformation, skills-based hiring, human potential, CHRO of the future, work redesign, education reform, podcasting, Substack, transformation leaders, automation strategy, AI readiness, AI ethics, trust, transparency, fairness, lifelong learning, community, AI-powered workforce

    Takeaways:
    Podcasting is a powerful outlet for exploring curiosity, storytelling, and continuous learning—especially for neurodivergent thinkers.
    Human-centric AI readiness is not just about tools or tech—it’s about mindset, adaptability, and lifelong learning.
    AIQ exists on three levels: individual, team, and organizational—each requiring a blend of skills, tools, and ethical judgment.
    Responsible AI is central to modern transformation—touching on transparency, fairness, ethics, and explainability.
    CHROs and people leaders have dual responsibilities as strategic architects of work and catalysts for responsible innovation.
    Hiring for skills and potential—rather than pedigree—is crucial to unlocking hidden talent and countering bias.
    Education and talent development must evolve to equip students and workers with the durable skills of the AI-powered future.
    Communities of practice and peer generosity are vital to collective learning and resilience in this era of rapid change.

    Quotes:
    “Use AI where you should, not wherever you can.”
    “We’ve always adapted to new technologies—this time is no different.”
    “Human-centricity and human potential are key overarching themes of this show, and of the future of work.”
    “AIQ isn’t just about literacy—it’s about readiness, judgment, and mindset.”
    “If you are a DEI advocate, you are now a responsible AI advocate.”
    “You can control your own destiny—you’re capable of more than you think.”

    Chapters:
    00:00 Welcome and Gratitude for Episode 100
    00:50 Human-Centric AI and the Purpose of the Show
    02:32 Authenticity, Creativity, and Focus
    04:35 My Background: Corporate to Independent
    07:18 Early Exposure to AI at IBM and Personal Stakes
    09:55 Start with Processes and Business Challenges, Not Tech
    11:48 Three Levels of AIQ: Individual, Team, Org
    13:45 Beyond Prompting: Augmenting Capabilities
    15:20 Responsible AI: Use and Design
    17:30 The Role of Trust, Transparency, and Fairness
    19:50 DEI and Responsible AI Are Inseparable
    21:10 Skills-Based Hiring and Hidden Potential
    23:00 Designing Work for Human + AI Partnership
    25:40 Lifelong Learning and the Future of Education
    27:20 CHROs as Architects and Innovation Catalysts
    29:30 Offense and Defense in Responsible Innovation
    31:00 A Call to Action for Listeners and the Community
    32:10 What’s Next: Live Shows, Events, Writing, and Community
    33:20 Closing Gratitude and Future Outlook

    For advisory work and marketing inquiries:
    Bob Pulver: https://linkedin.com/in/bobpulver
    Elevate Your AIQ: https://elevateyouraiq.com
    Substack: https://elevateyouraiq.substack.com
    33 mins
  • Ep 99: Advancing Human-Centered AI and Collaborative Intelligence with Ross Dawson
    Dec 26 2025
    Bob Pulver sits down with Ross Dawson, world-renowned futurist, serial entrepreneur, and creator of the Humans + AI community. With decades of foresight expertise, Ross shares his evolving vision of human-AI collaboration — from systems-level transformation to individual cognitive augmentation. The conversation explores why organizations must reframe their approach to talent, capability, and value creation in the age of AI, and how human agency, trust, and fluid talent models will define the future of work.

    Keywords: Ross Dawson, Humans + AI, AI roadmap, ThoughtWeaver, AI teaming, digital twins, augmented thinking, talent marketplaces, future of work, systems thinking, AI in organizations, AI in education, trust in AI, AI-enabled teams, cognitive diversity, latent talent, fluid talent, organizational design

    Takeaways:
    The “Humans + AI” framework centers on complementarity, not substitution — AI should augment and elevate human potential.
    AI maturity is not just technical — it requires cultural readiness, mindset shifts, and systems-level thinking.
    Trust in AI must be calibrated; both over-trusting and under-trusting limit value creation.
    AI-enabled teams will rely on clear role design, thoughtful delegation of decision rights, and frameworks for collaborative intelligence.
    Digital twins and AI agents offer different organizational advantages — one mimics individuals, the other scales domain expertise.
    Organizations must reimagine work as networks of capabilities, not boxes of job descriptions.
    Talent marketplaces are an early expression of fluid workforce models but require intentional design and leadership buy-in.
    The most human-centric organizations will be best positioned to attract talent and thrive in the AI era.

    Quotes:
    “AI should always be a complement to humans — not a substitute.”
    “We live in a humans + AI world already. The question is how we shape it.”
    “Mindset really frames how much value we can get from AI — individually and societally.”
    “You know more than you can tell. That gap between tacit knowledge and what AI can access is where humans still shine.”
    “Start with a vision — not a headcount reduction. Ask what kind of organization you want to become.”
    “We can use AI not just to apply existing capabilities but to uncover and expand them.”

    Chapters:
    00:00 - Welcome and Ross Dawson’s introduction
    01:10 - From futurism to Humans + AI: key focus areas
    03:30 - How AI is shifting public curiosity and mindset
    06:00 - Systems-level thinking and responsible AI use
    08:20 - AI in education and enterprise transformation
    11:10 - The rise of AI-augmented thinking
    14:00 - Calibrating trust in AI and human roles in teams
    17:00 - Designing humans + AI teaming frameworks
    20:30 - Delegation models and decision architecture
    23:20 - Digital twins vs synthetic AI agents
    26:00 - The value of tacit knowledge and cognitive diversity
    30:00 - Empowering individuals amidst career uncertainty
    32:10 - Breaking out of job “boxes” with fluid talent models
    35:00 - Talent marketplaces and barriers to adoption
    38:00 - Human-centric leadership in AI-powered transformation
    41:00 - Strategic roadmaps and vision-led change
    45:30 - Ross’s personal AI tools and experiments
    52:00 - Final thoughts on AI’s role in augmenting human creativity

    Ross Dawson: https://www.linkedin.com/in/futuristkeynotespeaker
    Humans + AI: https://humansplus.ai

    For advisory work and marketing inquiries:
    Bob Pulver: https://linkedin.com/in/bobpulver
    Elevate Your AIQ: https://elevateyouraiq.com
    Substack: https://elevateyouraiq.substack.com
    53 mins
  • Ep 98: Empowering an AI-Ready Generation to Learn, Create, and Lead with Jeff Riley
    Dec 19 2025
    Bob Pulver speaks with Jeff Riley, former Massachusetts Commissioner of Education and Executive Director of Day of AI, a nonprofit launched out of MIT. They explore the urgent need for AI literacy in K-12 education, the responsibilities of educators, parents, and policymakers in the AI era, and how Day of AI is building tools, curricula, and experiences that empower students to engage with AI critically and creatively. Jeff shares both inspiring examples and sobering warnings about the risks and rewards of AI in the hands of the next generation.

    Keywords: Day of AI, MIT RAISE, responsible AI, AI literacy, K-12 education, student privacy, AI companions, Common Sense Media, AI policy, AI ethics, educational technology, AI curriculum, teacher training, creativity, critical thinking, digital natives, student agency, future of education, AI and the arts, cognitive offloading, generative AI, AI hallucinations, PISA 2029, AI festival

    Takeaways:
    Day of AI is equipping teachers, students, and families with tools and curricula to understand and use AI safely, ethically, and productively.
    AI literacy must start early and span disciplines; it’s not just for coders or computer science classes.
    Students are already interacting with AI — often without adults realizing it — including the widespread use of AI companions.
    A core focus of Day of AI is helping students develop a healthy skepticism of AI tools, rather than blind trust.
    Writing, critical thinking, and domain knowledge are essential guardrails as students begin to use AI more frequently.
    The AI Festival and student policy simulation initiatives give youth a voice in shaping the future of AI governance.
    AI presents real risks — from bias and hallucinations to cognitive offloading and emotional detachment — especially for children.
    Higher education and vocational programs are beginning to respond to AI, but many are still behind the curve.

    Quotes:
    “AI is more powerful than a car — and yet we’re throwing the keys to our kids without requiring any kind of driver’s ed.”
    “We want kids to be skeptical and savvy — not just passive consumers of AI.”
    “Students are already using AI companions, but most parents have no idea. That gap in awareness is dangerous.”
    “Writing is thinking. If we outsource writing, we risk outsourcing thought itself.”
    “The U.S. invented AI — but we risk falling behind on AI literacy if we don’t act now.”
    “Our goal isn’t to scare people. It’s to prepare them — and let young people lead where they’re ready.”

    Chapters:
    00:00 - Welcome and Introduction to Jeff Riley
    01:11 - From Commissioner to Day of AI
    02:52 - MIT Partnership and the Day of AI Mission
    04:13 - Global Reach and the Need for AI Literacy
    06:37 - Resources and Curriculum for Educators
    08:18 - Defining Responsible AI for Kids and Schools
    11:00 - AI Companions and the Parent Awareness Gap
    13:51 - Critical Thinking and Cognitive Offloading
    16:30 - Student Data Privacy and Vendor Scrutiny
    21:03 - Encouraging Creativity and the Arts with AI
    24:28 - PISA’s New AI Literacy Test and National Readiness
    30:45 - Staying Human in the Age of AI
    34:32 - Higher Ed’s Slow Adoption of AI Literacy
    39:22 - Surfing the AI Wave: Teacher Buy-In First
    42:35 - Student Voice in AI Policy
    46:24 - The Ethics of AI Use in Interviews and Assessments
    53:25 - Creativity, No-Code Tools, and Future Skills
    55:18 - Final Thoughts and Festival Info

    Jeff Riley: https://www.linkedin.com/in/jeffrey-c-riley-a110608b
    Day of AI: https://dayofai.org

    For advisory work and marketing inquiries:
    Bob Pulver: https://linkedin.com/in/bobpulver
    Elevate Your AIQ: https://elevateyouraiq.com
    Substack: https://elevateyouraiq.substack.com
    57 mins
  • Ep 97: Challenging the AI Narrative and Redefining Digital Fluency with Jeff and MJ Pennington
    Dec 12 2025
    Bob Pulver sits down with Jeff Pennington, former Chief Research Informatics Officer at the Children’s Hospital of Philadelphia (CHOP) and author of You Teach the Machines, and his daughter Mary Jane (MJ) Pennington, a recent Colby College graduate working in rural healthcare analytics. Jeff and MJ reflect on the real-time impact of AI across generations—from how Gen Z is navigating AI’s influence on learning and careers, to how large institutions are integrating AI technologies. They dig into themes of trust, disconnection, data quality, and what it truly means to be future-proof in the age of AI.

    Keywords: AI literacy, Gen Z, future of work, healthcare AI, trusted data, responsible AI, education, automation, disconnection, skills, strategy, adoption, social media, transformation

    Takeaways:
    Gen Z’s experience with AI is shaped by a rapid-fire sequence of disruptions: COVID, remote learning, and now generative AI.
    Both the podcast and the book You Teach the Machines serve as a “time capsule” for capturing AI’s societal impact.
    Organizations are inadvertently cutting off AI-native talent from the workforce.
    Misinformation, over-hype, and poor PR from big tech are fueling widespread public fear and distrust of AI.
    AI adoption must move from top-down mandates to bottom-up innovation, empowering frontline workers.
    Data quality is a foundational issue, especially in healthcare and other high-stakes domains.
    The real opportunity is in leveraging AI to elevate human work through augmentation, creativity, and access.
    Disconnection and over-reliance on AI are emerging as long-term social risks, especially for younger generations.

    Quotes:
    “It’s a universal fear now. Everyone has to ask: what makes you AI-proof?”
    “The vitality of democracy depends on popular knowledge of complex questions.”
    “We're not being given the option to say no to any of this.”
    “I’m 100% certain the current winners in AI will not be the winners in five to ten years.”

    Chapters:
    00:02 Welcome and Guest Introductions
    00:48 MJ’s Path: From Computational Biology to Rural Healthcare
    01:52 Why They Launched the Podcast You Teach the Machines
    03:25 Jeff’s Work at CHOP and the Pediatric LLM Project
    06:47 Making AI Understandable: The Book’s Purpose
    09:11 Navigating Fear and Trust in AI Headlines
    11:31 Gen Z, AI-Proof Careers, and Entry-Level Job Loss
    16:33 Why Resilience is Gen Z’s Underrated Superpower
    18:48 Disconnection, Dopamine, and the Social Cost of AI
    22:42 AI’s PR Problem and the Survival Signals We're Ignoring
    25:58 Chatbots as Addictive Companions: Where It Gets Dark
    29:56 Choosing to Innovate: A More Hopeful AI Future
    32:11 The Dirty Truth About Data Quality and Trust
    36:20 How a Brooklyn Coffee Company Fine-Tuned AI with Their Own Data
    40:12 Why “Throwing AI on It” Isn’t a Strategy
    44:20 Measuring Productivity vs. Driving Meaningful Change
    48:22 The Real ROI: Empowering People, Not Eliminating Them
    53:26 Healthcare’s Lazy AI Priorities (and What We Should Do Instead)
    57:12 How Gen Z Was Guided Toward Coding—And What Happens Now
    59:37 Dependency, Education, and Democratizing Understanding
    1:04:22 AI’s Impact on Educators, Students, and Assessment
    1:07:03 The Real Threat Isn’t Just Job Loss—It’s Human Disconnection
    1:10:01 Defaulting to AI: Why Saying "No" Is No Longer an Option
    1:12:30 Final Thoughts and Where to Find Jeff and MJ’s Work

    Jeff Pennington: https://www.linkedin.com/in/penningtonjeff/
    Mary Jane Pennington: https://www.linkedin.com/in/maryjane-pennington-31710a175/
    You Teach The Machines (book): https://www.audible.com/pd/You-Teach-the-Machines-Audiobook/B0G27833N9
    You Teach The Machines (podcast): https://open.spotify.com/show/4t6TNeuYTaEL1WbfU5wsI0?si=bb2b1ec0b53d4e4e

    For advisory work and marketing inquiries:
    Bob Pulver: https://linkedin.com/in/bobpulver
    Elevate Your AIQ: https://elevateyouraiq.com
    Substack: https://elevateyouraiq.substack.com
    1 hr and 10 mins
  • Ep 96: Building Learning Communities for a Responsible Future of Work with Enrique Rubio
    Dec 5 2025
    Bob Pulver sits down with community builder and HR influencer Enrique Rubio, founder of Hacking HR. Enrique shares his journey from engineering to HR, his time building multiple global communities, and why he ultimately returned “home” to Hacking HR to pursue its mission of democratizing access to high-quality learning. Bob and Enrique discuss the explosion of AI programs, the danger of superficial “prompting” education, the urgent need for governance and ethics, and the risks organizations face when employees use AI without proper training or oversight. It’s an honest, energizing conversation about community, trust, and building a responsible future of work.

    Keywords: Enrique Rubio, Hacking HR, Transform, community building, democratizing learning, HR capabilities, AI governance, AI ethics, shadow AI, responsible AI, critical thinking, AI literacy, organizational risk, data privacy, HR community, learning access, talent development

    Takeaways:
    Hacking HR was founded to close capability gaps in HR and democratize access to world-class learning at affordable levels.
    The community’s growth accelerated during COVID when others paused events; Enrique filled the gap with accessible virtual learning.
    Many AI programs focus narrowly on prompting rather than teaching leaders to think, govern, and transform responsibly.
    Companies must assume employees and managers are already using AI and provide clear do’s and don’ts to mitigate risk.
    Untrained use of AI in hiring, promotions, and performance management poses serious liability and fairness concerns.
    Critical thinking is declining, and generative AI risks accelerating that trend unless individuals stay engaged in the reasoning process.
    Community must be built for the right reasons—transparency, purpose, and service—not just lead generation or monetization.
    AI strategies often overlook workforce readiness; literacy and governance are as important as tools and efficiency goals.

    Quotes:
    “Hacking HR is home for me.”
    “We’re here to democratize access to great learning and great community.”
    “Prompting is becoming an obsolete skill—leaders need to learn how to think in the age of AI.”
    “Assume everyone creating something on a computer is using AI in some capacity.”
    “If managers make decisions based on AI without training, that’s a massive liability.”
    “Most AI strategies can be summarized in one line: we’re using AI to be more efficient and productive.”

    Chapters:
    00:00 Catching up and meeting in person at recent events
    01:18 Enrique’s career journey and return to Hacking HR
    04:43 Democratizing learning and supporting a global HR community
    07:17 The early days of running virtual conferences alone
    09:39 Why affordability and access are core to Hacking HR’s mission
    13:13 The rise of AI programs and the noise in the market
    15:58 Prompting vs. true strategic AI leadership
    18:21 The importance of community intent and transparency
    20:42 Training leaders to think, reskill, and govern in the age of AI
    23:05 Dangers of data misuse, privacy gaps, and dark-web training sets
    26:08 Critical thinking decline and AI’s impact on cognition
    29:16 Trust, data provenance, and risks in recruiting use cases
    31:48 The need for organizational AI manifestos
    32:47 Managers using AI for people decisions without training
    35:12 Why governance is essential for fairness and safety
    39:12 The gap between stated AI strategies and people readiness
    43:54 Accountability across the AI vendor chain
    46:18 Who should lead AI inside organizations
    49:28 Responsible innovation and redesigning work
    53:06 Enrique’s personal AI tools and closing reflections

    Enrique Rubio: https://www.linkedin.com/in/rubioenrique
    Hacking HR: https://hackinghr.io

    For advisory work and marketing inquiries:
    Bob Pulver: https://linkedin.com/in/bobpulver
    Elevate Your AIQ: https://elevateyouraiq.com
    Substack: https://elevateyouraiq.substack.com
    55 mins
  • Ep 95: Confronting the Realities of Successful AI Transformation with Sandra Loughlin
    Nov 28 2025
    Bob Pulver and Sandra Loughlin explore why most narratives about AI-driven job loss miss the mark and why true productivity gains require deep changes to processes, data, and people—not just new tools. Sandra breaks down the realities of synthetic experts, digital twins, and the limits of current enterprise data maturity, while offering a grounded, hopeful view of how humans and AI will evolve together. With clarity and nuance, she explains the four pillars of AI literacy, the future of work, and why leaning into AI—despite discomfort—is essential for progress.

    Keywords: Sandra Loughlin, EPAM, learning science, transformation, AI maturity, synthetic agents, digital twins, job displacement, data infrastructure, process redesign, AI literacy, enterprise AI, productivity, organizational change, responsible innovation, cognitive load, future of work

    Takeaways:
    Claims of massive AI-driven job loss overlook the real drivers: cost-cutting and reinvestment, not productivity gains.
    True AI value depends on re-engineering workflows, not automating isolated tasks.
    Synthetic experts and digital twins will reshape expertise, but context and judgment still require humans.
    Enterprise data bottlenecks—not technology—limit AI’s ability to scale.
    Humans need variability in cognitive load; eliminating all “mundane” work isn’t healthy or sustainable.
    AI natives—companies built around data from day one—pose real disruption threats to incumbents.
    Productivity gains may increase demand for work, not reduce it, echoing Jevons’ Paradox.
    AI literacy requires understanding technology, data, processes, and people—not just tools.

    Quotes:
    “Only about one percent of the layoffs have been a direct result of productivity from AI.”
    “If you automate steps three and six of a process, the work just backs up at four and seven.”
    “Synthetic agents trained on true expertise are what people should be imagining—not email-writing bots.”
    “AI can’t reflect my judgment on a highly complex situation with layered context.”
    “To succeed with AI, we have to lean into the thing that scares us.”
    “Humans can’t sustain eight hours of high-intensity cognitive work—our brains literally need the boring stuff.”

    Chapters:
    00:00 Introduction and Sandra’s role at EPAM
    01:39 Who EPAM serves and what their engineering teams deliver
    03:40 Why companies misunderstand AI-driven job loss
    07:28 Process bottlenecks and the real limits of automation
    10:51 AI maturity in enterprises vs. AI natives
    14:11 Why generic LLMs fail without specialized expertise
    16:30 Synthetic agents and digital twins
    18:30 What makes workplace AI truly dangerous—or transformative
    23:20 Data challenges and the limits of enterprise context
    26:30 Decision support vs. fully autonomous AI
    31:48 How organizations should think about responsibility and design
    34:21 AI natives and market disruption
    36:28 Why humans must lean into AI despite discomfort
    41:11 Human trust, cognition, and the need for low-intensity work
    45:54 Responsible innovation and human-AI balance
    50:27 Jevons’ Paradox and future work demand
    54:25 Why HR disruption is coming—and why that can be good
    58:15 The four pillars of AI literacy
    01:02:05 Sandra’s favorite AI tools and closing thoughts

    Sandra Loughlin: https://www.linkedin.com/in/sandraloughlin
    EPAM: https://epam.com

    For advisory work and marketing inquiries:
    Bob Pulver: https://linkedin.com/in/bobpulver
    Elevate Your AIQ: https://elevateyouraiq.com
    Substack: https://elevateyouraiq.substack.com
    1 hr and 3 mins