• How To Scale AI in Digital Commerce Effectively
    Jan 14 2026

    Digital commerce teams rarely lack ideas. Most understand how AI, data, and personalisation could improve customer experiences. The problem, as explored in this episode of Don’t Panic, It’s Just Data, is turning those ideas into something that works at scale, in real time, and without slowing the business down.

    Hosted by Dana Gardner, Principal Analyst at Interarbor Solutions, the discussion brings together Jürgen Obermann, Senior GTM Leader EMEA, and Piotr Kobziakowski, Senior Principal Solutions Architect, both from Vespa.ai. Rather than focusing on hype, the conversation centres on the everyday realities of modern e-commerce systems and why progress often feels harder than it should.

    When AI Meets Legacy Digital Commerce

    AI introduces new expectations around speed, relevance, and adaptability. Yet many digital commerce platforms are built on foundations designed for a different era. Years of development have resulted in fragmented environments, often based on microservices that once provided flexibility but now introduce complexity.

    As Jürgen explains, even small changes can trigger long delivery cycles. Engineering teams may need months to safely update systems, not because the ideas are difficult, but because the infrastructure has become fragile.

    Search and Personalisation Are Still Disconnected

    Search is where most e-commerce journeys begin, yet many platforms still rely on keyword-focused approaches that struggle to interpret intent. Customers expect results that reflect who they are, what they want, and why they’re searching. Delivering meaningful personalisation requires systems that combine signals, context, and ranking logic in real time. Without that, experiences remain generic even when data is available.

    Architecture Becomes the Bottleneck

    The conversation then turns to architecture. Traditional search stacks, particularly Lucene-based systems, often hit performance limits when vector operations and advanced ranking are introduced. These capabilities tend to be bolted on rather than designed into the core. Piotr highlights a deeper issue: fragmentation. Search, ranking, recommendation, feature stores, and inference engines often live in separate systems. Each integration adds latency, duplicates data, and slows innovation.
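
    Though the episode stays at the architectural level, the contrast Piotr draws can be made concrete with a small sketch. The Python below is purely illustrative (it is not Vespa's API, and every name and weight is hypothetical): it blends a keyword signal, a vector-similarity signal, and a personalisation signal into one ranking function evaluated in a single pass, instead of stitching results together across separate search, ranking, and recommendation services.

        # Illustrative single-pass hybrid ranking; names and weights are
        # hypothetical and this is not Vespa's query API.
        import math

        def cosine(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(y * y for y in b))
            return dot / (na * nb) if na and nb else 0.0

        def hybrid_score(doc, query_terms, query_vec, user_profile,
                         w_text=0.5, w_vec=0.3, w_user=0.2):
            # Keyword signal: fraction of query terms found in the document text.
            tokens = doc["text"].lower().split()
            text_score = sum(t in tokens for t in query_terms) / max(len(query_terms), 1)
            # Semantic signal: similarity between query and document embeddings.
            vec_score = cosine(query_vec, doc["embedding"])
            # Personalisation signal: the user's affinity for the document's category.
            user_score = user_profile.get(doc["category"], 0.0)
            return w_text * text_score + w_vec * vec_score + w_user * user_score

        def rank(docs, query_terms, query_vec, user_profile, k=10):
            # One sorted pass over candidates; no hops between separate services.
            return sorted(docs, key=lambda d: hybrid_score(d, query_terms, query_vec,
                                                           user_profile), reverse=True)[:k]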

    A More Grounded Path Forward

    This episode of Don’t Panic, It’s Just Data offers a calm, practical view of AI in digital commerce. Progress comes not from adding more complexity, but from simplifying how systems work together. When search, personalisation, and recommendation are designed as part of a cohesive whole, digital commerce platforms become easier to evolve and better equipped to serve both customers and the business.

    For more insights into modern search architectures and AI-native commerce platforms, visit Vespa.ai.

    Takeaways
    • Many teams see the potential...
    25 mins
  • The Modern CFO is the Product Owner of Data
    Jan 13 2026

    In a recent episode of the Don’t Panic, It’s Just Data podcast, recorded live in London, Shubhangi Dua, Podcast Producer and B2B Tech Journalist at EM360Tech, reports on the conversation. Guest speaker Pavel Doležal, CEO of Keboola, sits down with Vineta Bajaj, Group CFO of Holland & Barrett.

    They get specific about how modern finance leaders move faster: start with one governed source of truth, then layer on automation, and only then AI. They also explore how the CFO role is evolving, from reporting the numbers to owning the non-financial “whys” behind them.

    In the age of the AI boom, that shift turns every CFO into a product owner of data. But as Pavel Doležal puts it, without a clean, connected foundation, AI is just noise.

    According to Vineta Bajaj, Group CFO of Holland & Barrett, the role of the CFO has fundamentally changed. Today’s CFO must act as a product owner for data, not just owning the numbers but also determining how data is defined, structured, and used throughout the business.

    Finance and Data: A Complete Product

    Drawing on her experience with Ocado Group, Rohlik Group (one of the fastest-growing online grocery businesses in the world), and now Holland & Barrett, Bajaj points out that the same finance problems persist across organisations.

    Issues such as slow month-end closes, duplicated processes, delayed reporting, and limited decision-making speed are still common. These challenges are even greater in complex businesses that operate across multiple entities and countries. Differing charts of accounts, outsourced finance teams, and fragmented systems create added friction.

    Bajaj stresses the answer isn’t "add another tool". CFOs should treat finance and data as a complete product, one that serves the business as its customer. This requires understanding finance processes, clearly defining financial and non-financial data, and prioritising what has the greatest impact on the business.

    The Holland & Barrett CFO further emphasises that CFOs cannot pass this responsibility off to IT or BI teams. When data ownership is outside finance, it becomes someone else’s problem. However, when finance takes ownership of master data and its definitions while working closely with commercial and operational teams, it creates a single source of truth that the entire organisation can trust.

    Also Watch: The Real Future of Data Isn’t AI — It’s Contextual Automation

    How to Build the Foundation for Real-Time Financial Intelligence & AI

    Analytics, automation, and AI only work if the foundations are solid. Before adding AI assistants or real-time dashboards, CFOs must ensure that finance processes are clean, standardised, and automated. Poorly coded purchase orders, late journal entries, and inconsistent definitions can undermine even the most advanced technology.

    At Holland & Barrett, this perspective led Bajaj to create a dedicated data function within finance. It ensures accountability for master data, definitions, and governance. The aim is not just to speed up reporting, but to gain deeper insights by linking financial outcomes with non-financial factors such as foot traffic, pricing, customer behaviour, and external influences like weather.

    This integrated viewpoint allows finance teams to go beyond explaining variances and focus on the...

    23 mins
  • Responsible AI Starts with Responsible Data: Building Trust at Scale
    Dec 11 2025

    We live in a world where technology moves faster than most organisations can keep up. Every boardroom conversation, every team meeting, even casual watercooler chats now include discussions about AI. But here’s the truth: AI isn’t magic. Its promise is only as strong as the data that powers it. Without trust in your data, AI projects will be built on shaky ground.

    In this episode of Don’t Panic, It’s Just Data podcast, Amy Horowitz, Group Vice President of Solution Specialist Sales and Business Development at Informatica, joins moderator Kevin Petrie, VP of Research at BARC, to tackle one of the most pressing topics in enterprise technology today: the role of trusted data in driving responsible AI. Their discussion goes beyond buzzwords to focus on actionable insights for organisations aiming to scale AI with confidence.

    Why Responsible AI Begins with Data

    Amy opens the conversation with a simple but powerful observation: “No longer is it okay to just have okay data.” This sets the stage for understanding that AI’s potential is only as strong as the data that feeds it. Responsible AI isn’t just about implementing the latest algorithms; it’s about embedding ethical and governance principles into every stage of AI development, starting with data quality.

    Kevin and Amy emphasise that organisations must look at data not as a byproduct, but as a foundational asset. Without reliable, well-governed data, even the most advanced AI initiatives risk delivering inaccurate, biased, or ineffective outcomes.

    Defining Responsible AI and Data Governance

    Responsible AI is more than compliance or policy checkboxes. As Amy explains, it is a framework of principles that guide the design, development, deployment, and use of AI. At its core, it is about building trust, ensuring AI systems empower organisations and stakeholders while minimising unintended consequences. Responsible data governance is the practical arm of responsible AI. It involves establishing policies, controls, and processes to ensure that data is accurate, complete, consistent, and auditable.
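
    As a rough illustration of what “accurate, complete, consistent, and auditable” can look like in practice, the sketch below runs a few rule-based checks over a batch of customer records and logs each result for audit. The field names, reference values, and rules are assumptions made for the example; this is not Informatica's tooling.

        # Hypothetical rule-based data quality checks; field names, reference
        # values, and rules are illustrative, not Informatica's product.
        from datetime import datetime, timezone

        REQUIRED_FIELDS = ["customer_id", "email", "country"]
        ALLOWED_COUNTRIES = {"GB", "DE", "US"}  # assumed shared reference list

        def check_record(record):
            issues = []
            # Completeness: every required field must be present and non-empty.
            for field in REQUIRED_FIELDS:
                if not record.get(field):
                    issues.append(f"missing:{field}")
            # Accuracy (simple proxy): a syntactically plausible email address.
            if record.get("email", "").count("@") != 1:
                issues.append("invalid:email")
            # Consistency: country codes must come from the reference list.
            if record.get("country") not in ALLOWED_COUNTRIES:
                issues.append("inconsistent:country")
            return issues

        def audit_batch(records):
            # Auditability: every check run is logged with a timestamp and outcome.
            log = []
            for record in records:
                issues = check_record(record)
                log.append({
                    "checked_at": datetime.now(timezone.utc).isoformat(),
                    "customer_id": record.get("customer_id"),
                    "passed": not issues,
                    "issues": issues,
                })
            return log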

    Prioritise Data for Responsible AI

    The takeaway from this episode is clear: responsible AI starts with responsible data. For organisations looking to harness AI effectively:

    1. Invest in data quality and governance — it is the foundation of all AI initiatives.
    2. Embed ethical and legal principles in every stage of AI development.
    3. Enable collaboration across teams to ensure transparency, accountability, and usability.
    4. Start small, prove value, and scale — responsible AI is built step by step.

    Amy Horowitz’s insight resonates beyond the tech team: “Everyone’s ready for AI — except their data.” It’s a reminder that AI success begins not with the algorithms, but with the trustworthiness and governance of the data powering them.

    For more insights, visit Informatica.

    Takeaways
    • AI is only as good as its data inputs.
    • Data quality has become the number one obstacle to AI success.
    • Organisations must start small and find use cases for data governance.
    • Hallucinations in AI models highlight the need for vigilant
    26 mins
  • The Missing Piece: How Data and AI Impact Management Unlocks Business Value
    Dec 11 2025

    “What is the true value of our data and AI initiatives?”

    Too often, we pour all our energy into tools, processes, and outputs, but forget to ask how what we build actually makes a difference. For enterprises, this means looking beyond AI models and dashboards to see how our data drives real, measurable impact. Understanding the difference between output and outcome is what separates activity from transformation.

    In this episode of Don’t Panic, It’s Just Data, host Doug Laney and Nadiem von Heydebrand, CEO and Co-founder of Mindfuel, explore how organisations can turn data and AI efforts into actionable business outcomes. They discuss the concept of the “value layer”, a framework connecting data initiatives to business needs, emphasising the importance of understanding business problems before developing solutions.

    Nadiem stresses that prioritising initiatives and fostering strong collaboration between business and data teams are critical to unlocking maximum value from data and AI efforts.

    Why Data and AI Impact Management Matters

    Many organisations are investing heavily in data and AI, but turning these investments into real business value remains a challenge. This is because a critical gap exists between technical execution and business outcomes. Data and AI teams work on initiatives without first clarifying what business problems they're solving or how success will be measured.

    Data and AI Impact Management bridges this gap by establishing the “value layer" between business strategy and technical platforms. This approach starts with structured demand management for use cases, enables systematic prioritisation based on actual value potential, and tracks initiatives throughout their lifecycle to ensure they deliver impact against business goals. This shift, from building solutions in search of problems to solving qualified business problems with purpose-built solutions, transforms data and AI teams from technical support functions into strategic partners who deliver value, stronger strategic alignment, and lasting competitive advantage.
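
    As a hedged sketch of how such a value layer might prioritise incoming demand, the example below scores proposed use cases on value potential, strategic fit, and delivery effort, then sorts the backlog accordingly. The fields, weights, and numbers are invented for illustration and are not Mindfuel's actual framework.

        # Hypothetical use-case prioritisation for a "value layer"; fields,
        # weights, and example numbers are illustrative, not Mindfuel's framework.
        from dataclasses import dataclass

        @dataclass
        class UseCase:
            name: str
            value_potential: float   # estimated business value, 0-10
            strategic_fit: float     # alignment with business goals, 0-10
            effort: float            # delivery effort, 0-10 (higher = harder)
            status: str = "proposed" # tracked through the initiative lifecycle

        def priority(uc, w_value=0.5, w_fit=0.3, w_effort=0.2):
            # Higher value and fit raise priority; higher effort lowers it.
            return w_value * uc.value_potential + w_fit * uc.strategic_fit - w_effort * uc.effort

        backlog = [
            UseCase("Churn prediction", value_potential=8, strategic_fit=9, effort=6),
            UseCase("Invoice OCR", value_potential=5, strategic_fit=4, effort=3),
            UseCase("Store-level demand forecast", value_potential=9, strategic_fit=8, effort=7),
        ]
        for uc in sorted(backlog, key=priority, reverse=True):
            print(f"{uc.name}: priority {priority(uc):.1f}, status {uc.status}")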

    Nadiem says, “Applying a product mindset within data initiatives is key, and it's the foundational effort to be able to drive value.”

    He also notes that not every use case delivers direct financial impact, and that the value layer helps clarify demand, manage use cases effectively, and uncover each initiative’s business value.

    For more insights and solutions, visit Mindfuel.

    Takeaways
    • Organisations struggle to connect data initiatives to business outcomes.
    • The value layer is essential for linking data to business demands.
    • Understanding the actual business problem is crucial for success.
    • Value management encompasses the entire lifecycle of initiatives.
    • A product mindset helps focus on outcomes rather than outputs.
    • Not all data use cases have direct dollar values.
    • Data and AI impact management creates transparency for data teams.
    • Establishing a product mindset is key for data products.
    • Connecting processes to the operating model enhances effectiveness.
    • Collaboration between business and data teams is vital for unlocking value.

    Chapters

    00:31...

    23 mins
  • The AI-Ready Data Core: Creating the Foundation for Intelligent Systems
    Dec 9 2025

    As AI becomes a central pillar of business decision-making, enterprises face a new challenge: making their data AI-ready. It’s no longer enough to collect and digitise information. Data must be structured, contextualised, discoverable, and usable, both by humans and by intelligent systems.

    AI can only deliver if your data is truly ready, but most enterprises are drowning in fragmented, incomplete, or slow-to-update data. In this episode of Don't Panic, It's Just Data, host Doug Laney and Sushant Rai, Vice President of Product for AI and Data Strategy at Reltio, explore how modern data unification strategies are changing enterprises, enabling AI to deliver faster, more reliable insights. They focus on the shift from traditional Master Data Management (MDM) to next-generation AI-ready data cores, uncovering the risks of fragmented data and the strategies to overcome them.

    Why AI-Ready Data Matters

    AI, especially large language models (LLMs), is changing how people interact with data. Analysts, executives, and frontline teams now expect natural language queries and instant, actionable insights.

    Sushant explains:

    "AI performs at its best when it has full context, empowered with the right data. This allows AI agents to make decisions and take actions on behalf of your business."

    When you embed intelligence into your data layer, AI can help you manage and scale your data without drowning your teams in manual work. This only works if your data is structured, clean, governed, and constantly updated: everything that makes it truly AI-ready.

    The Data Scale Challenge

    The volume of data generated every day is staggering.

    As Sushant notes:

    "The amount of data getting generated every single day is so massive that there’s no way to keep up without AI. Even the largest organizations, with massive data stewardship teams, can’t catch up manually."

    This gap is driving change in modern data platforms, where AI automates stewardship, enriches data continuously, detects anomalies, and maintains quality in real time.
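
    One concrete form this can take is automated anomaly detection on values as they arrive. The sketch below flags a daily metric that drifts far outside its recent history using a simple rolling z-score; the window, threshold, and data are assumptions for illustration, not Reltio's implementation.

        # Minimal anomaly check for continuously arriving values (e.g. daily
        # order counts). Window, threshold, and data are assumptions, not
        # Reltio's implementation.
        from collections import deque
        import statistics

        class RollingAnomalyDetector:
            def __init__(self, window=30, z_threshold=3.0, min_history=10):
                self.history = deque(maxlen=window)
                self.z_threshold = z_threshold
                self.min_history = min_history

            def observe(self, value):
                # Flag the value if it sits far outside the recent distribution.
                is_anomaly = False
                if len(self.history) >= self.min_history:
                    mean = statistics.fmean(self.history)
                    stdev = statistics.pstdev(self.history) or 1e-9
                    is_anomaly = abs(value - mean) / stdev > self.z_threshold
                self.history.append(value)
                return is_anomaly

        detector = RollingAnomalyDetector()
        daily_orders = [120, 118, 131, 125, 122, 124, 119, 127, 123, 121, 0, 126]
        for day, orders in enumerate(daily_orders):
            if detector.observe(orders):
                print(f"day {day}: order count {orders} flagged for review")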

    Want to learn more about modern data unification and AI-ready platforms? Visit Reltio.com for insights, resources, and case studies.

    Takeaways
    • Data unification provides a trusted, real-time view of key business elements.
    • Organizations must balance speed and trust in data management.
    • Classic MDM is evolving into modern data unification platforms.
    • Real-time data access is crucial for AI and analytics.
    • AI can enhance data quality and governance processes.
    • Successful data initiatives require clear business outcomes and ownership.
    • Data unification should be viewed as a business platform, not just an IT project.
    • AI agents will play a significant role in automating data governance.
    • Organizations need to focus on both structured and unstructured data.
    • The future of data management involves continuous unification and...
    26 mins
  • From Data Steward to AI Strategist: Redefining the Role of the CDO in the Agentic Era
    Nov 18 2025

    While the role of the chief data officer (CDO) was traditionally focused on regulatory compliance, it has now expanded to enabling the consistent and effective use of data across organizations to improve business outcomes. One of the most effective ways for CDOs to demonstrate their value is by developing a data strategy that is closely aligned with business goals, processes, and outcomes.

    In the latest episode of Tech Transformed, host Kevin Petrie, VP of Research at BARC, speaks with Brett Roscoe, Senior Vice President and GM of Cloud Data Governance and Cloud Ops at Informatica, about the evolving role of CDOs. Their conversation explores how CDOs are transitioning from data stewards to strategic leaders, the importance of data governance, and the challenges of managing unstructured data.

    The Role of the CDO in the Agentic Era

    As Roscoe notes, “CDOs are now pivotal in AI strategy,” reflecting how the role has grown from compliance oversight to guiding enterprise initiatives that directly support organizational goals.

    In this day and age, CDOs are tasked with ensuring that data is both accessible and reliable, providing a foundation for informed decision-making across business units. This includes establishing policies for data quality, access, and governance, which Roscoe highlights as essential: “data governance is foundational for AI.” At the same time, unstructured data, ranging from documents and emails to multimedia, adds complexity that requires careful management to make it useful while minimizing risk. “Unstructured data presents challenges,” he adds, emphasizing the need for structured oversight to fully leverage these assets.

    AI Strategy

    Although technology and analytics are evolving rapidly, the CDO’s role in aligning data with strategic initiatives is critical. By connecting data assets to business processes, CDOs help ensure that initiatives are informed by reliable, well-governed information and can deliver measurable results.

    For anyone looking to understand the evolving responsibilities of CDOs, the importance of governance, and strategies for handling unstructured data, this episode of Tech Transformed provides a detailed and practical discussion.

    For more insights, follow Informatica:

    • X: @informatica
    • Instagram: @informaticacorp
    • Facebook: https://www.facebook.com/InformaticaLLC/
    • LinkedIn: https://www.linkedin.com/company/informatica/

    Takeaways
    • CDOs are now central to shaping AI strategies and driving business growth.
    • Robust data governance is crucial for the successful deployment of AI technologies.
    • Unstructured data presents unique challenges and opportunities for AI development.
    • A balance between centralized governance and federated operations is essential.
    • Securing executive...
    30 mins
  • Is Your Financial Reporting Ready for the Future?
    Nov 6 2025

    The challenge all organisations, big and small, face is answering one key question and implementing solutions to address it: how can finance and accounting teams work faster, smarter, and more accurately?

    In the recent episode of the Don’t Panic It’s Just Data podcast, host Scott Taylor, The Data Whisperer and Principal Consultant at MetaMeta Consulting, speaks with Kevin Gibson, CPA and Principal Solutions Engineer at insightsoftware. They talk about the constantly changing nature of financial reporting.

    Additionally, they discuss the pros and cons of modern financial reporting and the importance of connecting financial data with familiar tools like Excel. The conversation also touches on the future of financial reporting technology and the need for organisations to adapt to changing data access needs.

    Uncertainty in a Data-Driven World

    “With all this uncertainty, companies are being asked to look at their data in different ways. They want to pivot it, slice it, and dice it,” Gibson tells Taylor, encapsulating the theme of this episode. “They’re being told to do more with the data — what does it mean, how do we read it, how do we understand it, how do we analyse it?”

    The issue is that, as enterprises invest in digital transformation, finance teams struggle most with limited access to the data they need to support their analysis.

    “The ideal state,” Gibson adds, “would be: I can get what I want, when I want, and how I want it — without asking questions. But let’s be honest — that doesn’t exist today.”

    However, the good news is that the data exists, Gibson says. The ugly part is that organisations can’t get to it. Many of these data accessibility issues stem from cloud migration.

    “When you move your data to the cloud, you think: it’s cheaper, it’s more secure, it’s easier to maintain. But here’s the problem: you don’t control it anymore. Some cloud providers make access difficult or costly. So finance teams feel stuck,” he explains.

    Also Watch: Stop Fighting Excel: How to Turn Your Spreadsheets into a Real-Time Reporting Powerhouse?

    Real-Time Access on Excel

    For decades, finance professionals have relied on Excel, which Gibson refers to as the “largest data warehouse in the world.” “There are 1.1 billion users of Excel today,” he says. “And let’s be honest, I haven’t met an accountant yet who says they hate it.”

    Finance thrives in Excel, but IT often views it as a risk, leading to a constant back-and-forth between usability and control. Gibson believes the solution is to equip both sides, finance and IT, with real-time, governed data inside Excel.

    That’s where insightsoftware comes in. “We can connect directly to these systems and give finance teams back their real-time access — not just to pieces of data, but all of it,” says Gibson. “Literally every piece of data can be accessed.”

    With tools like Spreadsheet Server, finance professionals can work in Excel — their “comfort food,” as Gibson calls it — while drawing directly from live ERP data in the cloud.

    “We give them insight — that’s what our software does. It gives them visibility into their data. Excel isn’t going away, and our job is to make it work even better.”

    To learn more, watch or listen to the podcast...

    20 mins
  • How enterprises can enable the Agentic AI Lakehouse on Apache Iceberg
    Oct 29 2025

    "A flaw of warehouses is that you need to move all your data into them so you can keep it going, and for a lot of organisations that's a big hassle,” says Will Martin, EMEA Evangelist at Dremio. “It can take a long time, it can be expensive, and you ultimately can end up ripping up processes that are there."

    In this episode of the Don’t Panic It’s Just Data podcast, recorded live at Big Data LDN (BDL) 2025, Will Martin, EMEA Evangelist at Dremio, joins Shubhangi Dua, Podcast Host and Tech Journalist at EM360Tech. They talk about how enterprises can enable the Agentic AI Lakehouse on Apache Iceberg and why query performance is critical for efficient data analysis.

    "If you have a data silo, it exists for a reason—something's feeding information to it. You usually have other processes feeding off of it. So if you shift all that to a warehouse, it disrupts a lot of your business," Martin tells Dua.

    This is where a lakehouse comes into play. Organisations can federate access through a lakehouse approach, centralising access to data while keeping it in its original location. Such a system helps teams get started quickly.

    In terms of data quality, if you access everything from one location, even with separate data silos, you can see all your data. This visibility allows you to identify issues, address them, and enhance your data quality. That’s beneficial for AI, too, Martin explains.
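
    As a minimal sketch of what querying data in place can look like on Apache Iceberg, the Python below reads an Iceberg table directly from where it lives instead of copying it into a warehouse. It assumes a PyIceberg catalog is already configured; the catalog, namespace, table, and column names are hypothetical, the exact calls depend on the PyIceberg version installed, and this illustrates the pattern rather than Dremio's product.

        # Read an Apache Iceberg table in place instead of moving it into a
        # warehouse. Assumes `pip install pyiceberg` and a catalog named
        # "lakehouse" configured in .pyiceberg.yaml; all names are hypothetical.
        from pyiceberg.catalog import load_catalog

        catalog = load_catalog("lakehouse")           # points at existing object storage
        table = catalog.load_table("sales.orders")    # hypothetical namespace.table

        # Scan only the columns needed and materialise them as Arrow for analysis;
        # the underlying files stay where they are.
        scan = table.scan(selected_fields=("order_id", "amount", "country"))
        arrow_table = scan.to_arrow()
        print(arrow_table.num_rows, "rows read in place")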

    Lakehouse Key to AI Infrastructure?

    The lakehouse has been recognised for unifying and simplifying governance. An essential feature of a lakehouse is the data catalogue, which helps an organisation browse and find information. It also secures access and manages permissions.

    "You can access in one place, but you can do all your security and permissions in one place rather than all these individual systems, which is great if you work in IT,” reflects Martin. "There are some drawbacks to lakehouses. So, a big component of a lakehouse is metadata. It can be quite big, and it needs managing. Certain companies and vendors are trying to deal with that."

    With AI and AI agents, optimising analytics on a lakehouse has become even harder. However, this is improving as technical barriers disappear. Martin explains that anyone can now prompt a question; for instance, an enterprise CEO could ask questions about the data and demand justifications directly.

    In the past, a request would have to be submitted, and a data scientist or engineer would create the dataset and hand it over. Now, engineers’ roles have shifted towards optimisation: they help queries run smoothly and ensure tables are efficient, work that agents cannot take on.

    Also Listen: Dremio: The State of the Data Lakehouse

    Optimise Lakehouse

    Vendors such as

    15 mins