The AI Briefing

85% of AI leaders cite data quality as their biggest challenge, yet most initiatives launch without addressing foundational data problems. Tom Barber reveals the uncomfortable conversation your AI team is avoiding.


The Data Quality Crisis Killing 85% of AI Projects

Key Statistics

  • 85% of AI leaders cite data quality as their most significant challenge (KPMG 2025 AI Quarterly Poll)
  • 77% of organizations lack essential data and AI security practices (Accenture State of Cybersecurity Resilience 2025)
  • 72% of CEOs view proprietary data as key to Gen AI value (IBM 2025 CEO Study)
  • 50% of CEOs acknowledge significant data challenges from rushed investments
  • 30% of Gen AI projects predicted to be abandoned after proof of concept (Gartner)

Three Critical Questions for Your AI Initiative

1. Single Source of Truth

  • Do we have unified data for AI models to consume?
  • Are AI initiatives using centralized data warehouses or convenient silos?
  • How do conflicting data versions affect AI outputs?

2. Data Quality Ownership

  • Who owns data quality in our organization?
  • Do they have authority to block deployments?
  • Was data quality specifically signed off for your last AI launch?

3. Data Lineage and Traceability

  • Can we trace AI decisions back to source data?
  • How do we debug AI failures without lineage?
  • Are we prepared for EU AI Act requirements (phasing in from February 2025)?

The Real Cost of Poor Data Governance

  • Organizations skip governance → hit problems at scale → abandon initiatives → repeat cycle
  • Tech debt compounds from rushed implementations
  • Strong data foundations enable faster AI scaling

Action Items for This Week

  1. Ask for data quality scores on your highest priority AI initiative
  2. Identify who owns data quality decisions and their authority level
  3. Test traceability: can you track wrong outputs to source data?
  4. Ensure data governance is a budget line item, not buried assumption
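For technically inclined readers, action item 1 can be made concrete. The sketch below is purely illustrative and not from the episode: a naive "data quality score" (an assumed metric, here just the average of completeness and row uniqueness) over a pandas-style table.

```python
import pandas as pd

def data_quality_score(df: pd.DataFrame) -> float:
    """Naive data quality score in [0, 1]: the average of
    completeness (share of non-null cells) and
    uniqueness (share of non-duplicate rows).
    Illustrative only -- real scoring would also cover
    validity, freshness, and consistency."""
    total_cells = df.shape[0] * df.shape[1]
    completeness = df.notna().sum().sum() / total_cells
    uniqueness = 1 - df.duplicated().sum() / len(df)
    return round((completeness + uniqueness) / 2, 3)

# Hypothetical table with one missing value and one duplicate row.
records = pd.DataFrame({
    "customer_id": [1, 2, 2, 4],
    "region": ["EU", "US", "US", None],
})
print(data_quality_score(records))  # 0.812 for this table
```

The point of even a crude score like this is that it gives the "ask for data quality scores" conversation a number to start from, rather than a shrug.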

Key Frameworks Mentioned

  • Accenture: Data security, lineage, quality, and compliance
  • PwC: Board-level data governance priority
  • KPMG: Integrated AI and data governance under single umbrella

Research Sources

  • KPMG 2025 AI Quarterly Poll Survey
  • Accenture State of Cybersecurity Resilience 2025
  • IBM 2025 CEO Study
  • Drexel University and Precisely Study
  • PwC Research on AI Data Governance
  • Gartner AI Project Predictions
  • Forrester IT Landscape Analysis
  • EU AI Act Requirements

Chapters

  • 0:00 - Introduction: The Data Quality Crisis
  • 0:29 - Why 85% of AI Leaders Struggle with Data Quality
  • 2:12 - How AI Makes Data Problems Worse
  • 2:56 - Three Critical Questions Every Organization Must Ask
  • 4:45 - The Real Cost of Skipping Data Governance
  • 5:34 - Reframing Data Governance as an Accelerant
  • 6:16 - What Good Data Governance Looks Like
  • 7:33 - Action Steps You Can Take This Week

What is The AI Briefing?

The AI Briefing is your 5-minute daily intelligence report on AI in the workplace. Designed for busy corporate leaders, we distill the latest news, emerging agentic tools, and strategic insights into a quick, actionable briefing. No fluff, no jargon overload—just the AI knowledge you need to lead confidently in an automated world.

Hi folks, welcome to today's AI briefing.

I'm a slightly ill Tom Barber and I'm coming to you today from Warsaw in Poland.

And today we're going to talk about the data quality conversation your AI team doesn't
really want you to have.

So welcome to the AI briefing.

And here is where we dig into small topics and soundbites that help cut through the noise as to what is important and what is not in the year of 2026.

So let's get into it.

Okay, KPMG's 2025 AI Quarterly Poll survey found that 85% of leaders cite data quality as their most significant challenge in AI strategies, which may not come as a huge surprise to you all.

Not compute power, not talent, not budget, but data quality.

Yet most AI initiatives launch without addressing the foundational data problems that will
eventually undermine them.

This is a conversation that gets skipped because it's unsexy, it's expensive, and nobody wants to be the person who slows down the AI rollout.

Of course, looking at the scale of the problem, Accenture's State of Cybersecurity Resilience 2025 report said 77% of organizations lack the essential data and AI security practices needed to safeguard critical business models, data pipelines, and cloud infrastructure.

IBM's 2025 CEO study found that 72% of CEOs view their organization's proprietary data as key to unlocking gen AI value.

Probably not a huge surprise.

But 50% of those acknowledged that the pace of recent investments has left the organization with a significant data challenge.

A Drexel University and Precisely study found that leaders lack confidence that their enterprise data is ready to make the leap from human-first to AI-only decisioning.

And that translates into executives who know that the data is fuel, but they're not confident that the fuel is clean.

But this problem has been around since the dawn of time.

It's just we choose to ignore it for reasons I do not understand.

So why, of course, does AI make data problems worse and not better?

Traditional software fails predictably.

Bad input, error message, process stops.

AI quite often fails quietly.

Bad input, confident-sounding wrong answer.

Decisions get made on flawed foundations.

PwC research notes that AI raises questions about data lineage, sources and usage rights
that traditional systems have never had to answer.

The complexity compounds, of course, with AI systems often using and processing more
diverse datasets than traditional systems, making governance exponentially harder.

Poor data quality doesn't just produce wrong answers, it can embed bias, create compliance
violations and erode trust in AI outputs across the organization.

So this leads me to ask a few different questions.

Question number one, do we have a single source of truth for data for our AI models to
consume?

Every organization should ask this question and most organizations have scattered data
across systems that don't talk to each other.

AI models trained on conflicting versions of the same data will produce conflicting
outputs.

The question isn't whether you have a data warehouse, it's whether your AI initiatives are actually using it or pulling from whatever source was convenient at the time.

Research identifies siloed repositories as a key governance failure: different departments creating conflicting AI policies and data access controls without coordination.

Question number two, who owns data quality in our organization and do they have the
authority to block a deployment?

Because data governance often falls into a gap between IT, compliance, data science and
business leadership.

The same research identifies unclear ownership as a core challenge: data stewardship responsibilities scattered across teams without accountability.

So if your data quality owner can raise concerns but not stop a launch, you don't really have data governance, you just have someone raising concerns.

So the test is: when your last AI initiative went live, did anyone sign off specifically on data quality, or was it just assumed?

Question number three is: when our AI system gives us an answer, can we trace it back to the data that caused it?

This is data lineage: the ability to track where data came from, how it was transformed, and how it ended up in your AI model.

Without lineage, debugging AI failures becomes guesswork.

Regulatory requirements increasingly demand this traceability.

The EU AI Act began phasing in obligations from February 2025.

If you can't explain why your AI made a decision, you can't defend it to regulators,
customers, or your own board.

So, the real cost of skipping this work: Gartner predicts that at least 30% of Gen AI projects will be abandoned after proof of concept by the end of 2025. Obviously we're now in 2026, so we'll find out.

Poor data quality is listed as a primary cause, alongside escalating costs and unclear business value.

Forrester has warned that impatience with AI ROI could lead to premature cutbacks, potentially hindering long-term benefits.

And so the pattern is that organizations skip data governance to move faster, hit problems at scale, abandon the initiative, and then start over with a new pilot that also skips data governance to move faster.

The cycle repeats.

The tech debt from rushed AI implementations compounds; Forrester highlights increasingly intricate IT landscapes as a major barrier.

So you have to be able to reframe data governance as an accelerant.

The conventional view is data governance slows things down, adds bureaucracy and creates
friction.

The reality, of course, is that organizations with strong data foundations scale AI faster because they don't hit the same walls.

KPMG research positions modern data governance as integrating AI and data governance under a single umbrella, enabling complete transparency, enforceable policies, and the elimination of duplicate datasets.

PwC's guidance is to elevate data governance to a board-level priority, making it central to your AI strategy and not an afterthought.

The organizations treating data governance as a strategic investment rather than a compliance checkbox are the ones whose pilots actually become products.

So what does good look like?

Accenture's framework identifies the key components: data security, data lineage, data quality, and compliance.

PwC recommends that leaders rationalize data sources to reduce silos, establish a single source of truth for high-value data assets, and standardize data definitions and access protocols across business units.

The goal isn't perfect data.

It's knowing when your data is imperfect and governing AI use accordingly.

Some AI use cases can tolerate lower data quality; others cannot.

The governance framework should distinguish between them.

So, going back to the start of this podcast: the conversation your AI team doesn't want you to have.

AI teams are under pressure to show progress, demonstrate innovation and hit deployment
milestones.

Raising data quality concerns feels like being the person who slows everything down.

But MIT research on pilot failures found data quality consistently among the root causes of initiatives that stalled.

The conversation that feels uncomfortable now is far less uncomfortable than explaining to
the board why a high profile AI initiative failed 18 months in.

Executives can make space for this conversation by asking these three questions directly and making clear that honest answers are valued more than optimistic timelines.

So what can you do about it this week?

Ask your AI lead or CTO directly: what is the data quality score for the training data in our highest-priority AI initiative?

And if there isn't a clear answer, that's your answer.

Identify who in your organization owns data quality decisions and whether that person has
the authority and resources to enforce standards.

Review your most advanced AI pilot and ask: if this produces a wrong output next month, can we trace it back to the specific data that caused it?

And if you're planning AI investments for the next budget cycle, ensure data governance is a line item, not an assumption buried in an AI project budget.

So, a closing thought: the organizations successfully scaling AI aren't the ones with the most sophisticated models, they're the ones who did the unglamorous work on the data first.

Data quality isn't a technical problem to delegate, it's a leadership question about whether your organization is building AI on a solid foundation or on sand.

The 85% of leaders citing data quality as their top challenge know this matters.

The question is whether knowing translates into action before the next pilot stalls.

I have been Tom Barber.

Hopefully this has been useful for you.

This is the AI Briefing.

Make sure you subscribe to this podcast for more hopefully useful insight.

If you would like to know something specific, please do get in touch.

I am happy to address whatever is useful for people.

For now, thank you for tuning in.

I'll be back tomorrow with some more AI Briefing content.