
Why bad data didn't matter until now

A conversation with Qlik's Brendan Grady on consequence management in the agentic era

YouTube | Spotify | Apple Podcasts | Amazon Music

The Data Faces Podcast on location with Brendan Grady, EVP and GM of Analytics & AI, Qlik

For 25 years, data quality has been everyone’s problem and nobody’s priority. For some, it was an IT problem; for others, a business problem. But most of the time, fixing it at scale was simply ignored. What would you do if a number in a spreadsheet looked off? You’d fix it and move on with your day. The same went for questionable metrics on dashboards. In a manual world, we could tuck away the cost of bad data because the pace of business was slower and there were no real consequences for getting it wrong.

Those old habits break down when you hand autonomy to an AI agent. An agent doesn’t pause to gut-check a suspicious number; it doesn’t really care. It takes the data at face value, makes a decision, feeds that decision into the next step, and keeps going. You might be six or seven steps down the line before anyone realizes the foundation was wrong. And by then, the damage has compounded in ways that a quick spreadsheet fix can’t undo.
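To make the compounding concrete, here is a back-of-the-envelope sketch. The 95% per-step reliability is an assumed, illustrative figure (not one from the conversation), but it shows why a chain of autonomous steps is so much more fragile than a single manual check:

```python
# Illustrative only: how reliability decays across an agent chain.
# The 0.95 per-step figure is an assumption for the sake of the example.
def chain_reliability(per_step: float, steps: int) -> float:
    """Probability that every step in the chain acts on sound data,
    assuming steps fail independently."""
    return per_step ** steps

print(round(chain_reliability(0.95, 1), 2))  # one manual check: 0.95
print(round(chain_reliability(0.95, 7), 2))  # seven autonomous steps: 0.7
```

At seven steps, even 95%-reliable links leave you wrong roughly three times in ten, which is the intuition behind "the damage compounds."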

I sat down with Brendan Grady, EVP and General Manager of Analytics and AI at Qlik, at Qlik Connect 2026 in Orlando to discuss why the stakes around data quality have changed, where enterprise-agentic adoption stands today, and what data professionals should be thinking about.

“In today’s world where there may be an agent running around using said data and getting it wrong, the consequences of getting it wrong are going to be catastrophic.”

Brendan Grady, EVP and GM of Analytics & AI, Qlik

About Brendan Grady

Brendan Grady is EVP and General Manager of the Analytics and AI Business Unit at Qlik, where he leads product management, product design, R&D, and go-to-market strategy for the company’s data integration, quality, and analytics platform. Before Qlik, he held senior GTM roles at IBM, where he led worldwide digital sales for Watson Analytics and managed the Cognos portfolio. He joined Qlik seven years ago after repeatedly losing deals to its analytics engine, and decided to find out why. And well before all of that, he delivered the Sound of Music tour in Salzburg, Austria, over 300 times.

In this episode, we discuss:

- Why data quality was never fixed and why that matters now

- Where enterprise agentic AI adoption actually stands

- Trust scores and the problem with feeding spreadsheets to LLMs

- The shift from dashboards to decision intelligence

- Open standards, MCP, and why there’s no “One Ring to rule them all”

- Advice for data professionals navigating the AI transition

The consequence management problem

Grady framed the data quality conversation in a way I hadn’t heard before. He called it consequence management. For decades, organizations tolerated bad data because the consequences of getting it wrong were manageable. A field was incorrect in a report? Someone caught it, fixed it, and everyone moved on, knowing there would be another fire drill tomorrow. The recovery cost was low enough that nobody prioritized prevention, and if they did, they rarely had the organizational backing to make any meaningful change.

“Is it IT’s job? Is it the business’s job? Is it both, or is it nobody’s job? For most companies, it’s been nobody’s job.”

Brendan Grady, EVP and GM of Analytics & AI, Qlik

BARC’s research confirms this pattern. As Shawn Rogers discussed on the Data Faces Podcast, data quality remains the top challenge for organizations trying to mature their analytics and AI capabilities.[1] That organizational ambiguity persisted because the stakes allowed it. Grady pointed to real examples. A major airline took a significant hit to its market cap because its sentiment data was wrong and decisions were made on flawed analysis. Two decades ago, a single field in a spreadsheet contributed to a financial crisis that rippled through an entire market. These weren’t hypothetical scenarios. They happened because nobody owned the problem and the systems in place couldn’t detect the errors before they cascaded.

In the agentic era, the failure mode is different. A human looking at a dashboard might notice something feels off and investigate. An agent won’t. It will take the data, reason through it, make a decision, and pass that decision to the next agent in the chain, often without any confidence bounds or trust scores.

The point isn’t that agents are dangerous. The point is that autonomous systems need trusted data underneath them before they’re given the authority to act. Without that bedrock, every step an agent takes amplifies whatever error was baked into the starting point. As practitioners, we know this. So why hasn’t it been fixed yet?


“Prior to stage zero”

I asked Brendan where enterprise agentic adoption actually stands. His answer was honest. “What’s prior to stage zero?” he said. “I mean, there are customers that are trying things out there, surely. But from a large-scale production standpoint, we’re in the early days.”

Customers are experimenting with low-risk use cases. They’re testing agents in controlled environments where a mistake won’t damage the business. But production-grade agents making real decisions in real business processes? That’s rare. And the blocker, according to Brendan, isn’t the technology. It’s the data.

Gartner projects that by 2027, 70% of organizations will adopt modern data quality solutions to support AI adoption and digital business initiatives.[2] That projection tells you where the market is today. If 70% will need to adopt these solutions by 2027, most organizations don’t have them yet. The ambition around agentic AI is running well ahead of the data infrastructure required to support it. Shane Murray made a similar argument on the Data Faces Podcast earlier this year, noting that actionable data strategies beat endless planning when it comes to AI readiness.[3]

Brendan also raised a practical question that every data leader should be asking. The LLM landscape is shifting constantly. Six months ago it was OpenAI. Today, Claude is gaining traction. Tomorrow the market may have moved on to something new. His advice was to work with vendors that approach this from an open standards perspective, supporting multiple LLMs rather than forcing a single choice. The technology will keep changing, but the data underneath it is what has to hold steady.

“The internet took 10 years, 20 years, 30 years to get going. We’re a year and a half in.”

Brendan Grady, EVP and GM of Analytics & AI, Qlik


Trust as the missing layer

One of the more revealing moments in our conversation came when Brendan talked about what happens when you feed structured data into an LLM. I’ve experienced this myself. You upload a spreadsheet, ask it to calculate something, and the answer comes back looking polished and confident. The formatting is clean, the language is professional, and unbeknownst to you, the numbers are wrong.

“It’s really pretty, right? The answer is amazing. Looks great. Totally BS. And the next thing you know, you’re showing up to the board with all incorrect numbers.”

Brendan Grady, EVP and GM of Analytics & AI, Qlik

Qlik’s Trust Score for AI is designed to give decision-makers a quantifiable measure of whether their data is valid, fresh, and representative before it reaches an agent or an LLM.[4] Instead of hoping your data is accurate, you can see a score that tells you it’s 90% trustworthy or 80% or something that should give you pause.
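How such a score is computed internally isn’t something the conversation covered, so the following is only a hypothetical sketch of the general idea: score the data along a few dimensions (validity, freshness, representativeness) and roll them up into one number a decision-maker can react to. The dimension names mirror those described above; the weights and formula are my own illustration, not Qlik’s actual method:

```python
# Hypothetical composite trust score; weights and formula are illustrative
# assumptions, NOT Qlik's actual Trust Score computation.
WEIGHTS = {"validity": 0.4, "freshness": 0.3, "representativeness": 0.3}

def trust_score(dimensions: dict[str, float]) -> float:
    """Weighted average of per-dimension scores, each in [0, 1]."""
    return sum(WEIGHTS[name] * score for name, score in dimensions.items())

score = trust_score(
    {"validity": 0.9, "freshness": 0.8, "representativeness": 0.9}
)
print(f"{score:.0%}")  # 87% -- a number that might reasonably give you pause
```

The value of the pattern is less the arithmetic than the contract: a single, explainable number that gates whether data is fit to hand to an agent at all.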

The other piece Brendan emphasized was intent detection. When someone asks a question of an LLM, the literal question and the actual intent are often different things. I ran into this recently when I asked an AI assistant to analyze several websites. It came back with a confident analysis, but when I pressed it, it admitted it had never actually visited the sites. Qlik is investing in understanding what the user is really trying to accomplish so the system can route to the right data and the right engine rather than letting an LLM fabricate its way to an answer.

The combination of trust scores and intent detection reflects a broader principle. Before you give an agent the authority to act on data, you need to know that the data is sound and that the system understands what you’re actually asking. Qlik’s track record in this space is long. The company has been named a Leader in the Gartner Magic Quadrant for Augmented Data Quality Solutions for seven consecutive years, most recently in 2026.[5]

“Dashboards are dead. Long live dashboards.”

When Brendan declared that dashboards are dead, I thought I had a scoop and I made sure the audience heard it. He laughed and then walked it back with the nuance that matters. Dashboards as a destination are going away, but the data inside them and the decisions they inform are more important than ever.

Brendan described how his own workflow has changed. He used to ask his analytics tools for information about business performance. Now he asks a different question. “Tell me about my business and what you think I should do.” That shift from information retrieval to decision recommendation is what Qlik means by decision intelligence, and it’s powered by two things working together.

The first is Qlik’s analytics engine, which finds associations and relationships in data that other approaches miss. Instead of running a predefined query to answer a specific question, the engine surfaces connections you didn’t know existed. Brendan called these the unknown unknowns. In an agentic context, that capability becomes even more valuable because it allows agents to explore paths and relationships that a standard SQL query would never surface.

“In the agentic world, we’re serving this up to help agents understand that there’s a relationship here that you need to go explore before you take action. That is extremely powerful.”

Brendan Grady, EVP and GM of Analytics & AI, Qlik

The second is openness. Qlik launched its MCP server in February 2026, implementing the open Model Context Protocol standard to let third-party AI assistants access Qlik’s analytical capabilities with governance built in.[6] “There’s never going to be One Ring to rule them all,” he said. People want to work in the tools they’re comfortable with, whether that’s Claude, Gemini, ChatGPT, or something that doesn’t exist yet. The bet is paying off. Brendan shared that they’re already seeing roughly a 50/50 split between users accessing agentic capabilities through Qlik’s own interface and those coming in through MCP.

“Am I out of a job?”

Brendan closed our conversation with a story many of us will recognize. After demoing the ability to build an analytics application through Claude in 30 seconds at Qlik Connect, a customer approached him. This person had built his entire career writing code to create analytics applications across multiple platforms. His question was simple. “Am I out of a job?”

Brendan’s answer was no, but with an important caveat. The job will evolve. His advice to data professionals was to lean into what they already know better than anyone else: the data itself. Become the data product owner. Be the trusted guide as organizations navigate the agentic experience. The people who understand the data well enough to know its quirks and business context will be indispensable as agents take on more routine work.

This tracks with what Brendan’s team has seen internally. Qlik has developers who were already performing well, and AI tools have turned them into 10x contributors. The acceleration is happening at the top end, where strong performers are getting faster and producing better work. A preliminary MIT Media Lab study found that heavy reliance on AI assistants can lead to what researchers called “cognitive debt,” where users outsource critical thinking and lose the ability to recall and synthesize what they’ve produced.[7] Brendan acknowledged this risk directly. He sees his own daughters, 19 and 24, defaulting to LLMs for answers, and he worries about critical thought eroding over time.

“Embrace these new technologies. It’s scary. But your job will evolve. Become that data product owner, become an expert in that data, and be that trusted guide as everybody’s going down the agentic experience.”

Brendan Grady, EVP and GM of Analytics & AI, Qlik

The real opportunity for data professionals is to become the people who make sure agents are working with the right information in the right context. That’s a role no LLM can fill on its own. If you’re not sure where to start, audit the data your team’s AI tools depend on. If you can’t quantify how trustworthy that data is, that’s the first problem to solve.


Listen to the full conversation with Brendan Grady on the Data Faces Podcast.

Based on insights from Brendan Grady, EVP and GM of Analytics & AI at Qlik, featured on the Data Faces Podcast.


Frequently asked questions

What is consequence management in the context of data quality?

Consequence management is the idea that data quality was never prioritized because the consequences of bad data were manageable. In a manual world, a wrong number in a spreadsheet could be corrected before it caused real damage. With AI agents making autonomous decisions across multiple steps, errors compound before anyone detects them. Consequence management explains why the stakes around data quality have shifted from recoverable inconvenience to potential business-level damage.

Where does enterprise adoption of agentic AI stand in 2026?

According to Brendan Grady, EVP of Analytics and AI at Qlik, enterprise agentic adoption is in its earliest stages. Customers are experimenting with low-risk use cases in controlled environments, but production-grade agents making real decisions in real business processes are rare. Data quality is the primary blocker. Gartner projects that by 2027, 70% of organizations will adopt modern data quality solutions to support AI initiatives.

What is Qlik’s Trust Score for AI?

Qlik’s Trust Score for AI is a quantifiable measure of whether data is valid, up to date, and representative before it reaches an AI agent or a large language model. It scores data across dimensions including diversity, timeliness, and accuracy, giving decision-makers visibility into data reliability rather than requiring them to take data quality on faith. Qlik has been named a Leader in the Gartner Magic Quadrant for Augmented Data Quality Solutions for seven consecutive years.

What does “dashboards are dead” mean?

Brendan Grady’s declaration that “dashboards are dead” refers to dashboards as a destination, not the data or insights within them. The traditional model of going to a dashboard to draw your own conclusions is being replaced by AI-powered interfaces that proactively recommend actions. Qlik calls this shift decision intelligence. Grady described his own workflow changing from “give me information about my business” to “tell me about my business and what you think I should do.”

What is the Qlik MCP server?

The Qlik MCP server implements the open Model Context Protocol, allowing third-party AI assistants such as Anthropic Claude, Google Gemini, and ChatGPT to access Qlik’s analytical capabilities, with built-in governance and audit trails. Launched in February 2026, it reflects Qlik’s bet on interoperability over platform lock-in. Grady reported that roughly 50% of users now access Qlik’s agentic capabilities through MCP rather than Qlik’s own interface.

What should data professionals do to prepare for the agentic AI era?

Brendan Grady advises data professionals to lean into what they already know best: the data itself. His recommendation is to become data product owners who serve as trusted guides as organizations adopt agentic AI. The people who understand data quality, business context, and organizational nuance will be indispensable because these capabilities are not ones AI agents can replicate on their own.

Podcast highlights

- [0:00] Introduction and welcome at Qlik Connect 2026

- [1:14] Brendan’s first job: Sound of Music tour guide in Salzburg

- [2:04] Lessons learned from the early analytics era

- [3:32] Why data quality has never been fixed

- [4:46] Consequence management in the agentic era

- [6:08] Where enterprise agentic adoption actually stands

- [7:46] Future-proofing against LLM shifts

- [8:24] The analytics engine and unknown unknowns

- [10:29] Structured vs. unstructured data convergence

- [12:04] Hallucinations and the trust problem

- [15:30] Decision intelligence and “dashboards are dead”

- [18:05] Brain outsourcing and the MIT cognitive debt study

- [21:57] MCP server and open standards

- [23:54] Key themes for Qlik in 2026: trust, context, flexibility

- [26:12] Advice for data professionals

- [28:15] Does AI expand the aperture for who can participate in analytics?

About David Sweenor

David Sweenor is the founder and host of the Data Faces Podcast, where he talks with the people who are making data, analytics, AI, and marketing work in the real world. He is also the founder of TinyTechGuides and a recognized top 25 analytics thought leader and international speaker who specializes in practical business applications of artificial intelligence and advanced analytics.

With over 25 years of hands-on experience implementing AI and analytics solutions, David has supported organizations including Alation, Alteryx, TIBCO, SAS, IBM, Dell, and Quest. His work spans marketing leadership, analytics implementation, and specialized expertise in AI, machine learning, data science, IoT, and business intelligence. David holds several patents and consistently delivers insights that bridge technical capabilities with business value.

Books

- Artificial Intelligence: An Executive Guide to Make AI Work for Your Business

- Generative AI Business Applications: An Executive Guide with Real-Life Examples and Case Studies

- The Generative AI Practitioner’s Guide: How to Apply LLM Patterns for Enterprise Applications

- The CIO’s Guide to Adopting Generative AI: Five Keys to Success

- Modern B2B Marketing: A Practitioner’s Guide to Marketing Excellence

- The PMM’s Prompt Playbook: Mastering Generative AI for B2B Marketing Success

Follow David on Twitter @DavidSweenor and connect with him on LinkedIn.


[1] Sweenor, David. “Beyond the AI Hype: What 20% of Companies Get Right.” TinyTechGuides, February 11, 2025. https://tinytechguides.com/blog/beyond-the-ai-hype-what-20-of-companies-get-right/

[2] Gartner. “Lack of AI-Ready Data Puts AI Projects at Risk.” Gartner Newsroom, February 26, 2025. https://www.gartner.com/en/newsroom/press-releases/2025-02-26-lack-of-ai-ready-data-puts-ai-projects-at-risk

[3] Sweenor, David. “From ‘AI-Ready’ to AI Reality: Why Actionable Data Strategies Beat Endless Planning.” TinyTechGuides, June 3, 2025. https://tinytechguides.com/blog/from-ai-ready-to-ai-reality-shane-murray-on-data-trust-and-why-action-beats-planning/

[4] Qlik. “Qlik Releases Trust Score for AI in Qlik Talend Cloud.” Qlik Press Release. https://www.qlik.com/us/news/company/press-room/press-releases/qlik-releases-trust-score-for-ai-in-qlik-talend-cloud

[5] Qlik. “Qlik Named a Leader in the 2026 Gartner Magic Quadrant for Augmented Data Quality Solutions.” Qlik Press Release, 2026. https://www.qlik.com/us/news/company/press-room/press-releases/qlik-named-a-leader-in-the-2026-gartner-magic-quadrant-for-augmented-data-quality-solutions

[6] Qlik. “Qlik Brings Agentic Analytics to General Availability and Launches MCP Server for Third-Party Assistants.” Qlik Press Release, February 10, 2026. https://www.qlik.com/us/news/company/press-room/press-releases/qlik-brings-agentic-analytics-to-general-availability-and-launches-mcp-server-for-third-party-assistants

[7] MIT Media Lab. “Your Brain on ChatGPT: Accumulation of Cognitive Debt When Using an AI Assistant for Essay Writing Task.” MIT Media Lab, 2025. https://www.media.mit.edu/publications/your-brain-on-chatgpt/
