The most dangerous AI agent is the one that’s still running

Dataiku’s Conor Jensen on agent management, vibe coding for data, and getting AI from pilot to production

Listen now on YouTube

The Data Faces Podcast – On Location with Conor Jensen, Global Field CDO, Dataiku

I spend most of my time consulting with organizations that are trying to figure out what to do with AI, and teaching is a big part of that work. I’ve also conducted more than 35 interviews on the Data Faces Podcast with data leaders, practitioners, and technology executives. The question I hear in every engagement and nearly every episode is the same: how do I know the output is right? When a chatbot gives a questionable answer, someone catches it and moves on. An autonomous agent, on the other hand, might already be three decisions downstream before anyone notices the answer was wrong.

At the Gartner Data & Analytics Summit in Orlando, I sat down with Conor Jensen for an on-location episode of the Data Faces Podcast. Conor is the Global Field CDO at Dataiku, a data science and machine learning platform used by enterprise organizations to build, deploy, and manage AI projects. It’s a role shaped by an unusual path: he first bought Dataiku as a customer about ten years ago, spent seven years on the other side of the table, and now helps organizations avoid the mistakes he has already made himself. He’d just come off Dataiku’s biggest product launch in the company’s 13-year history, and one observation from our conversation captured exactly what I’ve been hearing from clients.

“A far more dangerous thing than an agent that breaks is an agent that’s still functioning and giving the wrong answers.”
– Conor Jensen, Global Field CDO, Dataiku

According to Gartner, only 6% of organizations have AI agents in production today, while 53% are still in exploration mode.[1] The organizations racing to build agents have largely skipped the question of whether the ones they already have are performing.

About Conor Jensen

- Conor Jensen is the Global Field CDO at Dataiku. He purchased Dataiku as a customer about ten years ago, joined the company seven years later, and now helps organizations develop AI strategy and operational plans to get the most out of the platform. Before Dataiku, he worked as a data scientist and analytics leader.

- Key topics discussed: Dataiku CoBuild and vibe coding for data pipelines, Reasoning Systems for multi-step autonomous decisions, the Agent Management Platform for cross-platform observability, getting AI from pilot to production, and why perfect data is never coming


Everyone’s building agents, nobody’s solved production

I asked Conor why so many AI projects stall between prototype and production. He didn’t point to a single bottleneck. He described a pile of them that keeps growing.

“We haven’t solved any of that yet as an industry. We just keep putting more in the backpack.”
– Conor Jensen, Global Field CDO, Dataiku

MLOps was supposed to get machine learning models into production. Then came LLMOps for large language models. Now the industry is talking about AgentOps. Each layer adds new complexity without resolving the one that came before it. Conor sees three barriers that keep organizations stuck. Deployment architecture is one, where something that works on a laptop or in a dev environment falls apart on the way to production. Organizational dynamics are another, including governance, trust, and change management, which he considers harder than any technical challenge. And then there’s data readiness, where teams wait for perfect data that will never arrive.

Gartner’s research reinforces how high the stakes are. Rita Sallam estimates that 70% of agentic AI use cases will fail to deliver expected value due to underinvestment in necessary foundations.[2] Data availability and quality remain the number one barrier to AI implementation, cited by 30% of data management leaders.[3] Gartner analyst Sarah Turkaly reinforced the point at the summit: “Data governance will be the single point of failure for organizations’ AI ambitions.”[4]

Dataiku’s biggest launch targets every layer of the problem

The opening keynote from Adam Ronthal and Georgia O’Callaghan set the tone for the summit by framing AI value around three returns: return on intelligence, return on integration, and return on individuals.[5] Dataiku positioned its announcement around that same framework, branding the launch as “The Platform for AI Success.” Conor walked me through three new products that Dataiku announced at the summit, each one targeting a different layer of the production problem.

Dataiku CoBuild brings vibe coding into the data platform, but the comparison to building a web app breaks down quickly. With a web app, you click a button and the page loads or it doesn’t. With a data pipeline, you get summary statistics and a model, but verifying the answer requires a level of inspection that 2,000 lines of generated Python won’t give you. CoBuild takes that generated code and renders it as visual workflows you can step through, edit, and validate. Conor, a data scientist himself, was candid about why this matters.

“Out of 2,000 lines of Python and a machine learning project, there’s probably like 40 that are what’s really, really important. The rest of it is, yeah, okay, did you pull the right data?”
– Conor Jensen, Global Field CDO, Dataiku

CoBuild abstracts the boilerplate so you can focus on the 40 lines that determine whether the output is trustworthy. It launches in June 2026.

Reasoning Systems tackle a different gap. Conor used the example of a supply chain analyst who today pulls data from five different systems, consults with other teams, and makes a judgment call. Reasoning Systems layer process flows and context on top of data sources, then give an agent the ability to walk through the entire sequence. The key difference from RPA is that not every step is deterministic. Some require the agent to self-correct or stop entirely. Dataiku is building these for targeted use cases in specific industries rather than trying to solve everything at once.
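The contrast with RPA can be sketched in a few lines of Python. This is a hypothetical pattern, not Dataiku’s actual API: each step reports a confidence signal, and the flow retries a shaky step or halts entirely instead of blindly executing the next one, which is what makes it non-deterministic in the sense Conor described.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class StepResult:
    value: object
    confidence: float  # the step's self-assessment, 0.0 to 1.0

def run_reasoning_flow(steps: list[Callable[[object], StepResult]],
                       initial: object,
                       threshold: float = 0.8,
                       max_retries: int = 2) -> object:
    """Walk a multi-step flow where each step may self-correct or stop.

    Unlike RPA, a step below the confidence threshold is retried;
    repeated failure halts the whole sequence for human review.
    """
    state = initial
    for step in steps:
        for _ in range(max_retries + 1):
            result = step(state)
            if result.confidence >= threshold:
                state = result.value  # accept the step and move on
                break
        else:  # never cleared the threshold: stop, don't plow ahead
            raise RuntimeError(
                f"Step {step.__name__} stayed below confidence "
                f"{threshold}; halting for review.")
    return state

# Toy steps standing in for the supply chain analyst's sequence
def pull_inventory(_):
    return StepResult({"sku_42": 130}, confidence=0.95)

def forecast_demand(inv):
    return StepResult({**inv, "forecast": 160}, confidence=0.9)

print(run_reasoning_flow([pull_inventory, forecast_demand], None))
```

The point of the sketch is the `else` branch: a deterministic RPA bot has no equivalent of “I’m not sure, stop here,” which is exactly the behavior a supply chain judgment call requires.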

The product Conor said he’s personally most excited about is the Agent Management Platform. Fifty-four percent of organizations are exploring or deploying goal-driven AI agents, according to Gartner.[6] The question most CIOs should be asking is straightforward: how many agents do I have in production across all of my systems? With agents being built and deployed on Databricks, Salesforce, and dozens of other platforms alongside Dataiku, that question is hard to answer today.

“How do I manage all of my agents across my infrastructure, wherever they’ve been deployed? How do I make sure I know that they’re performing, not just functioning, but performing?”
– Conor Jensen, Global Field CDO, Dataiku

Monitoring whether an agent is running is table stakes. You can do that with an API bus. The Agent Management Platform goes further by adding performance management, a semantic layer, and contextual understanding across every environment where agents are deployed. It evaluates whether agents are delivering the right business results across eight, ten, or twenty different systems. It goes GA in September 2026 and does not require being a Dataiku customer.

Conor had practical advice for organizations that feel stuck waiting for perfect data or an industry standard to emerge.

“News flash. There’s no such thing as perfect data, never will be. You have to just get moving.”
– Conor Jensen, Global Field CDO, Dataiku

Only 12% of D&A leaders say they are fully prepared to carry out their mandate, according to Gartner’s 2026 CDAO survey.[7] Conor’s point is that treating full readiness as a prerequisite for action is its own form of failure.

Production is the starting line

Conor Jensen has seen the Dataiku platform from both sides over the past decade, and that practitioner-turned-vendor perspective came through in every answer he gave. The industry has spent years talking about getting AI to production. The conversation at Gartner this year made clear that production is only the starting line. The harder work is knowing what happens after you deploy, and most organizations have no way to answer that question across their agent portfolio today.

The next time someone on your team proposes building a new agent, ask a different question first. Do you know how the ones you already have are performing?

Listen to the full conversation with Conor Jensen on the Data Faces Podcast.

Based on insights from Conor Jensen, Global Field CDO at Dataiku, featured on the Data Faces Podcast.


Frequently asked questions

What is Dataiku’s Agent Management Platform? Dataiku’s Agent Management Platform provides cross-platform observability and performance management for AI agents deployed across any system, including those built outside of Dataiku. It goes beyond uptime monitoring by evaluating whether agents are delivering correct business results. The platform adds a semantic layer and contextual understanding so organizations can assess agent performance across eight, ten, or twenty different environments from a single view. It is scheduled for general availability in September 2026.

How is vibe coding a data pipeline different from vibe coding a web app? With a web app, you can visually confirm whether it works by clicking a button and seeing the result. With a data pipeline, AI generates thousands of lines of code that produce summary statistics and a model, but there is no simple way to verify the answer is correct. Dataiku CoBuild addresses this by rendering generated code as visual workflows that users can step through, edit, and validate rather than reading through 2,000 lines of Python.

What are Dataiku Reasoning Systems? Reasoning Systems layer process flows and business context on top of data sources to enable multi-step autonomous decisions. Unlike RPA, where every step is deterministic, Reasoning Systems allow agents to self-correct or stop when results fall outside expected parameters. Dataiku is building these for targeted use cases in specific industries, starting with manufacturing operations, with supply chain and financial risk scheduled for later in 2026.

Why do most agentic AI use cases fail? Gartner estimates that 70% of agentic AI use cases will fail to deliver expected value due to underinvestment in necessary foundations. The top barrier to AI implementation is data availability and quality, cited by 30% of data management leaders. Organizations also struggle with deployment architecture, governance, and change management. Gartner analyst Sarah Turkaly warned that data governance will be the single point of failure for organizations’ AI ambitions.

How many organizations have AI agents in production? According to Gartner research from January 2025 surveying 3,412 respondents, only 6% of organizations have AI agents in production. Fifty-three percent are still in exploration mode, and 25% are piloting. Fifty-four percent of organizations are exploring or deploying goal-driven AI agents, but most cannot answer how many agents they have running across their infrastructure or whether those agents are delivering correct results.

Podcast highlights

[0:00] Introduction at the Gartner D&A Summit and Dataiku overview

[1:27] Three new product announcements: CoBuild, Reasoning Systems, Agent Management Platform

[2:49] Dataiku’s evolution in the age of Gen AI

[3:30] Why AI projects stay stuck in pilot purgatory

[5:30] Deployment architecture that works from dev to production

[7:00] CoBuild, vibe coding, and why data pipelines are different from web apps

[8:26] Why even data scientists need better coding practices

[9:32] Reasoning Systems and autonomous multi-step decisions

[11:01] Agent Management Platform and cross-platform observability

[13:00] Monitoring vs. performance management for agents

[15:00] Opening the gates with governance and guardrails

[17:00] GA timeline, availability, and closing

About David Sweenor

David Sweenor is an AI advisor, author, and the founder of TinyTechGuides. He spent the first half of his career as a practitioner at IBM, building data warehouses and running predictive models, and the second half in product marketing leadership at SAS, Dell, TIBCO, Alteryx, and Alation. He advises Fortune 500 companies on AI strategy, data governance, and go-to-market planning, and hosts the Data Faces Podcast, where he interviews the leaders, practitioners, and technologists shaping the future of data and AI.

Books

- Artificial Intelligence

- Generative AI Business Applications

- The Generative AI Practitioner’s Guide

- The CIO’s Guide to Adopting Generative AI

- Modern B2B Marketing

- The PMM’s Prompt Playbook

Follow David on Twitter @DavidSweenor and connect with him on LinkedIn.


[1] Chandrasekaran, Arun. “Navigating the AI Agent Landscape: A Strategic Guide for IT Leaders.” Gartner D&A Summit 2026, March 2026.

[2] Sallam, Rita. “How to Calculate the Value and Cost of AI Agents.” Gartner D&A Summit 2026, March 2026.

[3] Ramakrishnan, Ramke. “How Is Agentic AI Impacting and Disrupting Your Data Management Discipline?” Gartner D&A Summit 2026, March 2026.

[4] Turkaly, Sarah. “The Future of D&A Governance.” Gartner D&A Summit 2026, March 2026.

[5] Ronthal, Adam, and Georgia O’Callaghan. “Navigate AI on Your Data & Analytics Journey to Value.” Gartner D&A Summit 2026 Opening Keynote, March 9, 2026.

[6] Ramakrishnan, Ramke. “How Is Agentic AI Impacting and Disrupting Your Data Management Discipline?” Gartner D&A Summit 2026, March 2026.

[7] Gabbard, Michael. “Signature Series: State of D&A 2026.” Gartner D&A Summit 2026, March 2026.
