Trust3 AI

The Hardest Part of AI Isn’t Building It. It’s Governing It.


by Ibby Rahmani

Last updated on April 6, 2026


Opening the “Governance Black Box” with Balaji Ganesan and Sanjeev Mohan at Gartner 2026.

For years, enterprises have invested billions in modernizing their data infrastructure. Data lakes, lakehouses, streaming pipelines, and real-time analytics platforms have transformed how organizations collect and analyze information.

But as artificial intelligence moves rapidly from experimentation to production, a new realization is emerging across the enterprise:

The hardest challenge is no longer building AI. It’s governing it.

That theme surfaced repeatedly at the Gartner Data & Analytics Summit 2026. Across sessions, hallway conversations, and executive roundtables, leaders kept returning to the same question: how do organizations govern AI systems that are becoming increasingly autonomous and complex?

During the summit, I had the opportunity to sit down with Balaji Ganesan and Sanjeev Mohan to unpack what enterprises are experiencing as AI adoption accelerates.

What emerged from the conversation was clear: AI governance is not just an extension of data governance. It represents an entirely new frontier.

From Data Governance to AI Governance

Over the past two decades, organizations have steadily matured their data governance programs. Enterprises created policies around data quality, privacy, compliance, and security. Dedicated teams were established to manage these governance frameworks, and a growing ecosystem of tools emerged to support them.

AI introduces a new set of dynamics.

Unlike traditional analytics systems, AI models generate outputs dynamically, respond to unpredictable inputs, and increasingly act autonomously through agents and copilots. This dramatically expands the governance surface area.

Challenges in AI Governance

Sanjeev explained that the challenge starts with the fact that AI governance is still an evolving discipline. Data governance, by comparison, has had decades to develop well-understood frameworks and best practices.

“The first issue with AI governance is that it’s not like data governance, which by now is pretty well defined,” said Sanjeev Mohan. “AI governance is really a sea of topics.”

That phrase captures the heart of the problem. AI governance is not a single domain; it’s an intersection of multiple disciplines, including data management, model oversight, security, ethics, and compliance. As organizations scale AI across business operations, the need to coordinate all of these elements is becoming increasingly urgent.

A Governance Landscape That Keeps Expanding

One of the most striking insights from the discussion was how quickly the scope of governance expands once AI systems are deployed in real-world environments.

Many organizations initially approach AI governance from a technical perspective. They focus on the infrastructure layer: managing models, orchestrating governance frameworks, and maintaining the service endpoints that deliver AI capabilities.

But as Sanjeev noted, infrastructure governance represents only a small portion of the overall challenge. Once AI systems begin interacting with users and enterprise data, entirely new AI risks emerge. Prompt injection attacks, for example, have become a major concern. Malicious prompts can manipulate models into producing unintended outputs or exposing sensitive information.
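To make the prompt-injection risk concrete, here is a minimal sketch of a pattern-based input screen. The patterns and the `screen_prompt` helper are illustrative assumptions, not part of any product discussed here; real deployments layer heuristics like this with model-based classifiers and output monitoring.

```python
import re

# Hypothetical, illustrative patterns only. Production systems use far
# richer detection (classifiers, context checks, output filtering).
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard your (system )?prompt",
    r"reveal (your|the) (system prompt|hidden instructions)",
]

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt matches a known injection pattern."""
    lowered = prompt.lower()
    return any(re.search(pattern, lowered) for pattern in INJECTION_PATTERNS)
```

A screen like this would run before the prompt ever reaches the model, flagging requests such as "ignore previous instructions" for review rather than passing them through.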

AI Governance: Risks Involved

To address and manage these AI risks, organizations are increasingly adopting practices such as "red teaming": deliberately testing AI systems to uncover vulnerabilities before they can be exploited.

Another critical governance dimension involves model outputs. Enterprises must ensure that AI responses are reliable and do not produce hallucinations, misinformation, or proprietary data leakage.

“What comes out of the model is critical,” Sanjeev explained. “You want to make sure it’s reliable and that it’s not exposing proprietary or harmful information.”

These concerns are further complicated by issues such as bias detection, regulatory compliance, and national data sovereignty requirements. Each layer adds additional complexity to AI governance frameworks that were originally designed for static datasets and deterministic systems.

The Unstructured Data Challenge

One of the most revealing moments in the conversation came when the topic shifted to data quality. Traditional data governance frameworks were built around structured data tables with defined schemas, fields, and attributes. Organizations know how to validate these datasets using well-established rules for completeness, timeliness, and duplication.

But AI systems are increasingly powered by unstructured information. Documents, PDFs, emails, transcripts, and images are becoming core inputs to AI models. These forms of content do not fit neatly into the structured validation frameworks that enterprises have relied on for decades.

Sanjeev highlighted how dramatically this changes the governance problem.

“When it’s structured data, we know what to check – timeliness, incomplete records, duplicates,” he said. “But how in the world do you do data quality on a PDF document?”

The question may sound simple, but it exposes one of the biggest gaps in current governance strategies. Enterprises must now rethink how they measure quality and trustworthiness for content that lacks a defined structure. As AI adoption expands, solving this challenge will become increasingly important.
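One way to make "data quality on a PDF" tractable is to score the extracted text against simple heuristics. The sketch below is an assumption-laden illustration (the thresholds and signal names are invented for this example, not established standards), but it shows how quality checks familiar from structured data, such as completeness and duplication, can be approximated for unstructured content.

```python
import hashlib

def document_quality_report(text: str, seen_hashes: set) -> dict:
    """Heuristic quality signals for text extracted from a document.

    Thresholds here are illustrative assumptions, not standards:
    - too_short: under 200 characters often signals a failed extraction
    - garbled: a high share of non-printable characters suggests OCR noise
    - duplicate: a content hash catches exact re-ingestion of the same file
    """
    stripped = text.strip()
    digest = hashlib.sha256(stripped.encode("utf-8")).hexdigest()
    printable = sum(ch.isprintable() or ch.isspace() for ch in stripped)
    report = {
        "empty": len(stripped) == 0,
        "too_short": len(stripped) < 200,
        "garbled": bool(stripped) and printable / len(stripped) < 0.95,
        "duplicate": digest in seen_hashes,
    }
    seen_hashes.add(digest)
    return report
```

Heuristics like these do not replace human judgment about whether a document is trustworthy, but they give pipelines a first-pass filter analogous to the completeness and duplicate checks long used for tables.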

The Rise of Agent-Generated Data

Another major shift discussed during the conversation centers on how AI systems create data themselves.

Historically, enterprise data followed a predictable lifecycle. Operational systems generated data that was later transformed and stored in analytics environments where analysts and decision-makers could access it.

AI workloads are fundamentally altering this pattern.

Balaji explained that modern AI environments introduce agents that both consume and generate data as they interact with users and applications. These interactions create new artifacts that did not previously exist in enterprise data ecosystems.

“In AI workloads, agents not only consume data, but they also generate their own data,” said Balaji Ganesan.

This agent-generated data may include cached responses, behavioral signals, personalization context, and reasoning traces. While these artifacts help AI systems improve performance and deliver more contextual results, they also introduce new governance considerations.

Organizations must now determine how long this data should be stored, who has access to it, and how it should be monitored for compliance and risk. In many ways, this represents a new category of enterprise data – one that did not exist before the rise of AI agents.

The Organizational Question: Who Owns AI Governance?

Beyond the technical challenges, AI governance introduces a significant organizational dilemma. For the past twenty years, enterprises have gradually built data governance teams responsible for policy management and compliance oversight. These teams developed processes and tools to ensure that data assets were properly managed across the organization.

AI governance, however, cuts across many more disciplines.

It touches infrastructure security, data management, legal compliance, risk management, and ethical oversight. As a result, it is not immediately clear which group within the enterprise should take primary ownership.

Balaji raised this question directly during the conversation: will organizations eventually create dedicated AI governance leaders, or will responsibility remain distributed across multiple teams?

Sanjeev suggested that the industry is still searching for the right answer.

"The scope is so wide that there's no consensus yet on who should own it," Sanjeev said. "In many ways, we don't know what we don't know yet."

In the near term, governance responsibilities will likely remain shared across several departments, each addressing different aspects of the challenge. Over time, however, enterprises may begin establishing dedicated roles focused specifically on AI governance.

Conclusion: A New Era of Enterprise Governance

If this conversation at the summit revealed anything, it is that AI governance is quickly moving from theory to urgency. Organizations are deploying AI systems at an unprecedented pace. Agents, copilots, and generative applications are becoming embedded in everything from customer service workflows to internal decision-making processes. These systems are consuming enterprise data, generating new data, and influencing business outcomes in ways that traditional analytics systems never did.

“AI governance is becoming one of the most important conversations happening in the data world right now,” said Balaji Ganesan.

To keep pace with this transformation, enterprises must rethink governance frameworks that were originally designed for static datasets and deterministic analytics pipelines. The next generation of governance will need to address not only data, but also models, prompts, agents, outputs, and the growing ecosystem of AI-generated information. In other words, the future of governance will be defined by how well organizations can unify data governance and AI governance into a single operational framework.

Finally, judging by the conversations at this year's Gartner Data & Analytics Summit, that journey is only just beginning.

Watch the video.

Explore how Trust3 AI helps organizations operationalize AI governance and build trusted data foundations. If you’re ready to move beyond experimentation and into production-scale AI, you can also schedule a demo to see how Trust3 supports secure, enterprise-ready AI deployments.