What it really takes to build for the AI-native era


Rich Waldron
CEO
Discover what it really takes to succeed in the AI-native era—from governance and infrastructure to cross-functional teams. Learn why enterprise AI demands more than just models.
AI adoption is happening fast. Faster than most organizations are ready for.
Every app has its own model. Every department is experimenting. But few enterprises have a clear path from prototype to production. And fewer still have the infrastructure to make AI safe, scalable, and impactful across the business.
In Gartner’s opening keynote at the Application Innovation & Business Solutions Summit, analysts called for a reset. To succeed in what they’re calling the “AI-native era,” enterprises need stronger foundations spanning technology, talent, and culture.
Here’s what that means in practice and what we’re seeing in the field.
Fragmented AI is a governance problem, not just a tech problem
Most CIOs we speak to are already fielding demands from every corner of the business to deploy agents, integrate LLMs, and deliver AI-driven outcomes.
But with that demand comes a familiar pattern: app-specific agents, siloed decision-making, and inconsistent rules. IT ends up refereeing contradictions between disconnected models. An HR tool flags an issue while the compliance system ignores it. One model approves a vendor contract; another blocks it.
This is both frustrating and risky. Without shared governance and orchestration, enterprises face:
Conflicting AI-driven decisions
Security gaps from unsanctioned models
Compliance failures from ungoverned data
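To make the first risk concrete, here is a minimal, purely illustrative sketch of how two siloed agents reach opposite verdicts on the same vendor contract, and how a shared governance layer turns those contradictions into one explicit, auditable decision. All names, rules, and thresholds are invented for illustration.

```python
# Hypothetical: two siloed "agents" each apply only their own rules
# to the same vendor contract and reach contradictory decisions.
contract = {"vendor": "Acme", "value": 250_000, "data_access": "PII"}

def procurement_agent(c):
    # Procurement only checks spend thresholds.
    return "approve" if c["value"] < 500_000 else "block"

def compliance_agent(c):
    # Compliance only checks data access.
    return "block" if c["data_access"] == "PII" else "approve"

# Siloed outcome: the business gets two conflicting answers.
decisions = {
    "procurement": procurement_agent(contract),
    "compliance": compliance_agent(contract),
}
print(decisions)  # {'procurement': 'approve', 'compliance': 'block'}

# A shared governance layer applies one explicit resolution policy
# (any "block" wins) and records every check's result for audit.
def governed_decision(c, checks):
    results = {name: check(c) for name, check in checks.items()}
    verdict = "block" if "block" in results.values() else "approve"
    return verdict, results

verdict, results = governed_decision(
    contract,
    {"procurement": procurement_agent, "compliance": compliance_agent},
)
print(verdict)  # block
```

The point isn’t the toy rules; it’s that the conflict only becomes visible and resolvable once both checks run through one shared layer.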
As I mentioned in a recent blog, AI agent sprawl is becoming a liability. Enterprises need a strategy for integrating and governing AI, not just enabling it.
Fusion teams, not silos, will drive successful AI adoption
In a recent Gartner report, keynote speaker Annie Hodgkins and other analysts define three types of business-led “fusion teams” that are delivering real results with AI: situational, flex, and innovation. What they share is a structure that blends domain knowledge with technical execution.
This shift away from rigid, function-specific teams is already happening. Over 90% of business technologists now work in some form of fusion team. But these teams can’t succeed in isolation. They need access to shared infrastructure, integration support, and consistent guardrails across applications and data.
The best AI strategies assign ownership and create systems that support cross-functional execution.
Platform engineering is the foundation of the AI-native enterprise
Building AI into your business isn’t just about choosing a model. It’s about making those models safe to use, discoverable by developers, and integrated with the rest of your systems.
That’s where platform engineering comes in.
According to Gartner, internal developer platforms (IDPs) are key to simplifying AI adoption and enabling developers to safely access AI capabilities across the business. Done right, an IDP becomes the “paved road” for AI delivery—complete with templates, observability, governance, and toolchains.
But most integration platforms weren’t built with this in mind. Traditional iPaaS can’t handle the unstructured data, LLM orchestration, or AI agent workflows modern enterprises need.
Why infrastructure matters more than the model
Most enterprises aren’t struggling to find use cases for AI. They’re struggling to implement them consistently. As pressure mounts to move faster, many leaders are considering how to build AI capabilities into the foundation of their architecture.
But here’s where it gets tricky: the platforms most enterprises already own were built for connecting APIs, not for orchestrating dynamic agents, handling unstructured data, or enforcing AI-specific governance policies.
That’s why we’re seeing a shift toward more composable platforms. The kind that support:
Low-code access to data and services across the stack
Native observability into how AI systems act, not just respond
The ability to swap models and tools as strategies evolve
Centralized governance and versioning for tools, policies, and knowledge
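The capabilities above can be sketched in miniature: models sit behind one common interface, every call passes through a centrally versioned policy, each decision is logged for observability, and swapping models is a one-line change. This is an assumption-laden illustration, not any vendor’s API; the class names, policy fields, and log format are all hypothetical.

```python
# Hypothetical composable layer: swappable models, centralized versioned
# policy, and an audit log of how the system acts, not just what it returns.
from typing import Protocol

class Model(Protocol):
    def complete(self, prompt: str) -> str: ...

class ModelA:
    def complete(self, prompt: str) -> str:
        return f"[model-a] {prompt}"

class ModelB:
    def complete(self, prompt: str) -> str:
        return f"[model-b] {prompt}"

# Centralized, versioned governance policy applied to every call.
POLICY = {"version": "2024-06", "blocked_terms": ["ssn"]}

class Gateway:
    def __init__(self, model: Model, policy: dict):
        self.model, self.policy = model, policy
        self.audit_log = []  # observability: record each decision

    def run(self, prompt: str) -> str:
        if any(t in prompt.lower() for t in self.policy["blocked_terms"]):
            self.audit_log.append(("blocked", self.policy["version"], prompt))
            return "request blocked by policy"
        self.audit_log.append(("allowed", self.policy["version"], prompt))
        return self.model.complete(prompt)

gw = Gateway(ModelA(), POLICY)
print(gw.run("summarize this contract"))  # [model-a] summarize this contract
gw.model = ModelB()  # swap models as strategy evolves; policy stays put
print(gw.run("look up an SSN"))           # request blocked by policy
```

The design choice worth noting: governance and logging live in the gateway, not in any one model, so swapping `ModelA` for `ModelB` changes nothing about what is allowed or recorded.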
Whether you're embedding AI into internal workflows or customer-facing experiences, the right foundation will determine your ability to scale. Choosing the right LLM is important, but even more important is the infrastructure you build around it.
The takeaway
The foundations for an AI-native enterprise aren’t just about tools. They’re about orchestration. Composability. Governance. And support for the teams driving innovation forward.
The companies that build these foundations now will scale faster, stay safer, and get more value from AI. The rest? They’ll spend the next few years playing referee.