Agentic AI: Why standards and a fast backbone matter
Agentic AI is moving from theory to reality. Instead of relying on a single large model to answer questions, we now talk about networks of agents. Each agent has a specific role and skillset. Some handle data extraction, others reasoning, others interaction with external systems. Together, they form a powerful ecosystem that can solve complex problems faster and with better results than one model alone.
Let’s start with an example: “The Intelligent Supply Chain”. Consider a global retailer aiming to optimize its supply chain: Forecasting agents predict demand from sales data. Logistics agents track shipments, weather, and disruptions. Finance agents calculate costs and adjust budgets. Compliance agents ensure regulations are met in each market. A shipment delay in Asia must instantly inform demand forecasts in Europe and compliance checks in the US. If this fails, inefficiencies ripple across the chain.
For CIOs and CTOs, this shift raises a clear question: how do you prepare your IT landscape to support Agentic AI at scale? The answer is not just about algorithms; it is about standards, governance, and infrastructure.
Why Standards Like A2A and MCP Matter
The first challenge in Agentic AI is communication. Agents must be able to talk to each other and to Large Language Models (LLMs). Without a standard way of doing so, you create chaos.
Two emerging standards are key:
- A2A (Agent-to-Agent Protocol) defines how agents communicate, exchange data, and request tasks from one another. It prevents the “Tower of Babel” scenario.
- MCP (Model Context Protocol) ensures that agents and LLMs share the same context. It governs what data is passed, how memory works, and how context boundaries are managed.
For IT leaders, the analogy is clear: A2A and MCP do for intelligent agents what APIs and governance did for integration. Without them, you build silos. With them, you enable interoperability, reusability, and long-term scalability.
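To make the idea concrete, here is a minimal sketch of what an agent-to-agent task request could look like. The envelope fields, agent names, and helper function are illustrative assumptions for this article, not the actual A2A schema:

```python
import json
import uuid

def make_task_request(sender: str, receiver: str, task: str, params: dict) -> str:
    """Build a minimal, illustrative agent-to-agent task request envelope.

    Field names are simplified for the example; a real protocol like A2A
    defines its own message structure, discovery, and lifecycle semantics.
    """
    return json.dumps({
        "id": str(uuid.uuid4()),   # unique request id for correlation
        "from": sender,            # requesting agent
        "to": receiver,            # agent expected to perform the task
        "task": task,              # named capability being requested
        "params": params,          # task-specific input data
    })

# A logistics agent asking a forecasting agent to react to a shipment delay
request = make_task_request(
    "logistics-agent", "forecast-agent",
    "update_demand_forecast", {"region": "europe", "cause": "shipment_delay"},
)
```

The point is not the exact fields but the discipline: when every agent speaks the same envelope format, any agent can request work from any other without bespoke integrations.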
Standards alone won’t deliver business outcomes, though. Agentic AI is extremely data-hungry. Each agent may consume and generate large amounts of structured and unstructured data: IoT sensor readings, financial transactions, customer interactions, or graph queries.
When dozens or hundreds of agents collaborate, data flows explode. Latency becomes the bottleneck. One slow agent can break an entire decision chain. That is why CIOs and CTOs must think beyond AI algorithms and focus on the backbone: high-throughput, low-latency, and resilient data transport.
Event Mesh as the Backbone
This is where an event mesh architecture makes the difference. Instead of relying on point-to-point connections or central hubs like an Enterprise Service Bus, an event mesh distributes data across a global fabric. Agents and large language models can publish and subscribe to events, ensuring data flows dynamically to the right place at the right time. This creates a foundation that is not only more agile but also future proof for scaling AI-driven ecosystems.
- Scalability is one of the biggest advantages. With an event mesh, you can support thousands of agents without the need for heavy redesigns or complex reconfigurations. New workloads can be added without disrupting existing services, allowing innovation to move faster. This makes it far easier to handle growth as business requirements evolve.
- Low latency is another key benefit. Because events are delivered in real time, decision-making processes can be automated and optimized on the spot. This is critical for scenarios where milliseconds matter, such as fraud detection, IoT monitoring, or supply chain optimization. Instead of waiting for batch processing or manual updates, insights become immediately actionable.
- Resilience is built into the fabric of an event mesh. There is no single point of failure, and if one node or agent goes down, the rest of the system continues to function. Failures are isolated and contained, reducing the risk of large scale outages. This reliability is crucial when critical services depend on uninterrupted data flow.
- Flexibility is equally important. Agents can be added or removed without disrupting ongoing workflows, making the system highly adaptable to change. This enables businesses to experiment, test new use cases, and respond quickly to shifting demands. Whether you want to integrate a new AI model, retire a legacy system, or expand into new regions, the architecture supports it without friction.
This evolution mirrors the journey of modern Integration Platforms. They moved from hardwired connections to more adaptive, event-driven designs. Event mesh technology takes this further, offering a level of agility and robustness that matches the needs of today’s digital ecosystems. It is not just an improvement on old methods, but a necessary step toward truly intelligent and connected systems.
Which Event Mesh wins for Agentic AI?
CIOs and CTOs often hear Solace and Apache Kafka mentioned as event-driven backbones. Both have strong reputations, but for Agentic AI in enterprise production, Solace delivers clear benefits:
- Latency: Kafka is optimized for high-throughput, batched log processing; Solace delivers events individually with low per-message latency, which is essential for real-time agent collaboration.
- Dynamic routing: Kafka topics are flat and relatively static; Solace supports hierarchical topics and wildcard subscriptions, giving more flexibility as agents are added or removed.
- Global event mesh: Kafka struggles with global scaling without complex add-ons. Solace is designed for hybrid and multi-cloud environments.
- Operational efficiency: Kafka often requires large engineering teams. Solace has built-in event governance, observability, and security.
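To illustrate what hierarchical topics buy you, here is a simplified matcher in the spirit of Solace-style wildcards, assuming `*` matches exactly one topic level and a trailing `>` matches one or more remaining levels. This is a sketch of the concept, not the vendor’s implementation:

```python
def topic_matches(subscription: str, topic: str) -> bool:
    """Simplified hierarchical topic matching.

    Assumed conventions (a subset of Solace-style wildcards):
      '*' matches exactly one level; a trailing '>' matches one or more levels.
    """
    sub_levels = subscription.split("/")
    top_levels = topic.split("/")
    for i, level in enumerate(sub_levels):
        if level == ">":
            # '>' is only valid as the final element and needs at least one more level
            return i == len(sub_levels) - 1 and len(top_levels) > i
        if i >= len(top_levels):
            return False
        if level != "*" and level != top_levels[i]:
            return False
    return len(sub_levels) == len(top_levels)

# One subscription covers a whole family of shipment events
print(topic_matches("supply/shipments/>", "supply/shipments/asia/delay"))  # True
print(topic_matches("supply/*/asia", "supply/shipments/asia"))             # True
print(topic_matches("supply/*/asia", "supply/shipments/eu"))               # False
```

With wildcards like these, a new compliance agent can subscribe to `supply/>` on day one and automatically receive events from agents that do not exist yet, instead of someone reconfiguring topics for every change.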
In Agentic AI, where speed, flexibility, and resilience matter more than raw throughput, Solace consistently wins.
So, back to my example of “The Intelligent Supply Chain”: with Solace as the event mesh, every agent receives the right data in real time. Forecasts are updated, shipments rerouted, budgets reallocated, and compliance assured, without delay or failure. Try building this on Kafka, and you will face latency issues, rigid routing, and heavy operational overhead.
Conclusion
Agentic AI is not futuristic; it is arriving fast. Workflows in customer service, supply chain, compliance, and fraud detection will soon depend on networks of agents that must communicate and act in real time. So, my advice: adopt standards, invest in an event mesh backbone, and prepare for (massive) scale. Those who lay this foundation today will be ready to capture the strategic advantage of Agentic AI tomorrow.
Interested in discussing this further? I’d be happy to connect.
