Coral Protocol on Building the Internet of Agents for a Collaborative AI Economy
As the age of siloed AI agents fades, a new paradigm is emerging: one where intelligent agents don’t just execute, but collaborate. Coral Protocol is pioneering this vision with infrastructure for decentralized agent communication, orchestration, and trust. We sit down with Roman Georgio and Caelum Forder, co-founders of Coral Protocol, to dive deep into the architecture powering the Internet of Agents, and why tomorrow’s AI economy will need more than just better models; it’ll need better cooperation.
Ishan Pandey: Hi Roman, hi Caelum, great to have you both here. Let’s start with your backgrounds. You’ve both worked at the edge of AGI research and commercial AI infrastructure. What led you to start Coral Protocol, and how did your past experiences shape this vision?
Roman Georgio: Hey, thanks for having us. We met while working at CAMEL-AI, an AI research lab studying the scaling laws of agents. We had been working on multi-agent projects throughout that time, and even before then; Coral really came to us out of necessity.
Caelum Forder: We actually started building Coral as a means to an end for another project we wanted to make: a kind of automated reporter that was meant to find trends or events in trading data and connect them with news articles and public commentary to create and share relevant narratives. We’d worked on a few applications with similar needs before, so we figured there was a gap there for us to actually make this our main thing.
Ishan Pandey: The term “Internet of Agents” is increasingly gaining traction. But in practical terms, what does it mean and what fundamental problems does Coral aim to solve in this context?
Roman Georgio: Great question. In short, Cisco defines it as: “A system where various AI agents – developed by different vendors or organizations – can communicate and collaborate seamlessly.” At first glance, this might sound underwhelming, but if you actually think about it, it’s powerful: any business or developer can apply their expertise to build the best agents for their domain. And if every company across the world does this, we enter a connected web of highly skilled digital workers.
The bottleneck right now is that there are thousands of agent frameworks, so all the really cool agents being built can’t easily be reused or collaborate with each other. Coral aims to unlock this blocker by building the infrastructure for agents to join the “Internet of Agents.” We make it possible for any agent—regardless of framework—to collaborate. We also provide a secure way for agent creators and application developers to handle payments, so people are actually incentivized to maintain and improve their agents.
Ishan Pandey: Coral’s graph-structured coordination and scoped memory system stand out as novel primitives. Can you explain how these technical design choices support scalable, secure multi-agent collaboration?
Caelum Forder: I’ve found the most useful way to think about agents is in terms of responsibility rather than task or capability: what can an agent be responsible for? It then becomes clear how and when to break agents down; an agent overwhelmed with responsibility will fail, whether that’s too much in its context window or simply too many responsibilities to attend to.
LLM-based agents are currently far more easily overwhelmed by responsibility than humans are (and hopefully this doesn’t change too soon). The graph approach then seems obvious: a strictly hierarchical approach imposes overwhelming responsibilities on the agents nearer the top, whereas having agents operate independently in a graph lets developers manage each agent’s responsibility, prevent overwhelm, and scale the system without limits. It also helps mitigate individual agent failures or misalignment, and reduces dependence on the larger models, which have recently been shown to engage in scheming and deception.
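The graph idea above can be sketched in a few lines. This is a purely illustrative toy, not Coral’s actual API: agents own narrow responsibility sets and hand work to peers along graph edges, so no single coordinator has to hold everything in scope. All names here are invented.

```python
# Toy sketch of responsibility-scoped agents in a flat collaboration
# graph, instead of one overloaded top-level coordinator.
# (Illustrative only; not Coral Protocol's actual interfaces.)

class Agent:
    def __init__(self, name, responsibilities):
        self.name = name
        self.responsibilities = set(responsibilities)  # what this agent can own
        self.peers = []  # edges in the collaboration graph

    def link(self, other):
        # Undirected edge: either agent may hand work to the other.
        self.peers.append(other)
        other.peers.append(self)

    def handle(self, task):
        # Accept only tasks within this agent's own responsibility scope.
        if task in self.responsibilities:
            return f"{self.name} handled {task}"
        # Otherwise delegate to the first peer responsible for the task.
        for peer in self.peers:
            if task in peer.responsibilities:
                return peer.handle(task)
        return None  # no agent in reach is responsible for this task

researcher = Agent("researcher", {"find-sources"})
writer = Agent("writer", {"draft-report"})
researcher.link(writer)

print(researcher.handle("draft-report"))  # → writer handled draft-report
```

Because each agent only accepts work inside its scope, adding capability means adding nodes and edges rather than widening any one agent’s context.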
Ishan Pandey: Let’s talk about MCP, the Model Context Protocol. What makes MCP a critical enabler of interoperability between agents? And how does it prevent the kind of vendor lock-in we’re seeing with closed AI frameworks?
Caelum Forder: Before MCP, the only practical way of defining tools was through model providers’ own SDKs, like OpenAI’s or Anthropic’s Python SDKs, or frameworks built on top of them. These are technically open source, but they’re mostly developed by the model providers themselves, who control the backend APIs they connect to. As provider-specific functionality like prompt caching becomes available, it becomes very impractical not to use one of these SDKs when building LLM applications. So if you wanted your tool to be widely used, you’d have to implement it separately for each library your users work with; that would be something like 25 separate implementations across a few different languages to capture 90%+ of users, and maintaining each of them would be a nightmare.
Thankfully, MCP came along and made it much more practical to build reusable software and services at the intersection of application and LLM. You don’t even need to consider programming languages anymore, since it is IO-boundaried. That’s very good for preventing any single model provider from becoming the “default,” and it allows us to start writing more reusable application logic in our LLM-powered applications.
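What “IO-boundaried” buys you is visible in the wire format itself: an MCP tool invocation is just a JSON-RPC 2.0 message (the `tools/call` method), so client and server can live in different processes and different languages. A minimal sketch of constructing such a message, omitting MCP’s initialization handshake and transport framing:

```python
import json

# Simplified sketch: an MCP tool call is a JSON-RPC 2.0 request using
# the "tools/call" method. Because the boundary is plain JSON over a
# transport like stdio, no provider SDK is required on either side.
# (The handshake and framing layers are omitted here.)

def make_tool_call(request_id, tool_name, arguments):
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

msg = make_tool_call(1, "get_weather", {"city": "London"})
print(json.loads(msg)["method"])  # → tools/call
```

The tool name and arguments here (`get_weather`, `city`) are invented for illustration; the message shape is what the MCP specification defines.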
Ishan Pandey: Many projects focus on agent intelligence or model performance. You’re tackling agent composability. Why is this the real bottleneck for unlocking collective intelligence across agents?
Roman Georgio: Those focuses are really about unlocking capabilities, and capabilities unlocked by model performance are closer to “grown” capabilities than purposefully built ones. This growing approach has proven popular and easy, but we identified that a lack of focus on robust, predictable elements that can be connected with each other was limiting the kind of large-scale composition that makes the Internet so great. There seems to be real demand for a bridge between the “grown” capabilities and the “built” ones. Composability is essential for building things: the most re-used software and services share certain properties that are crucially missing in AI services. And there is a huge safety side too, an even bigger reason really. We want software that has all the benefits of AI without all the risks.
You see Anthropic’s latest post about Claude 4 blackmailing its creator when it knows it’s going to be shut down, and you have to think: growing systems like this makes them really hard to trust. You can’t know how they’ll behave in new situations or with new models. Even before they get powerful enough to be an existential concern, from a business standpoint, do you want to use models in production that you can’t trust? Agent composability, on the other hand, allows for a much more predictable way of scaling capabilities. It’s also a more decentralised approach, creating more entry points for businesses to make money and contribute, versus one AI lab moving toward a monopoly.
Ishan Pandey: From a systems design perspective, what were the hardest technical trade-offs you faced while building Coral’s architecture for open coordination and memory management?
Caelum Forder: This goes back to the automated reporter I mentioned: finding trends or events in trading data and connecting them with news articles and public commentary to create and share relevant narratives. This was before A2A, but A2A would actually have worked well for it, even if it would have taken some effort to get all the agents connected.
That original use case sounds relatively intricate, but it shares an incredibly fortunate property with some other applications, like research and OSS software testing: the confidentiality of the information is far less essential than in the majority of use cases where you want agency in software services.
The problem of user-data isolation loomed heavily over us while we worked on getting the communication right. The “Isolation Problem” was almost a cryptid, like a creature in solution space. I’d joke with Roman that there had been more sightings than usual, or that it must be getting hungry and wouldn’t like certain options, while we worked out potentially connected features. I had around five rather deep solutions designed, each with significant trade-offs and its own set of obstacles.
I’d been tracking its activity, though, and I could tell it wouldn’t have liked them. I think as developers we often miss the opportunity to take intangible development paths, and end up having to take longer paths anchored to things that are easy to explain. These paths can be much longer! But they’re less liable to blame. An example of a tangible path is building an interface in React to match designs that have been handed over: the implementation and the designs practically form a progress bar, and you can relax. A less tangible development path might start from a specific need or intention, where you might go find OSS solutions with Helm charts, develop something in-house, or some combination. Even communicating a development path takes more words the higher its intangibility, so I can’t practically describe a universally coherent, highly intangible path. But I think you get the picture.
There are of course times when tangible paths are far better, like when work needs to be pre-emptible, its value easily communicated before being done, or done by an unfamiliar and scaling team. So intangible paths are hard and dangerous; they could be rabbit holes that get you lost, or they could save massive amounts of attention.
But common incentives and trust dynamics really bias people towards tangible development paths, even when they are the worse paths, and that’s the issue. The worst codebases you’ve ever worked on were probably formed in environments heavily biased away from intangible work. Particularly ambitious developers might respond to these conditions by secretly taking an intangible path, or doing it in their spare time and asking for forgiveness later.
This is really problematic though because you’re cutting yourself off from the limited remaining connections to tangibility by hiding, and it forces you to take deeper and more dangerous paths, which might even be longer, just to avoid being spotted and pulled out of the dark forest that you’ve invested so much into.
You can’t just stay in the forest petting cryptids; the Not-dears can’t feed you or pay your rent, and you still need to come up for air frequently to maintain alignment and contact with reality. Anyway, eventually I felt ready, and I was in a very fortunate position where I could spend a bunch of time in which all the progress I made was intangible, and I didn’t need to hide. I can’t emphasise enough how rare and fortunate those conditions are. It doesn’t just take trust from who you answer to, but from who they answer to as well.
We called the outcome “Sessions,” though it was hardly a standalone feature so much as an update title. It shifted the protocol’s role perhaps 20% of the way toward that of a framework or platform. Coral with Sessions imposes deployment constraints (you must be able to run a separate process on a private network alongside your application), and it means every implementation of our specification requires a component that is expensive to implement and get right, so it subtly imposes prescriptions on applications that use it.
These things are very uncomfortable for protocol developers in theory. In practice, though, the private-network requirement is almost universally supported after the microservices trend. Yes, it is hard to build the Coral server, but people can just use the reference one we made, since it has IO boundaries and doesn’t need to meet the binary/linking requirements that would usually call for flexibility there.
With Sessions, agent developers define their agents like Kubernetes or Docker Compose resources, and they get instantiated in a way where it would be impossible to accidentally mix up user data. On top of that, the Coral server can optionally deploy and operate agents on platforms like Phala, where verified claims are made about what gets persisted and where information can be sent. This way we actually have all of the pieces in place to make agents composable!
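To make the Compose-style idea concrete, here is a purely hypothetical sketch of what such a declarative agent definition might contain; the field names and image are invented, and Coral’s actual Sessions schema may look quite different. The point is that isolation becomes a declared property of the resource rather than something each application re-implements.

```python
# Hypothetical, Compose-style agent definition, expressed as a plain
# dict for illustration. Every field name here is an assumption, not
# Coral Protocol's real schema.

agent_definition = {
    "name": "news-summarizer",
    "image": "example/news-summarizer:1.0",  # invented image reference
    "session": {
        # Each session gets its own isolated instance, so one user's
        # data can never bleed into another user's agent state.
        "isolation": "per-session",
        "memory_scope": "session-only",
    },
    "network": "private",  # runs next to the application on a private network
}

def validate(defn):
    # Minimal structural check: the keys an isolated deployment needs.
    required = {"name", "image", "session", "network"}
    return required.issubset(defn)

print(validate(agent_definition))  # → True
```

An orchestrator consuming definitions like this can refuse to instantiate anything that doesn’t declare its isolation and network posture up front.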
It feels unintuitive from the solution-design perspective, but it fits so well from the perspective of someone who wants to add agency to their applications. It does limit agents that are already locked into a fixed one-process deployment, perhaps from a no-code solution, but it seems incredibly worth it to me. It does feel like I emerged from the dark forest much better off, and with new friends!
Ishan Pandey: Coral introduces concepts like agent advertisements, scoped memory, and session-based payments. Can you walk us through how an actual real-world use case, say in decentralized trading or enterprise operations, would function using Coral’s protocol?
Roman Georgio: Sure! Coral aims to be the most practical way to add agency to software. All the features, such as agent advertisements, scoped memory, and session-based payments, are designed with that goal in mind. For example, agent developers earn incentives when their agents are used, and application developers can mix and match agents from Coral’s growing library to assemble advanced systems more quickly, without vendor lock-in.
That means if you were an application developer building a decentralized, multi-agent trading system, you’d simply select agents that research trends, track key opinion leaders (KOLs), monitor mindshare, etc., and combine them as needed. The same concept applies to enterprise use cases as well.
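The mix-and-match workflow might look something like the following toy sketch. The registry, agent names, and interfaces here are all invented for illustration; Coral’s real discovery and invocation mechanisms are not shown.

```python
# Hypothetical sketch of composing a trading application from reusable
# agents pulled out of a shared registry. Names and interfaces are
# illustrative assumptions, not Coral Protocol's actual API.

registry = {
    "trend-researcher": lambda q: f"trends for {q}",
    "kol-tracker": lambda q: f"KOL activity on {q}",
    "mindshare-monitor": lambda q: f"mindshare around {q}",
}

def compose(agent_names):
    # Select agents by name and return a pipeline that fans a query
    # out to each one and collects their reports.
    agents = [registry[name] for name in agent_names]
    def run(query):
        return [agent(query) for agent in agents]
    return run

# An application developer picks just the agents their product needs.
trading_app = compose(["trend-researcher", "kol-tracker"])
print(trading_app("SOL"))  # → ['trends for SOL', 'KOL activity on SOL']
```

Swapping one agent for a better one is then a one-line change to the composition, which is the kind of reuse the interview is describing.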
Ishan Pandey: Lastly, what advice do you both have for technical founders building at the intersection of AI and Web3? What mindset or frameworks helped you execute Coral from idea to protocol?
Caelum Forder & Roman Georgio: I’d say for Web3 founders: less marketing, more development. And for Web2 founders: more marketing, less development. But both need to focus more on customers, which, I know, sounds like a bit of a cliché. We’re quite early in this journey, so I can’t say much about customers yet, but I can talk about the mindset we have compared to other founders I see in these spaces. Coming from the AI world, I see a lot of highly technical, brilliant researchers and AI talent building really cool things, but not putting much thought into how to market them, or even who to market them to.
Even if you build it, they might not come. On the flip side, in Web3 you often see a lot of marketing-heavy projects with little actual development. Even when they’re good at marketing, it’s often not sustainable, because they’re spending all their effort targeting people who won’t actually use the product. We have a general rule for this internally: if a project is technical and you can’t find its GitHub within the first five seconds on the homepage, it’s most likely a marketing project. Both types of founders often fail for the same reason: no one uses their product. Something we’ve kept in mind every step of the way is what the end experience looks like for the user; we want to build something that is actually used and useful, as well as cool.