As the line between AI automation and autonomous agency blurs, a new technical frontier is emerging: trusted, verifiable agents that can act independently across ecosystems. We sat down with Matthieu Jung, Product Marketing Manager at iExec, to explore what this shift means for the future of intelligent infrastructure, what problems it solves, and how developers and companies should be thinking differently about agent-based computation.
Ishan Pandey: Thanks for joining us. To start, can you explain what “Trusted AI Agents” actually are in your framework, and how they differ from the typical AI agents we see in automation or LLM-based products?
Matthieu Jung: We all know AI agents are the new interface. They change how we create, build, and collaborate. But to scale, agents need one thing: trust. Privacy is non-negotiable. Agents work with sensitive data: private prompts, crypto assets, wallets. Without privacy there’s no trust, and without trust agents don’t scale. That’s why Trusted AI Agents run entirely inside secure enclaves like Intel TDX. Our Confidential Computing stack keeps their logic, data, and actions private, even from the host, while generating verifiable proofs of execution. Unlike typical LLM-based agents that rely on central APIs and expose user data, iExec provides the trust layer so agents can act autonomously with full confidentiality and proof.
Ishan Pandey: The collaboration between iExec and Eliza OS seems to merge privacy infrastructure with autonomous computation. How are you technically achieving both verifiability and confidentiality? Aren’t they often in conflict?
Matthieu Jung: That’s a great question, and it’s central to what makes Trusted AI Agents different. The tension between confidentiality and verifiability is real in most architectures, but not in ours.
With the ElizaOS x iExec stack, we’re leveraging Trusted Execution Environments (TEEs), specifically Intel TDX, to isolate not just the agent logic built with ElizaOS, but also the AI model itself, the one used to train or fine-tune the agent. This is crucial. It means the entire runtime environment, including both the logic and the sensitive model parameters, is protected against external access, even from cloud hosts or system administrators. At the same time, these enclaves generate cryptographic proofs that certify the integrity of the execution. This is how we achieve verifiability: anyone can verify what was executed, without seeing the model, the data, or the internal logic. There’s no need to choose between exposing the model for auditability and hiding it at the cost of trust. We provide both, thanks to the architecture of Confidential Computing. So to sum up: TDX enclaves bring confidentiality (by securing the model and logic) and verifiability (by producing proofs of correct execution). That’s the foundation of Trusted AI Agents.
Ishan Pandey: In a world where many AI startups rely heavily on centralized APIs and closed models, what does it take to build autonomous agents that are decentralized by design?
Matthieu Jung: It takes a new stack that adds private, verifiable execution to APIs. With Confidential Computing, iExec makes it easy to prove every action on chain. iExec just launched the iApp generator, a dev tool that lets developers build and deploy confidential apps in minutes. Our team is also launching MCP (Model Context Protocol) servers optimized for Trusted Agents so they can scale while remaining verifiable.
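To make the flow concrete, here is a minimal sketch of what the body of such a confidential app could look like. It assumes iExec’s documented output convention (an IEXEC_OUT directory plus a computed.json manifest); the file name, the environment fallback, and the “processing” itself are placeholders rather than actual iApp generator output.

```typescript
import { promises as fs } from "node:fs";
import path from "node:path";

async function main(): Promise<void> {
  // Assumed convention: the worker exposes an output directory via IEXEC_OUT.
  const outDir = process.env.IEXEC_OUT ?? "/iexec_out";

  // Confidential work happens here, inside the enclave: the input
  // (e.g. a private prompt passed as an argument) stays in protected memory.
  const privatePrompt = process.argv[2] ?? "";
  const result = `processed ${privatePrompt.length} characters`;

  // Write the result plus the computed.json manifest pointing at it,
  // so the worker knows which file is the deterministic output.
  const resultPath = path.join(outDir, "result.txt");
  await fs.writeFile(resultPath, result);
  await fs.writeFile(
    path.join(outDir, "computed.json"),
    JSON.stringify({ "deterministic-output-path": resultPath })
  );
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```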
Ishan Pandey: How does the iExec infrastructure help agents prove their work, state, or logic to external parties?
Matthieu Jung: iExec runs agents inside Intel TDX enclaves that generate signed proofs of the code, inputs, and outputs. Those proofs can be shared on chain so anyone can verify the enclave identity and confirm the agent did exactly what it promised, without ever exposing the private data or code.
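As a rough illustration of the verification side, the sketch below checks a hypothetical proof object against an enclave signing address that a verifier would obtain from remote attestation. The field names and hashing scheme are illustrative, not iExec’s actual proof format; it uses ethers for hashing and signature recovery.

```typescript
import { keccak256, getBytes, verifyMessage } from "ethers";

// Hypothetical shape of an execution proof; these fields are illustrative,
// not iExec's actual schema.
interface ExecutionProof {
  codeHash: string;   // keccak256 of the agent's code image
  inputHash: string;  // keccak256 of the (still private) inputs
  outputHash: string; // keccak256 of the outputs
  signature: string;  // signature produced inside the enclave
}

// Recompute the digest the enclave is expected to have signed and check
// that the recovered signer matches the enclave key published via attestation.
function verifyExecutionProof(proof: ExecutionProof, enclaveAddress: string): boolean {
  const digest = keccak256(
    proof.codeHash + proof.inputHash.slice(2) + proof.outputHash.slice(2)
  );
  const signer = verifyMessage(getBytes(digest), proof.signature);
  return signer.toLowerCase() === enclaveAddress.toLowerCase();
}
```

The same check could live in a smart contract, which is what makes the proofs shareable on chain: the verifier needs only public hashes and the enclave’s address, never the underlying data or code.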
Ishan Pandey: Let’s talk about risk. What’s the potential downside of allowing autonomous AI agents to operate across chains or applications? Are we ready for truly composable intelligence?
Matthieu Jung: Allowing agents to roam across chains or apps opens up new attack surfaces and unexpected interactions. A bug in one protocol could cascade into another. Plus, without clear standards, proving liability or reversing bad actions gets tricky. That’s why trust and governance are still so important when it comes to decentralized agents and automating actions on the blockchain.
Ishan Pandey: Can you walk us through a hypothetical example or use case where a Trusted AI Agent is deployed in the wild, what it does, how it proves its execution, and why it’s better than traditional models?
Matthieu Jung: Imagine an AI agent that trades for you: managing DeFi strategies, reading signals, placing trades, and adapting portfolios. It needs to interact with sensitive data and assets like your private key or wallet. With iExec, the agent’s logic runs confidentially in a TEE. Traders can protect their datasets, share access securely with a model, and even receive confidential alerts. Every trade and every decision is executed securely and verifiably. Traders can even monetize their private trading data. You don’t have to trust the dev or the infra.
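A stripped-down sketch of how such an agent’s inner loop could be structured is shown below. The signal type, threshold, and key handling are hypothetical; in a real deployment the wallet key would be provisioned to the enclave as a protected secret rather than read from an environment variable.

```typescript
import { Wallet, JsonRpcProvider, parseEther } from "ethers";

// Hypothetical signal: stands in for whatever model or data feed
// the agent actually consumes inside the enclave.
type Signal = { asset: string; score: number };

async function runAgentOnce(signal: Signal): Promise<void> {
  // Placeholder key handling: in the trusted setup the private key is
  // only ever decrypted inside the TEE, never exposed to the host.
  const provider = new JsonRpcProvider(process.env.RPC_URL);
  const wallet = new Wallet(process.env.AGENT_PRIVATE_KEY!, provider);

  // The decision logic stays confidential: nobody outside the enclave
  // sees the threshold or the signal the agent acted on.
  if (signal.score > 0.8) {
    const tx = await wallet.sendTransaction({
      to: "0x0000000000000000000000000000000000000000", // placeholder DEX router
      value: parseEther("0.01"),
    });
    await tx.wait();
    // A hash of this decision and the resulting transaction would be
    // folded into the enclave's signed proof of execution.
    console.log(`trade submitted: ${tx.hash}`);
  }
}
```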
Ishan Pandey: Finally, for builders and investors watching the Trusted AI Agent narrative unfold, what are the signals they should be paying attention to over the next 12 months?
Matthieu Jung: We already know agents are the new interface. And unfortunately, that means we will start seeing leaks of agent prompts and data exposures, and that will underscore why private execution matters. Another innovation that will take agents to the next level is MCP, the Model Context Protocol. It lets agents securely share encrypted snapshots of their working state as they move and iterate across multiple apps. iExec has already deployed an MCP server. These trends will reveal who’s truly building decentralized, privacy-first agents versus those still tied to closed APIs.
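As a rough sketch of the “encrypted snapshot” idea, the example below seals and reopens an agent’s working state with AES-GCM. It is a generic illustration, not MCP’s wire format or iExec’s implementation, and in practice the key would be derived or released inside the enclave rather than generated ad hoc.

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Encrypt an agent's working state so it can be handed to another app
// or server without exposing the contents in transit or at rest.
function encryptState(state: object, key: Buffer) {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([
    cipher.update(JSON.stringify(state), "utf8"),
    cipher.final(),
  ]);
  return { iv, ciphertext, tag: cipher.getAuthTag() };
}

function decryptState(
  snapshot: { iv: Buffer; ciphertext: Buffer; tag: Buffer },
  key: Buffer
): object {
  const decipher = createDecipheriv("aes-256-gcm", key, snapshot.iv);
  decipher.setAuthTag(snapshot.tag);
  const plaintext = Buffer.concat([
    decipher.update(snapshot.ciphertext),
    decipher.final(),
  ]);
  return JSON.parse(plaintext.toString("utf8"));
}

// Hypothetical usage: a 32-byte key that a TEE would normally protect.
const key = randomBytes(32);
const snapshot = encryptState({ portfolio: ["ETH"], lastSignal: 0.82 }, key);
console.log(decryptState(snapshot, key));
```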