The Context Layer: Building the Infrastructure for AI Agents
Join our Discord community → https://deco.cx/discord
To build truly autonomous enterprises, you need to manage context with the same rigor used to manage content. In this episode, Guilherme Rodrigues (CEO), Marcos Candeia (CTO), and Tiago Gimenes (Developer) discuss the engineering challenges behind building the Context Management System for AI agents.
We dive deep into the infrastructure required to run agents in production, covering why relying on LLM reasoning alone is brittle and why the industry is shifting toward "Code Execution". We also discuss the architectural decision to move to a cloud-native stack (Docker/Kubernetes) to support self-hosted enterprise needs. Finally, we introduce the MCP Mesh—a governance layer that acts as the "GitHub for Context," allowing teams to manage tools, security, and observability across the organization.
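For listeners unfamiliar with the pattern, here is a minimal TypeScript sketch of what "code execution" looks like in practice: the agent emits one small script that calls its tools deterministically instead of chaining tool calls through free-form LLM reasoning. Note that `callTool`, "billing.listInvoices", and "email.sendReminder" are hypothetical names for illustration only, not the deco or MCP APIs.

```typescript
// A minimal sketch of the "code execution" pattern discussed in the episode:
// instead of the LLM chaining many individual tool calls in its own reasoning,
// the agent generates a short script that calls tools deterministically.
// `callTool` and the tool names below are hypothetical, for illustration only.

type ToolResult = { ok: boolean; data?: unknown; error?: string };

// Hypothetical stand-in for an MCP client's tool-call helper; a real agent
// runtime would forward this request over an MCP transport to the server.
async function callTool(
  name: string,
  args: Record<string, unknown>,
): Promise<ToolResult> {
  console.log(`tool call: ${name}`, JSON.stringify(args));
  return { ok: true, data: [] }; // simulated response for this sketch
}

// Code the agent might generate and run as one unit, rather than reasoning
// through each step and risking a brittle multi-turn chain.
export async function remindOverdueInvoices(customerId: string) {
  const invoices = await callTool("billing.listInvoices", {
    customerId,
    status: "overdue",
  });
  if (!invoices.ok) return { sent: 0, error: invoices.error };

  let sent = 0;
  for (const invoice of (invoices.data as { id: string }[]) ?? []) {
    const result = await callTool("email.sendReminder", { invoiceId: invoice.id });
    if (result.ok) sent++;
  }
  return { sent };
}
```

Because the script is explicit code, it can be reviewed, logged, and replayed, which is part of why this approach is less brittle than multi-step reasoning over individual tool calls.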
——
Links mentioned:
↳ Anthropic on MCP & Code Execution: https://www.anthropic.com/news/model-context-protocol
↳ MCP Apps: https://blog.modelcontextprotocol.io/posts/2025-11-21-mcp-apps/
↳ deco Open Source Repo: https://github.com/decocms/mesh
Our Socials:
https://www.linkedin.com/company/getdeco/
https://www.instagram.com/decocms/
https://x.com/deco_cms