April 16, 2026
The Rise of Agentic AI — Why Agents Need Sandboxes
From Chatbots to Agents
2026 is the year AI went from chatting to doing. The shift is fundamental: instead of generating text responses, AI systems now take real actions in real environments. They write code and run it. They browse the web and extract data. They manage files, install packages, and commit to git repositories. These aren't chatbots — they're agents.
The Agent Landscape
The ecosystem has exploded. Every major AI lab now ships an agent product:
- Claude Code (Anthropic) — CLI agent that reads codebases, writes files, runs tests, and manages git. Ships as an npm package and runs autonomously with --dangerously-skip-permissions in sandboxed environments.
- Codex CLI (OpenAI) — Open-source terminal agent that executes shell commands, edits files, and operates under configurable sandbox policies. Built in Rust, supports full-auto mode.
- Devin (Cognition) — Marketed as the first AI software engineer. Handles end-to-end development: planning, coding, debugging, deployment.
- Computer Use Agents — Agents that control browsers and full desktops via screenshots and mouse/keyboard actions. Both Anthropic and OpenAI ship this capability.
Gartner predicts that by 2028, 33% of enterprise software will include agentic AI — up from less than 1% in 2024. The SWE-bench benchmark, which measures agents on real GitHub issues, shows top agents resolving 40-50% or more of issues autonomously. The trajectory is clear.
The Infrastructure Gap
Here's the problem nobody talks about: where do these agents actually run?
An agent that writes code needs to execute that code to test it. An agent that browses the web needs a browser. An agent that installs packages needs a filesystem. An agent that manages infrastructure needs shell access. You can't do any of this safely on your local machine or your production server.
# The agentic AI stack
User prompt
→ LLM (reasoning & planning)
→ Agent (tool selection & execution)
→ Sandbox (isolated compute environment)
The LLM provides intelligence. The agent provides autonomy.
The sandbox provides safety. All three are required.
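The three layers above can be sketched as a minimal loop. This is an illustrative toy, not any vendor's API: fake_llm_plan stands in for the LLM's planning step, and run_in_sandbox stands in for an isolated environment (here it just shells out locally, whereas a production system would route the command into a microVM).

```python
import subprocess

def run_in_sandbox(command: str) -> str:
    # Stand-in for the sandbox layer. A real system would execute this
    # inside an isolated microVM, not on the host.
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    return result.stdout + result.stderr

def fake_llm_plan(prompt: str) -> list[str]:
    # Stand-in for the LLM layer: map a goal to concrete tool calls.
    # A real agent would call a model here.
    return ["echo hello from the sandbox"]

def agent(prompt: str) -> list[str]:
    # The agent layer: take the LLM's plan and execute each step,
    # collecting outputs to feed back into the next planning round.
    return [run_in_sandbox(cmd) for cmd in fake_llm_plan(prompt)]

print(agent("say hello"))
```

The point of the separation: the loop stays the same whichever model plans and whichever sandbox executes.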
Why Existing Solutions Fall Short
Local execution — agents run on the developer's machine. One bad rm -rf or one malicious package and you're dealing with data loss or compromise. Claude Code and Codex both warn about this in their docs.
Docker containers — share the host kernel. Container escapes are found regularly (CVE-2024-21626, CVE-2022-0185). An agent running arbitrary code inside a container is an active threat to the host.
Cloud VMs — secure but slow. Booting an EC2 instance takes 30-60 seconds. At $0.10/hr minimum, idle VMs waiting for agent tasks burn money. And you need to manage the infrastructure.
How e2a Fills the Gap
e2a provides purpose-built infrastructure for agent execution. Every sandbox is a Firecracker microVM — hardware-level isolation with sub-second boot time. The agent gets a full Linux environment with its own kernel, memory, and network. No shared resources with other tenants.
Three agent types ship today: Deidict (our first-party agent), Claude Code (Anthropic), and Codex (OpenAI). Bring your own LLM key. The sandbox is agent-agnostic — swap agents by changing one parameter. The agents bring the intelligence. We bring the infrastructure.
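To make "swap agents by changing one parameter" concrete, here is a hedged sketch of what that could look like. The Sandbox class and its fields are hypothetical stand-ins, not the real e2a SDK; run() just echoes the routing rather than booting a microVM.

```python
from dataclasses import dataclass

@dataclass
class Sandbox:
    # Hypothetical interface: field names are illustrative assumptions.
    agent: str          # e.g. "deidict", "claude-code", or "codex"
    llm_api_key: str    # bring your own model key

    def run(self, task: str) -> str:
        # A real implementation would boot an isolated microVM and hand
        # the task to the selected agent; here we only show the routing.
        return f"[{self.agent}] {task}"

# Swapping agents is a one-parameter change; the task is untouched.
print(Sandbox(agent="claude-code", llm_api_key="sk-...").run("fix the failing test"))
print(Sandbox(agent="codex", llm_api_key="sk-...").run("fix the failing test"))
```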