Letta's next phase

March 16, 2026

Every closed foundation lab today is moving towards developing not just models, but also agents: the harness around models that enables computer use, memory, skills, subagents, and failure recovery.

The most powerful harnesses today leverage computer use, injecting agents into environments where they are free to execute scripts and modify state. Computer use has been an inflection point, enabling significantly more general-purpose and powerful agents.

Letta has always been about memory and personalization. We made a bet at the start of this company that memory was the key to self-improving artificial intelligence, and we are now applying that focus to the next generation of agent systems. With Letta Code, we are building memory systems into an open, model-agnostic agent harness.

Letta Code is our flagship

Letta Code is a model-agnostic agent harness with persistent memory. It gives underlying language models frontier "agent" capabilities - computer use, skills, subagents, transparent memory systems, and deployment paths - that are not tied to a single lab or model provider.

The closed labs are all moving in the same direction. OpenAI is building computer use directly into its Responses API. Anthropic has Claude Code and the Agent SDK. Google has Gemini CLI and a handful of other agent API/SDK products.

We think the world needs a different option: Letta Code and the Letta Code SDK as an open, extensible runtime that works with every model. You own the memory. You choose the model. You can see how the system works, down to the exact input and output tokens of the LLM itself.

What gets better

Letta is focusing on the primitives that allow anyone to deploy stateful agents with real memory that isn't locked down to a single provider.

Memory moves from specialized memory tools that edit records in a database to general-purpose computer use tools like bash that operate on memory projected into git-backed files (context repositories) - aka "MemFS".
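
In practice, this means a memory write becomes an ordinary file edit plus a commit, with git supplying history and rollback for free. A minimal sketch of the pattern, with an illustrative layout (the file names here are assumptions, not the actual MemFS schema):

```shell
#!/bin/sh
# Sketch: agent memory as plain files in a git-backed context repository.
# File names and layout are illustrative, not the real MemFS schema.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "agent@example.com"
git config user.name "agent"

# Persona and user notes live as ordinary files the agent can read and edit.
mkdir -p memory
echo "The user prefers concise answers." > memory/user.md
git add -A && git commit -qm "initial memory"

# A memory write is just a file edit plus a commit -- no special tool needed.
echo "The user is working in TypeScript." >> memory/user.md
git add -A && git commit -qm "note the user's language"

# Every past memory state stays recoverable from history.
git rev-list --count HEAD
```

Because memory is just files under version control, the same bash tools an agent already uses for code also work for memory, and every change is auditable.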

Sleep-time compute gets more powerful. We pioneered the idea that agents should reflect, consolidate, and improve automatically. Now those workflows move client-side, where they can use the same computer, tools, and context.

Agent orchestration moves from pure server-side messaging to dynamic subagents and skills. Work can now be delegated, executed, and composed across agents.

Skills become the primary way to package and reuse agent capabilities. They are composable, efficient, and better aligned with how agents learn over time.
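
One way to picture a skill is as a self-describing directory of files that an agent discovers by searching the filesystem. The layout and frontmatter fields below are assumptions for illustration, not Letta's actual skill format:

```shell
#!/bin/sh
# Sketch: a skill packaged as a directory with a self-describing SKILL.md.
# The layout and frontmatter fields are illustrative assumptions.
set -e
dir=$(mktemp -d)
mkdir -p "$dir/skills/release-notes"
cat > "$dir/skills/release-notes/SKILL.md" <<'EOF'
---
name: release-notes
description: Draft release notes from a range of git commits.
---
1. Run git log to list the changes since the last tag.
2. Group the commits into features, fixes, and chores.
3. Write a one-line summary for each group.
EOF

# Discovery is plain file search: list the available skills by name.
ls "$dir/skills"
```

Packaging capabilities as files means skills travel with the context repository and can be versioned, diffed, and shared like any other code.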

The Letta Code SDK makes these systems deployable at scale. The core API exposes the fundamental primitives around the agent. The SDK lets developers build on an open agent harness that not only delivers frontier coding and computer-use performance, but is also memory-first: designed for agents that actually learn and grow over time, and whose lifespans far exceed that of any underlying model.

What we are leaving behind

Many of our older features were built for a world where computer use was not the primary mechanism for agent action. They served their purpose - our original server-side agent pattern was designed to support a limited scope of what agents could do. Agent capabilities have grown far beyond our original design, and we are refocusing Letta around supporting and expanding those capabilities.

We are sunsetting a set of server-side features in favor of stronger client-side and runtime-native replacements:

  • Letta Filesystem becomes actual filesystem access, context repositories, and computer use.
  • Legacy server memory tools like core_memory_replace will be removed in favor of straightforward filesystem operations on git-backed context repositories.
  • Templates will be replaced by versioned Letta Code SDK and community tooling like lettactl.
  • Identities move to the application layer using tags.
  • Server-side MCP integrations give way to client-side skills.
  • Server-side sleep-time agents will be replaced by a client-side subagent system.
  • Hardcoded server-side multi-agent tools give way to subagents and dynamic agent-discovery skills that build on general-purpose API patterns.
  • Tool rules (which constrain agent actions based on hard-coded rulesets) are deprecated to avoid inhibiting frontier capabilities.

Transition path

We are focusing Letta on the execution model that best supports frontier agents. This shift gives agents more capability, more transparency, and more room to improve over time. We will make the transition practical with migration guides, office hours, direct support in Discord, Ezra, and clear timelines for every immediate and phased change.

Some features, such as tool rules and legacy memory tools, are deprecated immediately; larger features, such as templates and Letta Filesystem, will be deprecated by the end of April.

If you have questions about these changes or how they affect your systems, we want to hear from you.

The mission

Letta builds agents that learn. Agents with persistent memory, real computer access, and the infrastructure to improve from their own lived experience and work. Letta Code is the runtime that brings these together: git-backed memory, skills, subagents, and deployment that works across every model provider.

Start building today.
