How MCP Gives Your AI Tools Persistent Memory
You spent an hour explaining your auth architecture to Claude. Tomorrow it won't remember any of it. Here's how to fix that with one npm install and a three-line config.
The problem is architectural
Every AI coding tool -- Claude Code, Cursor, Windsurf, Gemini CLI -- starts each session with a blank slate. There's no shared memory between sessions, no way to carry context across tools, and no persistence when you switch machines.
You end up re-explaining the same things: "We use Zustand, not Redux." "Auth middleware is in src/middleware/auth.ts." "Never commit to main directly." Over and over.
The root cause: these tools have no memory layer. They read your code, they read your prompts, but they have nowhere to store what they learn about your project and preferences.
What MCP is (30-second version)
Model Context Protocol (MCP) is an open standard that lets AI tools connect to external services. Think of it as a USB port for your AI -- you plug in an MCP server, and the AI gets new capabilities (tools, resources, context) without any changes to the AI itself.
Any tool that supports MCP -- Claude Code, Cursor, Windsurf, Cline, and others -- can use any MCP server. One protocol, many tools. This is what makes it possible to build a memory layer that works across all of them.
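Under the hood, MCP messages are JSON-RPC 2.0. Here is a TypeScript sketch of the request an MCP client sends when it invokes a server-provided tool -- the tool name and arguments are illustrative examples, not a fixed part of the protocol:

```typescript
// Sketch: the JSON-RPC 2.0 envelope an MCP client uses to call a tool
// exposed by a connected server. Only the method and params shape come
// from the MCP spec; the tool name and arguments below are examples.
interface ToolCallRequest {
  jsonrpc: "2.0";
  id: number;
  method: "tools/call";
  params: { name: string; arguments: Record<string, unknown> };
}

const request: ToolCallRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "memoir_recall",                          // tool provided by the server
    arguments: { query: "auth setup architecture" }, // example arguments
  },
};

console.log(JSON.stringify(request));
```

Because every MCP client speaks this same envelope, a server written once works in any of the tools above.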
How memoir uses MCP to solve this
memoir is an MCP memory server that exposes six memory tools to your AI: memoir_recall, memoir_remember, memoir_list, memoir_read, memoir_status, and memoir_profiles.
When you connect memoir via MCP, your AI can search your saved memories, read full context from previous sessions, and save new knowledge for the future. All of this happens automatically inside the conversation -- no copy-pasting, no manual files.
The memories sync across machines via encrypted cloud storage. So the context you build up in Claude Code on your laptop is available in Cursor on your desktop, or Gemini CLI on your server.
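memoir's exact encryption scheme isn't documented here, but the encrypt-before-upload idea can be sketched with Node's built-in crypto. Everything below -- the function names, AES-256-GCM as the cipher, and the omitted key management -- is an assumption for illustration:

```typescript
import { randomBytes, createCipheriv, createDecipheriv } from "node:crypto";

// Sketch of encrypt-before-sync: plaintext memories never leave the
// machine. Cipher choice and function names are illustrative, not
// memoir's actual implementation.
interface EncryptedBlob { iv: Buffer; tag: Buffer; data: Buffer }

function encryptMemory(plaintext: string, key: Buffer): EncryptedBlob {
  const iv = randomBytes(12);                           // unique nonce per memory
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const data = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return { iv, tag: cipher.getAuthTag(), data };        // only this is uploaded
}

function decryptMemory(blob: EncryptedBlob, key: Buffer): string {
  const decipher = createDecipheriv("aes-256-gcm", key, blob.iv);
  decipher.setAuthTag(blob.tag);                        // verifies integrity too
  return Buffer.concat([decipher.update(blob.data), decipher.final()]).toString("utf8");
}
```

The point is the ordering: encryption happens locally, so the sync service only ever stores ciphertext.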
Setup: 3 minutes, 2 steps
Step 1: Install and initialize
npm install -g memoir-cli
memoir init
memoir init creates your local memory store and walks you through authentication. Your memories are encrypted before they leave your machine.
Step 2: Add the MCP server to your AI tool
For Claude Code, add this to ~/.mcp.json:
{
  "mcpServers": {
    "memoir": { "command": "memoir-mcp" }
  }
}
For Cursor, add to .cursor/mcp.json:
{
  "mcpServers": {
    "memoir": { "command": "memoir-mcp" }
  }
}
Same config for both. That's the power of MCP -- one server, every tool.
That's it. No API keys, no Docker, no background processes. memoir-mcp runs as a stdio server that your AI tool manages automatically.
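A stdio MCP server is simple in outline: the AI tool spawns the process, writes newline-delimited JSON-RPC requests to its stdin, and reads responses from its stdout. Here is a sketch of that dispatch step -- the handlers are placeholders, not memoir's internals:

```typescript
// Sketch of one request/response turn in a stdio MCP server.
// The handler table below is illustrative; a real server would
// implement the full MCP handshake and tool schemas.
type Handler = (params: unknown) => unknown;

const handlers: Record<string, Handler> = {
  "tools/list": () => ({
    tools: [{ name: "memoir_recall" }, { name: "memoir_remember" }],
  }),
};

function handleLine(line: string): string {
  const req = JSON.parse(line);
  const handler = handlers[req.method];
  const body = handler
    ? { result: handler(req.params) }
    : { error: { code: -32601, message: "Method not found" } }; // JSON-RPC "method not found"
  return JSON.stringify({ jsonrpc: "2.0", id: req.id, ...body });
}
```

Because the transport is just stdin/stdout, the AI tool can start and stop the server itself -- which is why no daemon or background process is needed.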
Real example: memoir_recall in Claude Code
Here's what a real session looks like after memoir is connected. You ask Claude a question about your project, and it automatically searches your memories before answering:
# You ask Claude about your project's auth setup
you: how does auth work in this project?

# Claude automatically calls memoir_recall
memoir_recall("auth setup architecture")
Found 3 memories matching "auth"

claude: Based on your previous sessions, this project uses JWT auth with
refresh tokens. The middleware lives in src/middleware/auth.ts, and you
chose Zustand over Redux for auth state management (decided March 12).
The refresh token rotation was added in the session on March 15 to fix
the logout-on-deploy issue.
Claude didn't re-read your entire codebase to figure that out. It searched your memoir memories and found the context from previous sessions -- including why decisions were made, not just what the code looks like today.
And saving new context is just as seamless:
# Later in the same session
you: remember that we switched from bcrypt to argon2 for password
hashing, and the reason was the timing attack vulnerability in our
bcrypt version.

# Claude automatically calls memoir_remember
memoir_remember("security", "Switched from bcrypt to argon2 for password
hashing due to timing attack vulnerability in bcrypt v5.0.1. Changed
March 27, 2026.")
Saved to security memories
Next session -- in any tool, on any machine -- that context is there.
What makes this different from CLAUDE.md files
Markdown instruction files (CLAUDE.md, GEMINI.md, .cursorrules) are great for project-level rules. But they have limits:
- They don't sync across machines unless you commit them to the repo
- They don't work across tools -- your CLAUDE.md is invisible to Cursor
- They're static -- your AI can read them but can't write back
- They don't search -- no way to query for specific context
memoir actually syncs those files too (it detects and backs up configs from 11 AI tools). But the MCP layer adds something those files can't do: your AI can search, read, and write to memory in real-time, during the conversation.
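To see why a writable, searchable store differs from a static file, here is a toy in-memory sketch of the remember/recall loop. The class, the category field, and the keyword matching are all simplified stand-ins, not memoir's implementation:

```typescript
// Toy sketch of the read/write/search loop a static CLAUDE.md can't
// offer: the AI can add entries mid-conversation and query them later.
interface Memory { category: string; text: string; savedAt: Date }

class MemoryStore {
  private memories: Memory[] = [];

  // Save a new memory (what memoir_remember does conceptually).
  remember(category: string, text: string): void {
    this.memories.push({ category, text, savedAt: new Date() });
  }

  // Keyword search over saved memories (what memoir_recall does
  // conceptually -- real search would be far more sophisticated).
  recall(query: string): Memory[] {
    const terms = query.toLowerCase().split(/\s+/);
    return this.memories.filter((m) =>
      terms.some((t) => m.text.toLowerCase().includes(t)),
    );
  }
}
```

A markdown file gives the AI one fixed blob to read; a store like this gives it targeted retrieval and the ability to write back what it learns.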
The six MCP tools
Once connected, your AI has access to:
memoir_recall -- Search across all saved memories
memoir_remember -- Save context for future sessions
memoir_list -- Browse all memory files by tool
memoir_read -- Read a specific memory in full
memoir_status -- See which AI tools are detected
memoir_profiles -- Switch between work/personal profiles
Your AI decides when to call these based on conversation context. Ask about something you discussed last week? It calls memoir_recall. Make an important architecture decision? It calls memoir_remember. You don't need to prompt it explicitly -- the tools are available and the AI uses them when relevant.