ember-memory adds automatic retrieval, shared collections, and adaptive ranking so your AI can pick up where you left off across sessions and tools.
Native hook integrations pull relevant context into Claude Code, Gemini CLI, and Codex before each response. No more rebuilding project history by hand.
Use one shared memory across multiple AI tools, or keep private namespaces when you want separation. Switch models without losing the thread.
Retrieval gets better as you work. Vector similarity is blended with access heat, memory connections, and freshness, so active context naturally outranks stale notes.
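The blend above can be pictured as a weighted score. This is a minimal sketch, not ember-memory's actual formula: the weights, the half-life decay, and the saturation of raw counts are all illustrative assumptions.

```python
import math
import time

def blended_score(similarity, heat, connections, last_used_ts,
                  now=None, half_life_days=14.0,
                  weights=(0.6, 0.2, 0.1, 0.1)):
    """Hypothetical ranking blend: similarity plus access heat,
    graph connections, and an exponential freshness decay."""
    now = time.time() if now is None else now
    age_days = max(0.0, (now - last_used_ts) / 86400.0)
    # Freshness halves every `half_life_days` days.
    freshness = math.exp(-math.log(2) * age_days / half_life_days)
    # Squash unbounded counts into [0, 1) so no single signal dominates.
    heat_n = heat / (heat + 1.0)
    conn_n = connections / (connections + 1.0)
    w_sim, w_heat, w_conn, w_fresh = weights
    return (w_sim * similarity + w_heat * heat_n
            + w_conn * conn_n + w_fresh * freshness)
```

With equal similarity, a memory touched an hour ago with some recent activity will outrank one untouched for three months, which is the behavior the blurb describes.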
Organize architecture notes, debugging logs, personal preferences, worldbuilding, and imported chats into collections without turning memory into a junk drawer.
Ingest docs, notes, and exported chats, then generate compact handoff packets so another AI can continue the work with context intact.
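A handoff packet is essentially a compact, structured summary another model can load and continue from. A minimal sketch, assuming a JSON payload; the field names here are illustrative, not ember-memory's actual schema:

```python
import json

def make_handoff_packet(project, summary, memories, open_tasks,
                        max_items=10):
    """Assemble a hypothetical handoff packet: a small, self-describing
    JSON blob with the top-ranked memories and outstanding tasks."""
    return json.dumps({
        "project": project,
        "summary": summary,
        "key_memories": memories[:max_items],  # highest-ranked first
        "open_tasks": open_tasks,
    }, indent=2)

packet = make_handoff_packet(
    "auth-refactor",
    "Migrating session handling to token-based auth.",
    ["Decided on JWT over opaque tokens", "Refresh flow still untested"],
    ["Write refresh-flow integration test"],
)
```

Capping `key_memories` keeps the packet small enough to paste into a fresh context window, which is the point of a handoff.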
Run with Ollama and local storage for a private setup, or swap in cloud embeddings and other vector backends when your workflow needs them.
Install it, launch it from your app menu or Start Menu, manage providers and collections, test hooks, and let it live quietly in the system tray.
The controller installs a proper launcher, uses a tray icon for quick access, and keeps a single running instance instead of spawning duplicate windows.
Use local Ollama by default or add OpenAI, Google, and OpenRouter keys when you want cloud embeddings or model lists.
The CLI Status view detects Claude Code, Gemini CLI, and Codex, installs hooks, and includes self-tests so you can confirm retrieval is firing.
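Under the hood, CLI hooks are just commands the host tool runs at fixed points in a turn. As an illustration, a Claude Code `settings.json` hook that shells out on each user prompt; the `ember-memory retrieve --stdin` command and flag are placeholders, not the actual entry the installer writes:

```json
{
  "hooks": {
    "UserPromptSubmit": [
      {
        "hooks": [
          { "type": "command", "command": "ember-memory retrieve --stdin" }
        ]
      }
    ]
  }
}
```

The self-tests in CLI Status exist precisely because a misconfigured entry here fails silently: the assistant still answers, just without retrieved context.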
Collections, provider settings, and memory data stay on your machine by default. You choose when external providers enter the loop.
It starts with coding workflows, but the same memory layer helps anywhere long-term context matters.
Keep architecture decisions, debugging history, and project context available across sessions so your AI can return to the work without a long warm-up.
Use Claude for architecture, Gemini for exploration, Codex for execution, and keep shared project memory moving with you instead of staying trapped in one tool.
Store lore, character notes, style guides, and prior scenes so creative sessions build on what already exists instead of resetting tone and continuity.
Preferences, reflections, recurring tasks, and long-term notes can stay searchable and reusable without being crammed into a single context window.
Self-host it. Inspect it. Use the whole thing. If teams later want support or custom integration help, that can be a separate conversation.
$0
MIT-licensed and local-first. All core memory features are available without a hosted plan or platform lock-in.
Optional
If your team wants help with deployment, onboarding, or commercial integration work, reach out. The product itself does not depend on a paid tier.
Open the repo, follow the install guide, launch the desktop app, then use CLI Status to install and test retrieval hooks.