Blog

Engineering notes.

Benchmarks, architecture writeups, and the occasional opinionated rant. Written by the people who built the software, for people who want to understand how it actually works.

Engineering April 4, 2026 8 min read
From Engine to Ecosystem: Mnemosyne in Claude Code and Local LLMs

A retrieval engine is only useful if it's where you work. Mnemosyne now ships as three PyPI packages: the engine, an MCP server for Claude Code and Cursor, and a zero-config Ollama bridge for fully local code search. Same engine, three delivery vectors, zero cloud required.

Engineering April 2, 2026 10 min read
How We Designed Mnemosyne: Six Retrieval Signals, One Engine, and Why Architecture Matters

Most LLM code retrieval tools rely on one or two signal types. Mnemosyne fuses six — BM25, TF-IDF, symbol matching, usage frequency, predictive prefetch, and optional embeddings — through Reciprocal Rank Fusion. The architecture, the publication timeline, and a feature-by-feature comparison.
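
For readers unfamiliar with Reciprocal Rank Fusion, here is a minimal sketch of how several per-signal rankings can be merged into one list. The file names and signal lists are hypothetical illustrations, not Mnemosyne's actual API:

```python
def rrf_fuse(rankings, k=60):
    """Reciprocal Rank Fusion: merge several ranked lists into one.

    rankings: list of ranked lists of document ids (best first).
    Each document scores sum(1 / (k + rank)) over the lists it appears in;
    k=60 is the constant from the original RRF paper.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical per-signal rankings (e.g. BM25, symbol match, usage frequency)
bm25 = ["auth.py", "db.py", "util.py"]
symbols = ["db.py", "auth.py"]
usage = ["auth.py", "util.py"]

print(rrf_fuse([bm25, symbols, usage]))  # "auth.py" wins: ranked highly by two signals
```

Because RRF only looks at ranks, not raw scores, it lets heterogeneous signals like BM25 and symbol matching be combined without score normalization.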

Engineering April 2, 2026 7 min read
Semantic Retrieval vs Grep: 2.4x Faster, 5.6x Fewer Tokens

Same question, same codebase, same LLM — two search strategies. Mnemosyne semantic retrieval was 2.4x faster, used 5.6x fewer tokens, and cost 4.2x less, with equivalent answer quality. A head-to-head benchmark with full cost analysis.

Engineering March 28, 2026 6 min read
How We Cut LLM Context Waste by 74% — and Open-Sourced the Tool

We built Mnemosyne to stop burning tokens on irrelevant code. A real benchmark against an 844-file production codebase shows 74% token reduction, 99% faster queries, and equivalent answer quality. Zero dependencies. Apache-2.0.

Case Study March 19, 2026 8 min read
Building for Both Sides of the Compliance Conversation

Regulations define what companies must do. Privacy policies describe what companies say they'll do. This case study describes how we built RuleBrief and PrivacyPeep to address both sides of that gap — and the shared design principles behind explainability, local-first processing, and published methodology.