7 posts tagged with "LLM Wiki"

Notes and writing about LLM-maintained knowledge systems.

Compiled, Not Retrieved: Why Pre-Built Knowledge Is the Real AI Advantage


· 12 min read

Everyone building with AI is chasing the same thing: give the model better context so it gives better answers.

The race has a clear frontrunner. Retrieval-augmented generation (RAG) is the default answer: you store your documents, embed them, and at query time retrieve the most relevant chunks to stuff into the prompt. The model gets context. The answer improves. The architecture seems solved.
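The embed-retrieve-prompt loop can be sketched in a few lines. This is a toy: it uses word-count vectors and cosine similarity in place of the learned embedding model and vector store a real RAG system would use, and the chunk texts and query are illustrative, not from any actual corpus.

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy "embedding": a bag-of-words count vector.
    # Real systems use a learned embedding model instead.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    # Rank stored chunks by similarity to the query, keep top k.
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

chunks = [
    "RAG retrieves document chunks at query time.",
    "Compilation structures knowledge ahead of time.",
    "Cats sleep most of the day.",
]

# Stuff the retrieved chunks into the prompt as context.
context = retrieve("how does retrieval at query time work", chunks)
prompt = "Context:\n" + "\n".join(context) + "\n\nQuestion: ..."
```

The point of the sketch is the shape of the pipeline, not the similarity function: all the knowledge work happens at query time, on raw fragments.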

But there is a second architecture that gets far less attention, even though it produces better long-term results in the domains that matter most: personal knowledge, research, decision support, and publishing.

That architecture is compilation: doing the knowledge work ahead of time, so that the context the model receives is not raw retrieved fragments but structured, reviewed, and maintained interpretation.

The difference between these two architectures is not a technical detail. It is a strategic fork that determines whether your AI-augmented knowledge system gets smarter over time or plateaus at retrieval quality.

Why a Private Wiki Beats Note Chaos


· 1 min read

A lot of people collect notes. Far fewer maintain a knowledge system.

The difference is not how much you save. It is whether the material keeps becoming more useful over time.

Launching Cherry Brain


· 1 min read

Cường Nghiêm starts with a simple idea: keep raw knowledge private, let the wiki compound over time, and only publish the parts worth sharing.

This site is the public layer of that system.