21 posts tagged with "Workflow"

Process notes about capture, editing, and publishing.

Writing to Think vs. Prompting to Receive: Why the Medium Shapes the Mind

· 14 min read

AI can now write better than most people, faster than any person, on almost any topic you name.

This is not hyperbole. Give a current model a topic, an audience, a tone, and a structure — it will produce prose that is clear, coherent, and factually adequate. It will do in fifteen seconds what might take a skilled writer two hours.

The natural conclusion — the one increasingly adopted in workplaces, classrooms, and content operations — is that writing is becoming a delegation task. You think about what you want to say. The AI says it. You review and ship.

This conclusion is wrong.

Not because AI writes poorly. Because the act of writing itself is a thinking process that prompting cannot replace. When you delegate writing to AI, you are not just delegating the production of text. You are delegating the cognitive work that writing performs — and that work is the source of most of writing's value.

This essay is about the difference between writing to think and prompting to receive, why the distinction matters, and how to build both into a workflow that makes you smarter rather than just faster.

The Half-Life of Notes: Why Your Second Brain Decays and What to Do About It

· 16 min read

Most second brains are built with ambition and abandoned with silence.

The pattern is familiar. You discover a note-taking system — Obsidian, Notion, Logseq, a folder of Markdown files. You read about Zettelkasten, PARA, or evergreen notes. You capture diligently for weeks or months. The system fills up. It feels productive.

Then, somewhere around month six, you notice something. You open a note from three months ago and realize you no longer know what it means. The context has evaporated. The source link is dead. The half-formed thought it captured never finished forming — it is just dead.

The system did not fail because you stopped adding to it. It failed because notes have a half-life, and most knowledge systems are designed for accumulation, not maintenance.

This essay is about knowledge entropy — the quiet forces that cause personal knowledge systems to lose value over time — and the deliberate practices that keep a system alive across years, not months.

Builder's Knowledge: Why Shipping Teaches You What Research Cannot

· 13 min read

There is a knowledge gap that AI tools are making wider, not narrower.

It is not the gap between experts and beginners. It is not the gap between people who read and people who do not.

It is the gap between knowing about something and knowing from something. Between analytical understanding and operational understanding. Between the knowledge you can acquire by reading and the knowledge you can only earn by building.

AI tools blur this distinction. They summarize documentation, synthesize research, and explain complex systems with fluency. After a few hours with an AI research assistant, you can feel like you understand how a system works — its architecture, its failure modes, its design trade-offs — without ever having run it.

But that feeling is incomplete. It skips an entire category of knowledge that only comes from shipping and maintaining something real.

This essay is about what that category contains, why it matters, and how to build systems that force you to earn it.

The Calibration Gym: Why You Need to Practice Thinking Without AI

· 11 min read

There is a skill that deteriorates quietly when AI tools become your default thinking partner.

It is not writing ability. It is not research speed. It is not even critical thinking — at least, not directly.

The skill is cognitive calibration: your internal sense of how well you understand something, how confident you should be in a conclusion, and how much effort a problem actually requires.

When you think alongside AI every day, this calibration drifts. The drift is slow. It does not announce itself. And by the time you notice, you have already lost the ability to judge your own judgment.

This essay is about what calibration drift looks like, what it costs in practice, and how to build deliberate thinking practice — a calibration gym — into an AI-augmented workflow.

The Real Job of an AI Research Assistant

· 3 min read

The obvious use of an AI research assistant is summarization. Give it a document, get the bullet points, move on. That is useful, but it is not the real job.

The real job is continuity. A good research assistant should remember what has already been decided, keep track of open threads, notice when new material conflicts with old assumptions, and help turn scattered inputs into durable output.