9 posts tagged with "Cognition"

Writing about cognitive biases, mental models, and how thinking interacts with technology.

Prompt Fragility: Why Your AI Workflows Break When Models Update

11 min read

You built a workflow that works. A prompt that produces clean, structured output. A pipeline that runs daily. A system prompt that keeps the assistant on track across hundreds of interactions.

Then the model updates. Nothing dramatic — no announcement, no changelog entry that affects you. Just a quiet weight tweak in layer 37.

Your output format shifts. The structure loosens. Edge cases that were handled cleanly start leaking through. The workflow still runs — it just produces subtly worse results, and nobody notices for two weeks.

This is prompt fragility: the hidden coupling between your workflow and a specific model's behavior at a specific point in time. It is the most under-discussed risk in AI-augmented work, and it gets worse as you build more dependencies on AI output.

This essay maps why prompt fragility exists and why it compounds as you scale, then lays out a practical resilience framework for building AI workflows that survive model changes without silent degradation.
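
The essay's full framework isn't excerpted here, but one resilience tactic of this kind is to validate model output against an explicit contract, so format drift fails loudly instead of silently. A minimal sketch in Python; the schema and field names are invented for the example, not taken from the essay:

```python
import json

# Illustrative contract: the fields a downstream pipeline expects.
# These names are assumptions for the example, not the essay's.
REQUIRED_FIELDS = {"summary": str, "severity": str, "action_items": list}

def validate_output(raw: str) -> dict:
    """Parse model output and fail loudly if the expected structure drifts."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as err:
        raise ValueError(f"model output is no longer valid JSON: {err}") from err

    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in data:
            raise ValueError(f"model output dropped expected field: {field!r}")
        if not isinstance(data[field], expected_type):
            raise ValueError(
                f"field {field!r} changed type: expected "
                f"{expected_type.__name__}, got {type(data[field]).__name__}"
            )
    return data

# A check like this turns a silent two-week degradation into an
# immediate, attributable failure on the day the model changes.
validate_output('{"summary": "ok", "severity": "low", "action_items": []}')
```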

The Autonomy Spectrum: A Practical Framework for Deciding What to Delegate to AI

15 min read

The language around AI is drifting toward a single word: agent. Every major lab is shipping "agentic" features. Every startup pitch includes autonomous workflows. The promise is seductive — describe what you want, and the machine handles the rest.

But autonomy is not a switch. It is a spectrum. And treating it as binary — either you do the work or the AI does — leads to two symmetrical mistakes: delegating too little, leaving productivity on the table, and delegating too much, ceding judgment you cannot afford to lose.

This essay builds a practical framework for navigating the autonomy spectrum. It is not a taxonomy of AI products. It is a tool for deciding what to hand off, what to supervise, and what to keep — organized around a single question: what breaks if the AI gets it wrong?
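
As a sketch of what such a decision tool could look like, here is that central question encoded as a rough rubric. The levels and thresholds are illustrative assumptions, not the essay's actual framework:

```python
from enum import Enum

class Autonomy(Enum):
    KEEP = "do it yourself"
    SUPERVISE = "AI drafts, a human reviews before anything ships"
    DELEGATE = "AI acts, a human audits samples after the fact"

def autonomy_level(reversible: bool, blast_radius: str) -> Autonomy:
    """Map 'what breaks if the AI gets it wrong?' to a delegation level.

    blast_radius: "low" (one task is wasted), "medium" (a workflow
    degrades), or "high" (customers, money, or reputation at stake).
    Thresholds are invented for illustration.
    """
    if blast_radius == "high":
        return Autonomy.KEEP        # judgment you cannot afford to lose
    if not reversible or blast_radius == "medium":
        return Autonomy.SUPERVISE   # cheap to draft, costly to ship unchecked
    return Autonomy.DELEGATE        # low stakes and easy to redo

# An irreversible mistake with real blast radius stays with the human:
print(autonomy_level(reversible=False, blast_radius="high"))  # Autonomy.KEEP
```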

Decision Debt: When AI Research Produces More Options Than You Can Evaluate

13 min read

Every new AI capability is sold as a way to make better decisions.

Better research. Better comparisons. Better summaries. Better recommendations. The pitch is consistent: give the AI more data, more context, more edge cases, and it will surface the right answer faster than you could on your own. The tools get better every quarter, and the pitch gets louder.

There is a quieter problem that almost nobody talks about. When you make research cheaper, you do not just get better answers. You get more questions. More leads. More options. More paths to evaluate. Every AI-assisted research session that used to take a day now takes twenty minutes — and generates five times as many things to think about.

The bottleneck shifts. It used to be finding enough information. Now it is deciding which of the things you found actually matter.

I call this decision debt: the accumulating backlog of options, leads, and research threads that your AI pipeline has surfaced but you have not yet evaluated. It compounds silently. It shows up as a feeling of being busy without making progress. And it is one of the most underdiagnosed failure modes in AI-augmented work.
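
The compounding is visible in back-of-the-envelope arithmetic. In the toy model below, the rates are invented for illustration: when options arrive faster than they can be evaluated, the backlog grows without bound.

```python
# Toy model of decision debt. The rates are assumptions for
# illustration, not measurements from the essay.
surfaced_per_week = 25    # cheap AI research keeps generating leads
evaluated_per_week = 10   # human evaluation capacity stays fixed

debt = 0
for week in range(1, 9):
    debt += surfaced_per_week - evaluated_per_week
    print(f"week {week}: {debt} unevaluated options")

# After 8 weeks the backlog is 120 items deep -- each one a small,
# silent claim on future attention.
```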

Compounding Knowledge: How AI Accelerates Expertise Growth — Or Destroys It

19 min read

Most people understand compound interest in the abstract. They know that money invested early grows faster than money invested late, that the curve bends upward, that the real gains come at the end. They nod at the math. But almost nobody lives as if they believe it. They save too little, start too late, and cash out too early.

The same is true of knowledge — and the stakes are higher.

Knowledge compounds. Every concept you deeply understand becomes a hook for the next concept. Every mental model you build accelerates the construction of the next one. Every hard problem you solve strengthens the infrastructure that makes the next hard problem easier. The curve looks like compound interest because it is compound interest: the returns on learning are proportional to what you have already learned.
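
That last sentence can be made precise with a standard exponential-growth model; the notation here is mine, not the essay's. If the rate of learning is proportional to accumulated knowledge, knowledge grows exactly like compound interest:

```latex
% If the rate of learning is proportional to accumulated knowledge K,
% with growth rate r and starting stock K_0:
\frac{dK}{dt} = rK
\quad\Longrightarrow\quad
K(t) = K_0 \, e^{rt}
% Identical in form to continuously compounded interest: the gains
% are back-loaded, and each gain is proportional to everything
% already learned.
```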

AI has entered this equation from two directions simultaneously — and most people are paying attention to only one of them.

The Expertise Pipeline: How AI Automation Breaks the Path from Novice to Expert (And How to Fix It)

16 min read

The most dangerous thing about AI in knowledge work is not that it produces mediocre output. It is that it may be destroying the mechanism by which people learn to produce excellent output.

Every organization that adopts AI for knowledge work celebrates the productivity gains. Drafts that took days now take minutes. Research that required hours now finishes in seconds. Junior analysts who used to spend their first year learning to compile data, write summaries, and structure arguments can now delegate those tasks to a model and move on to "higher-level work."

The gains are real. They are also, in one critical dimension, a trap.

The tasks being automated — the data compilation, the draft writing, the source checking, the structure building — are not just costs to be eliminated. They are the training ground on which expert judgment is built. Every senior analyst who now supervises AI output instead of junior output spent years doing the grunt work themselves. Every expert writer who now edits AI drafts learned to write by producing thousands of their own bad sentences. Every experienced researcher who now directs AI literature reviews learned what a good source looks like by reading thousands of mediocre ones.

AI is eliminating the entry-level tasks. The productivity gain is immediate and visible. The cost — a broken expertise pipeline — is delayed and invisible. But when it arrives, it will be catastrophic: organizations full of people who can prompt for output but cannot judge its quality, who can delegate to AI but cannot do the thing they are delegating, who know what the model said but not whether the model is right.

This essay is about the expertise pipeline problem: why it exists, why it is harder to solve than most people think, and what individuals and organizations can do to rebuild the path from novice to expert in an AI-augmented world.

Expertise Is Not What You Know — It's What You Notice

15 min read

Ask an expert and a novice to look at the same thing — a codebase, a financial statement, a set of lab results, a contract, a search ranking report — and they will describe it completely differently.

The novice sees all the same pixels. Every number is visible. Every word is legible. Nothing is hidden. And yet the novice cannot see what the expert sees, even when both are looking at the same screen.

This is not because the expert knows more facts. It is because the expert notices different patterns.

The distinction matters enormously right now, because AI is closing the fact gap at extraordinary speed. A current model can recite more facts about any domain than any human expert. It can answer more questions, cite more sources, and generate more prose. But it does not — cannot, in any straightforward sense — notice what an expert notices.

This essay is about what expertise actually does at the cognitive level, why it matters more rather than less in an AI-saturated world, and how to structure work so expertise has room to operate.