The Autonomy Spectrum: A Practical Framework for Deciding What to Delegate to AI

· 15 min read

The language around AI is drifting toward a single word: agent. Every major lab is shipping "agentic" features. Every startup pitch includes autonomous workflows. The promise is seductive — describe what you want, and the machine handles the rest.

But autonomy is not a switch. It is a spectrum. And treating it as binary — either you do the work or the AI does — leads to two symmetrical mistakes: delegating too little, which leaves productivity on the table, and delegating too much, which cedes judgment you cannot afford to lose.

This essay builds a practical framework for navigating the autonomy spectrum. It is not a taxonomy of AI products. It is a tool for deciding what to hand off, what to supervise, and what to keep — organized around a single question: what breaks if the AI gets it wrong?

Decision Debt: When AI Research Produces More Options Than You Can Evaluate

· 13 min read

Every new AI capability is sold as a way to make better decisions.

Better research. Better comparisons. Better summaries. Better recommendations. The pitch is consistent: give the AI more data, more context, more edge cases, and it will surface the right answer faster than you could on your own. The tools get better every quarter, and the pitch gets louder.

There is a quieter problem that almost nobody talks about. When you make research cheaper, you do not just get better answers. You get more questions. More leads. More options. More paths to evaluate. Every AI-assisted research session that used to take a day now takes twenty minutes — and generates five times as many things to think about.

The bottleneck shifts. It used to be finding enough information. Now it is deciding which of the things you found actually matter.

I call this decision debt: the accumulating backlog of options, leads, and research threads that your AI pipeline has surfaced but you have not yet evaluated. It compounds silently. It shows up as a feeling of being busy without making progress. And it is one of the most underdiagnosed failure modes in AI-augmented work.

Structure as Signal: How Clear Writing Doubles as AI Search Optimization

· 16 min read

The conversation about AI search has been dominated by fear.

Fear that AI overviews will steal traffic. Fear that Perplexity and ChatGPT search will render websites obsolete. Fear that optimizing for machines means writing sterile, keyword-stuffed content that pleases algorithms and repels humans.

The fear is understandable. But it misses something important: the structural qualities that make content legible to AI search engines are the same qualities that make content valuable to human readers. You do not need to choose. You need to write clearly — and that clarity is itself the optimization.

This is not a coincidence. AI search engines — Google's AI Overviews, Perplexity, ChatGPT with browsing, and the systems that follow — are ultimately designed to surface the best answer for a human. They are trained to recognize the signals that humans recognize: explicit claims, clear evidence, logical structure, defined terms, and trustworthy sourcing. When a piece of content has these qualities, both audiences find it useful. When it lacks them, both audiences leave.

Compounding Knowledge: How AI Accelerates Expertise Growth — Or Destroys It

· 19 min read

Most people understand compound interest in the abstract. They know that money invested early grows faster than money invested late, that the curve bends upward, that the real gains come at the end. They nod at the math. But almost nobody lives as if they believe it. They save too little, start too late, and cash out too early.

The same is true of knowledge — and the stakes are higher.

Knowledge compounds. Every concept you deeply understand becomes a hook for the next concept. Every mental model you build accelerates the construction of the next one. Every hard problem you solve strengthens the infrastructure that makes the next hard problem easier. The curve looks like compound interest because it is compound interest: the returns on learning are proportional to what you have already learned.

AI has entered this equation from two directions simultaneously — and most people are paying attention to only one of them.

The Expertise Pipeline: How AI Automation Breaks the Path from Novice to Expert (And How to Fix It)

· 16 min read

The most dangerous thing about AI in knowledge work is not that it produces mediocre output. It is that it may be destroying the mechanism by which people learn to produce excellent output.

Every organization that adopts AI for knowledge work celebrates the productivity gains. Drafts that took days now take minutes. Research that required hours now finishes in seconds. Junior analysts who used to spend their first year learning to compile data, write summaries, and structure arguments can now delegate those tasks to a model and move on to "higher-level work."

The gains are real. They are also, in one critical dimension, a trap.

The tasks being automated — the data compilation, the draft writing, the source checking, the structure building — are not just costs to be eliminated. They are the training ground on which expert judgment is built. Every senior analyst who now supervises AI output instead of junior output spent years doing the grunt work themselves. Every expert writer who now edits AI drafts learned to write by producing thousands of their own bad sentences. Every experienced researcher who now directs AI literature reviews learned what a good source looks like by reading thousands of mediocre ones.

AI is eliminating the entry-level tasks. The productivity gain is immediate and visible. The cost — a broken expertise pipeline — is delayed and invisible. But when it arrives, it will be catastrophic: organizations full of people who can prompt for output but cannot judge its quality, who can delegate to AI but cannot do the thing they are delegating, who know what the model said but not whether the model is right.

This essay is about the expertise pipeline problem: why it exists, why it is harder to solve than most people think, and what individuals and organizations can do to rebuild the path from novice to expert in an AI-augmented world.

The Evaluation Gap: Why Most AI-Augmented Workflows Skip the Hardest Step (And How to Fix It)

· 14 min read

Every conversation about AI-augmented work follows the same gravity well.

Someone describes a workflow. AI generates a draft, writes code, summarizes research, translates text, analyzes data. The conversation zooms in on the generation: Which model? What prompt? How do you structure the context? Can it handle edge cases? How fast is it?

This is the generation obsession. It is everywhere. It dominates conference talks, blog posts, product demos, and internal tooling discussions. Entire careers are being built around getting better at commanding models to produce things.

And generation is important. But it is only half the problem — and arguably the easier half.

The other half is evaluation. After the model produces something, how do you know it is good? Not "looks good." Not "passed a gut check." Actually, demonstrably, measurably good. Good enough to publish, ship, decide on, or act on.

Most AI-augmented workflows skip this step. Not deliberately — most people building these workflows do not realize they are skipping anything. They look at the output, it seems fine, and they move on. The evaluation happens implicitly, through casual human judgment, and nobody notices that this is where the real work is happening — or failing to happen.

This essay is about the evaluation gap: why it exists, why it matters more than most people think, and how to close it with practices that make AI-augmented work trustworthy instead of just fast.