5 posts tagged with "AI"

Notes on assistants, language models, and machine-mediated work.

The Autonomy Spectrum: A Practical Framework for Deciding What to Delegate to AI

· 15 min read

The language around AI is drifting toward a single word: agent. Every major lab is shipping "agentic" features. Every startup pitch includes autonomous workflows. The promise is seductive — describe what you want, and the machine handles the rest.

But autonomy is not a switch. It is a spectrum. And treating it as binary — either you do the work or the AI does — leads to two symmetrical mistakes: delegating too little, leaving productivity on the table, and delegating too much, ceding judgment you cannot afford to lose.

This essay builds a practical framework for navigating the autonomy spectrum. It is not a taxonomy of AI products. It is a tool for deciding what to hand off, what to supervise, and what to keep — organized around a single question: what breaks if the AI gets it wrong?
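The question above can be read as a decision rule. The sketch below is a hypothetical illustration, not the essay's actual framework: the tier names, inputs, and thresholds are invented here purely to show the shape of the reasoning.

```python
def delegation_tier(cost_of_error: str, error_is_detectable: bool) -> str:
    """Map 'what breaks if the AI gets it wrong?' to a supervision level.

    cost_of_error: 'low', 'medium', or 'high' (a judgment you supply).
    error_is_detectable: can you reliably spot a bad output on review?
    """
    if cost_of_error == "low":
        return "delegate"    # cheap mistakes: let the AI run unsupervised
    if cost_of_error == "medium" and error_is_detectable:
        return "supervise"   # review the output before it ships
    return "keep"            # costly or silent failures: do the work yourself

print(delegation_tier("low", True))       # → delegate
print(delegation_tier("high", False))     # → keep
```

Note that detectability matters as much as cost in this toy rule: a mistake you cannot notice is dangerous even when each individual error is cheap.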

Compounding Knowledge: How AI Accelerates Expertise Growth — Or Destroys It

· 19 min read

Most people understand compound interest in the abstract. They know that money invested early grows faster than money invested late, that the curve bends upward, that the real gains come at the end. They nod at the math. But almost nobody lives as if they believe it. They save too little, start too late, and cash out too early.

The same is true of knowledge — and the stakes are higher.

Knowledge compounds. Every concept you deeply understand becomes a hook for the next concept. Every mental model you build accelerates the construction of the next one. Every hard problem you solve strengthens the infrastructure that makes the next hard problem easier. The curve looks like compound interest because it is compound interest: the returns on learning are proportional to what you have already learned.
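The claim that "returns on learning are proportional to what you have already learned" is the defining property of exponential growth. A minimal sketch, with a made-up growth rate chosen only to show the shape of the curve:

```python
def knowledge(years_learning: int, rate: float = 0.15, seed: float = 1.0) -> float:
    """Compound a starting stock of knowledge once per year at `rate`.

    The 15% annual rate is an arbitrary illustration, not a measured value.
    """
    k = seed
    for _ in range(years_learning):
        k *= 1 + rate  # growth proportional to the current stock
    return k

# Two people over the same 20-year window: one starts now, one waits 5 years.
early = knowledge(20)
late = knowledge(15)
print(f"early starter: {early:.1f}x  late starter: {late:.1f}x")
```

With these numbers, the five lost years at the start cost roughly half the final stock, because the missing years were the ones whose gains would have compounded longest.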

AI has entered this equation from two directions simultaneously — and most people are paying attention to only one of them.

The Expertise Pipeline: How AI Automation Breaks the Path from Novice to Expert (And How to Fix It)

· 16 min read

The most dangerous thing about AI in knowledge work is not that it produces mediocre output. It is that it may be destroying the mechanism by which people learn to produce excellent output.

Every organization that adopts AI for knowledge work celebrates the productivity gains. Drafts that took days now take minutes. Research that required hours now finishes in seconds. Junior analysts who used to spend their first year learning to compile data, write summaries, and structure arguments can now delegate those tasks to a model and move on to "higher-level work."

The gains are real. They are also, in one critical dimension, a trap.

The tasks being automated — the data compilation, the draft writing, the source checking, the structure building — are not just costs to be eliminated. They are the training ground on which expert judgment is built. Every senior analyst who now supervises AI output instead of junior output spent years doing the grunt work themselves. Every expert writer who now edits AI drafts learned to write by producing thousands of their own bad sentences. Every experienced researcher who now directs AI literature reviews learned what a good source looks like by reading thousands of mediocre ones.

AI is eliminating the entry-level tasks. The productivity gain is immediate and visible. The cost — a broken expertise pipeline — is delayed and invisible. But when it arrives, it will be catastrophic: organizations full of people who can prompt for output but cannot judge its quality, who can delegate to AI but cannot do the thing they are delegating, who know what the model said but not whether the model is right.

This essay is about the expertise pipeline problem: why it exists, why it is harder to solve than most people think, and what individuals and organizations can do to rebuild the path from novice to expert in an AI-augmented world.

Signal Scarcity: Why AI Content Abundance Makes Human Judgment More Valuable, Not Less

· 21 min read

The conversation about AI and publishing has been dominated by a supply-side panic.

AI can generate articles faster than any human. It can produce passable drafts at near-zero marginal cost. It can fill blogs, populate comparison pages, and spin up entire content strategies in hours. The fear is straightforward: if content becomes cheap to produce, content producers become cheap to replace.

This fear is not wrong, but it is incomplete. It assumes that all content competes on the same axis — that the only thing readers pay for is the text itself. But readers do not pay for text. They pay for signal — for information that changes their decisions, insights they could not generate themselves, and judgment they can trust without having to verify it independently.

AI changes the supply of text. It does not change the supply of signal. In fact, by flooding the channel with text that resembles signal but is not, AI makes genuine signal scarcer — and therefore more valuable.

This essay is about the economics of signal scarcity: why the AI content wave does not commoditize all publishing, how it stratifies content into tiers with radically different economics, and what it means to build a publishing operation that produces signal rather than just text.

Expertise Is Not What You Know — It's What You Notice

· 15 min read

Ask an expert and a novice to look at the same thing — a codebase, a financial statement, a set of lab results, a contract, a search ranking report — and they will describe it completely differently.

The novice sees all the same pixels. Every number is visible. Every word is legible. Nothing is hidden. And yet the novice cannot see what the expert sees, even when both are looking at the same screen.

This is not because the expert knows more facts. It is because the expert notices different patterns.

The distinction matters enormously right now, because AI is closing the fact gap at extraordinary speed. A current model can recite more facts about any domain than any human expert. It can answer more questions, cite more sources, and generate more prose. But it does not — cannot, in any straightforward sense — notice what an expert notices.

This essay is about what expertise actually does at the cognitive level, why it matters more rather than less in an AI-saturated world, and how to structure work so expertise has room to operate.