
14 posts tagged with "Writing"

Essays and reflections on writing, thought, and durable artifacts.

Structure as Signal: How Clear Writing Doubles as AI Search Optimization

16 min read

The conversation about AI search has been dominated by fear.

Fear that AI overviews will steal traffic. Fear that Perplexity and ChatGPT search will render websites obsolete. Fear that optimizing for machines means writing sterile, keyword-stuffed content that pleases algorithms and repels humans.

The fear is understandable. But it misses something important: the structural qualities that make content legible to AI search engines are the same qualities that make content valuable to human readers. You do not need to choose. You need to write clearly — and that clarity is itself the optimization.

This is not a coincidence. AI search engines — Google's AI Overviews, Perplexity, ChatGPT with browsing, and the systems that follow — are ultimately designed to surface the best answer for a human. They are trained to recognize the signals that humans recognize: explicit claims, clear evidence, logical structure, defined terms, and trustworthy sourcing. When a piece of content has these qualities, both audiences find it useful. When it lacks them, both audiences leave.
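To make "structure as signal" concrete, here is a toy scoring heuristic in Python. The checks are illustrative assumptions, not any search engine's actual algorithm; the point is that every signal it looks for (a claim up front, headings, defined terms, sources) is also something a human reader scans for.

```python
import re

def structural_signal_score(text: str) -> float:
    """Toy heuristic for the structural signals described above.
    The checks are illustrative assumptions, not a real engine's
    scoring function."""
    signals = {
        # Explicit claim: the piece opens with a declarative sentence.
        "leads_with_claim": bool(re.match(r"[A-Z][^?]*\.", text.strip())),
        # Logical structure: headings break the piece into sections.
        "has_headings": bool(re.search(r"^#{1,4} ", text, re.MULTILINE)),
        # Defined terms: the piece pins down its own vocabulary.
        "defines_terms": bool(re.search(r"\b(is defined as|refers to|means)\b", text)),
        # Trustworthy sourcing: links or bracketed citations.
        "cites_sources": bool(re.search(r"https?://|\[\d+\]", text)),
    }
    return sum(signals.values()) / len(signals)

sample = (
    "Clear writing is the optimization. Here, 'signal' refers to "
    "information a reader can act on.\n"
    "## Evidence\n"
    "See https://example.com for the underlying data."
)
print(f"structural signal score: {structural_signal_score(sample):.2f}")
```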

The Expertise Pipeline: How AI Automation Breaks the Path from Novice to Expert (And How to Fix It)

16 min read

The most dangerous thing about AI in knowledge work is not that it produces mediocre output. It is that it may be destroying the mechanism by which people learn to produce excellent output.

Every organization that adopts AI for knowledge work celebrates the productivity gains. Drafts that took days now take minutes. Research that required hours now finishes in seconds. Junior analysts who used to spend their first year learning to compile data, write summaries, and structure arguments can now delegate those tasks to a model and move on to "higher-level work."

The gains are real. They are also, in one critical dimension, a trap.

The tasks being automated — the data compilation, the draft writing, the source checking, the structure building — are not just costs to be eliminated. They are the training ground on which expert judgment is built. Every senior analyst who now supervises AI output instead of junior output spent years doing the grunt work themselves. Every expert writer who now edits AI drafts learned to write by producing thousands of their own bad sentences. Every experienced researcher who now directs AI literature reviews learned what a good source looks like by reading thousands of mediocre ones.

AI is eliminating the entry-level tasks. The productivity gain is immediate and visible. The cost — a broken expertise pipeline — is delayed and invisible. But when it arrives, it will be catastrophic: organizations full of people who can prompt for output but cannot judge its quality, who can delegate to AI but cannot do the thing they are delegating, who know what the model said but not whether the model is right.

This essay is about the expertise pipeline problem: why it exists, why it is harder to solve than most people think, and what individuals and organizations can do to rebuild the path from novice to expert in an AI-augmented world.

The Evaluation Gap: Why Most AI-Augmented Workflows Skip the Hardest Step (And How to Fix It)

14 min read

Every conversation about AI-augmented work falls into the same gravity well.

Someone describes a workflow. AI generates a draft, writes code, summarizes research, translates text, analyzes data. The conversation zooms in on the generation: Which model? What prompt? How do you structure the context? Can it handle edge cases? How fast is it?

This is the generation obsession. It is everywhere. It dominates conference talks, blog posts, product demos, and internal tooling discussions. Entire careers are being built around getting better at commanding models to produce things.

And generation is important. But it is only half the problem — and arguably the easier half.

The other half is evaluation. After the model produces something, how do you know it is good? Not "looks good." Not "passed a gut check." Actually, demonstrably, measurably good. Good enough to publish, ship, decide on, or act on.

Most AI-augmented workflows skip this step. Not deliberately — most people building these workflows do not realize they are skipping anything. They look at the output, it seems fine, they move on. The evaluation happens implicitly, through casual human judgment, and nobody notices that this is where the real work is happening — or failing to happen.
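Closing the gap starts with making the checks explicit. Here is a minimal sketch of what that might look like in Python; the three checks are placeholder stand-ins for whatever "good" means in a given workflow, not a complete rubric.

```python
from dataclasses import dataclass

@dataclass
class Check:
    name: str
    passed: bool
    detail: str

def evaluate_draft(draft: str, required_claims: list[str]) -> list[Check]:
    """Run explicit checks instead of an implicit gut check.
    These three are illustrative; swap in your own definition of good."""
    word_count = len(draft.split())
    return [
        Check("length_in_range", 200 <= word_count <= 2000,
              f"{word_count} words"),
        Check("required_claims_present",
              all(c.lower() in draft.lower() for c in required_claims),
              "every required claim appears in the text"),
        Check("no_model_boilerplate",
              "as an ai language model" not in draft.lower(),
              "no chat-assistant filler leaked into the draft"),
    ]

# Gate publication on the checks and log them, so a failure is visible
# instead of being silently absorbed by someone's gut feel.
draft = "The evaluation gap is where AI-augmented work quietly fails. " * 40
for check in evaluate_draft(draft, required_claims=["evaluation gap"]):
    print(f"[{'PASS' if check.passed else 'FAIL'}] {check.name}: {check.detail}")
```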

This essay is about the evaluation gap: why it exists, why it matters more than most people think, and how to close it with practices that make AI-augmented work trustworthy instead of just fast.

Signal Scarcity: Why AI Content Abundance Makes Human Judgment More Valuable, Not Less

21 min read

The conversation about AI and publishing has been dominated by a supply-side panic.

AI can generate articles faster than any human. It can produce passable drafts at near-zero marginal cost. It can fill blogs, populate comparison pages, and spin up entire content strategies in hours. The fear is straightforward: if content becomes cheap to produce, content producers become cheap to replace.

This fear is not wrong, but it is incomplete. It assumes that all content competes on the same axis — that the only thing readers pay for is the text itself. But readers do not pay for text. They pay for signal — for information that changes their decisions, insights they could not generate themselves, and judgment they can trust without having to verify it themselves.

AI changes the supply of text. It does not change the supply of signal. In fact, by flooding the channel with text that resembles signal but is not, AI makes genuine signal scarcer — and therefore more valuable.

This essay is about the economics of signal scarcity: why the AI content wave does not commoditize all publishing, how it stratifies content into tiers with radically different economics, and what it means to build a publishing operation that produces signal rather than just text.

The Silent Degradation Problem: Why AI-Augmented Writing Pipelines Get Worse Over Time (And How to Stop It)

17 min read

Every publisher who integrates AI into their writing pipeline goes through the same early arc.

Month one: outputs are crisp, novel, and better than anything produced before. The AI catches nuances the human writer missed. It suggests angles that would have taken days of research. It turns rough notes into clean prose in seconds. The gains feel like a step change, not an incremental improvement.

Month three: something shifts. The outputs are still grammatically correct. They are still structurally sound. But they feel… familiar. The analogies start to rhyme. The sentence rhythms converge. An article about one topic reads like an article about a different topic with the nouns swapped out.

Month six: the pipeline is producing content that is technically adequate and strategically hollow. The pieces do not make arguments so much as they arrange facts into the shape of an argument. The insights — the actual, earned, non-obvious claims that make writing worth reading — are thinning out. But nobody notices, because the grammar is still perfect and the structure is still clean and the publishing cadence is still high.

This is the silent degradation problem. The pipeline does not break. It does not produce errors you can catch in review. It just slowly stops producing anything worth reading — and the very tools that caused the problem make it harder to detect, because they produce text that looks like quality without being quality.
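Because the degradation is invisible in any single piece, detection has to happen across pieces. Here is a minimal sketch of one approach, assuming TF-IDF cosine similarity as a crude proxy for "the analogies start to rhyme" and an arbitrary 0.5 threshold you would tune against your own archive.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def convergence_alert(new_piece: str, recent_archive: list[str],
                      threshold: float = 0.5) -> tuple[float, bool]:
    """Flag a draft whose wording has drifted toward the archive.
    Lexical overlap is a crude proxy for converging rhythms and
    recycled analogies; the threshold is a tuning assumption."""
    vectors = TfidfVectorizer(stop_words="english").fit_transform(
        recent_archive + [new_piece])
    # Compare the new piece against every archived piece.
    sims = cosine_similarity(vectors[-1], vectors[:-1])[0]
    return float(sims.max()), bool(sims.max() > threshold)

archive = [
    "Pipelines degrade silently when earned insight thins out.",
    "Readers pay for signal, not for text.",
]
score, flagged = convergence_alert(
    "The pipeline degrades silently as earned insight thins out.", archive)
print(f"max similarity: {score:.2f}, flagged: {flagged}")
```

A low score does not prove novelty, but a rising trend across months is exactly the drift described above, surfaced before the archive teaches it to you the hard way.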

This essay maps the four degradation vectors, why they accelerate each other, and a maintenance framework for keeping an AI-augmented writing pipeline improving instead of decaying.

The Publisher's Dilemma: Disclosing AI Use Without Losing Reader Trust

20 min read

Every publisher who uses AI is sitting on a trust decision that gets harder the longer they postpone it.

The decision looks simple from the outside. Use AI? Disclose it. Don't disclose? You are hiding something. The moral calculus seems clear.

From the inside, it is messier. AI is not a binary — you used it or you did not. It is a spectrum: AI suggested the structure, AI drafted a section, AI rephrased three paragraphs, AI generated the research summary you built the argument on, AI checked your grammar, AI translated a source. Some of these feel like "using AI." Some feel like using a spellchecker. Where do you draw the line? And once you draw it, what do you tell readers without making them think the entire piece is machine-generated?
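One way to stop agonizing over the line is to write it down. Here is a sketch of what an explicit policy might look like in Python; the stage names and the disclosure threshold are invented for illustration, not a standard.

```python
from enum import Enum

class AIUse(Enum):
    """Illustrative taxonomy of where AI touched a piece. The stages
    and the line drawn below are one publisher's policy choice,
    not an industry standard."""
    GRAMMAR_CHECK = "grammar or spellcheck"
    TRANSLATION = "translated a source"
    STRUCTURE_SUGGESTION = "suggested the outline"
    RESEARCH_SUMMARY = "summarized research the argument relies on"
    SECTION_DRAFT = "drafted one or more sections"
    FULL_DRAFT = "drafted the full piece"

# A policy is just the subset of stages a publisher decides to disclose.
DISCLOSE = {AIUse.RESEARCH_SUMMARY, AIUse.SECTION_DRAFT, AIUse.FULL_DRAFT}

def disclosure_note(uses: set[AIUse]) -> str:
    """Render a reader-facing note for the stages that cross the line."""
    disclosed = sorted(u.value for u in uses & DISCLOSE)
    if not disclosed:
        return ""
    return "AI assistance: " + "; ".join(disclosed) + "."

print(disclosure_note({AIUse.GRAMMAR_CHECK, AIUse.SECTION_DRAFT}))
# -> AI assistance: drafted one or more sections.
```

Whatever subset you choose, writing it down turns a recurring judgment call into a policy you can apply consistently and defend later.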

This is the publisher's dilemma. Disclose too much, and readers assume the work lacks human judgment. Disclose too little, and you build a house on sand — one exposure, one accusation, and the trust collapses.

This essay is not an argument for or against AI disclosure. It is a map of the decision landscape: what readers actually care about, the different disclosure models available, the trust accounting behind each, and a practical framework for publishers who want to be honest without undermining their own work.