16 posts tagged with "Publishing"

Notes on turning private work into public writing.

Content Decay in Comparison Publishing: Why Your Best Articles Quietly Stop Performing

· 10 min read

You published a strong comparison article. It ranked. It earned traffic. It converted readers into clicks, sign-ups, or affiliate actions.

Six months later, the pageview chart looks fine. But something is wrong.

Fewer conversions per visit. More bounces from search. Reader emails asking questions your article already answers — except the answer is now outdated.

This is content decay, and in comparison publishing it moves faster and costs more than in almost any other content vertical.

This essay maps why comparison content decays, the six vectors that drive it, why standard analytics hide the damage, and a practical quarterly audit framework to catch it before revenue erodes.

How to Write Comparison Content That AI Search Can't Replace

· 12 min read

AI search is eating comparison content.

Ask any modern search engine — Perplexity, Google AI Overviews, ChatGPT with browsing — "which X is best?" and you get a synthesized answer. It pulls features, prices, ratings, and pros/cons from across the web, combines them into a tidy paragraph or table, and presents the result as conclusive. No clicking through. No reading your article. Your comparison page becomes a data source, not a destination.

Most comparison content deserves this fate. The average "X vs Y" article follows a formula: grab product descriptions from official sites, list features in a table, add a verdict that hedges every conclusion, and slap an affiliate link at the bottom. There is no first-hand experience. No original testing. No evidence that the author has actually used both products under real conditions. The content is aggregatable because it is itself an aggregation.

If you publish comparison content, this essay will help you understand why most of it is replaceable and how to make yours the kind of content that AI search summarizes but cannot replace — because the value lives in evidence, methodology, and judgment that no summary preserves.

Structure as Signal: How Clear Writing Doubles as AI Search Optimization

· 16 min read

The conversation about AI search has been dominated by fear.

Fear that AI overviews will steal traffic. Fear that Perplexity and ChatGPT search will render websites obsolete. Fear that optimizing for machines means writing sterile, keyword-stuffed content that pleases algorithms and repels humans.

The fear is understandable. But it misses something important: the structural qualities that make content legible to AI search engines are the same qualities that make content valuable to human readers. You do not need to choose. You need to write clearly — and that clarity is itself the optimization.

This is not a coincidence. AI search engines — Google's AI Overviews, Perplexity, ChatGPT with browsing, and the systems that follow — are ultimately designed to surface the best answer for a human. They are trained to recognize the signals that humans recognize: explicit claims, clear evidence, logical structure, defined terms, and trustworthy sourcing. When a piece of content has these qualities, both audiences find it useful. When it lacks them, both audiences leave.

The Evaluation Gap: Why Most AI-Augmented Workflows Skip the Hardest Step (And How to Fix It)

· 14 min read

Every conversation about AI-augmented work follows the same gravity well.

Someone describes a workflow. AI generates a draft, writes code, summarizes research, translates text, analyzes data. The conversation zooms in on the generation: Which model? What prompt? How do you structure the context? Can it handle edge cases? How fast is it?

This is the generation obsession. It is everywhere. It dominates conference talks, blog posts, product demos, and internal tooling discussions. Entire careers are being built around getting better at commanding models to produce things.

And generation is important. But it is only half the problem — and arguably the easier half.

The other half is evaluation. After the model produces something, how do you know it is good? Not "looks good." Not "passed a gut check." Actually, demonstrably, measurably good. Good enough to publish, ship, decide on, or act on.

Most AI-augmented workflows skip this step. Not deliberately — most people building these workflows do not realize they are skipping anything. They look at the output, it seems fine, they move on. The evaluation happens implicitly, through casual human judgment, and nobody notices that this is where the real work is happening — or failing to happen.

This essay is about the evaluation gap: why it exists, why it matters more than most people think, and how to close it with practices that make AI-augmented work trustworthy instead of just fast.

Signal Scarcity: Why AI Content Abundance Makes Human Judgment More Valuable, Not Less

· 21 min read

The conversation about AI and publishing has been dominated by a supply-side panic.

AI can generate articles faster than any human. It can produce passable drafts at near-zero marginal cost. It can fill blogs, populate comparison pages, and spin up entire content strategies in hours. The fear is straightforward: if content becomes cheap to produce, content producers become cheap to replace.

This fear is not wrong, but it is incomplete. It assumes that all content competes on the same axis, as if the only thing readers pay for is the text itself. But readers do not pay for text. They pay for signal: information that changes their decisions, insights they could not generate themselves, and judgment they can trust because it can be independently verified.

AI changes the supply of text. It does not change the supply of signal. In fact, by flooding the channel with text that resembles signal but is not, AI makes genuine signal scarcer — and therefore more valuable.

This essay is about the economics of signal scarcity: why the AI content wave does not commoditize all publishing, how it stratifies content into tiers with radically different economics, and what it means to build a publishing operation that produces signal rather than just text.

Tool Independence: How to Build Knowledge Systems That Outlast Any AI Platform

· 16 min read

Every few months, the AI platform landscape shifts.

A new model launch resets the performance ceiling. A pricing change breaks your cost model. A product pivot deprecates the feature your workflow depends on. A startup you integrated deeply into your stack runs out of funding and goes quiet. An incumbent adds a capability that makes your specialized tool redundant overnight.

This is not a temporary phase. It is the permanent condition of building knowledge work on top of AI infrastructure that is still being invented in real time. The platforms are fluid. The APIs change. The tools that feel indispensable today will feel dated in eighteen months — and the ones that will replace them have not been built yet.

Most of the conversation about this volatility focuses on picking winners. Which model will dominate? Which platform has the best roadmap? Which startup has the strongest team? The implicit assumption is that if you pick well enough, you can hitch your workflow to the right horse and ride it into the future.

This assumption is wrong. Not because platform picking is impossible — but because it frames the problem incorrectly. The question is not which platform will win. The question is how to build knowledge infrastructure that does not care which platform wins.

This essay is about the architecture of tool-independent knowledge systems: what they look like, why they are difficult to build, and why they are the most underrated advantage in AI-augmented work.