Signal Scarcity: Why AI Content Abundance Makes Human Judgment More Valuable, Not Less
The conversation about AI and publishing has been dominated by a supply-side panic.
AI can generate articles faster than any human. It can produce passable drafts at near-zero marginal cost. It can fill blogs, populate comparison pages, and spin up entire content strategies in hours. The fear is straightforward: if content becomes cheap to produce, content producers become cheap to replace.
This fear is not wrong, but it is incomplete. It assumes that all content competes on the same axis — that the only thing readers pay for is the text itself. But readers do not pay for text. They pay for signal: information that changes their decisions, insights they could not generate themselves, and judgment they can trust because someone did the work to verify it.
AI changes the supply of text. It does not change the supply of signal. In fact, by flooding the channel with text that resembles signal but is not, AI makes genuine signal scarcer — and therefore more valuable.
This essay is about the economics of signal scarcity: why the AI content wave does not commoditize all publishing, how it stratifies content into tiers with radically different economics, and what it means to build a publishing operation that produces signal rather than just text.
The economic inversion nobody is talking about
When you dramatically increase the supply of something, its price falls. This is Economics 101. Apply it to content, and the conclusion seems obvious: AI will flood the market with text, text will become worthless, and writers will be out of work.
But this analysis treats all content as a single commodity. It isn't. Content exists on a spectrum from pure commodity to pure signal, and AI affects different points on that spectrum differently.
Commodity content is fungible. One article about "how to reset your router" is roughly equivalent to another. The reader needs the procedure, not the prose. AI excels at this — it can generate competent commodity content faster and cheaper than any human. The economics here are brutal. If you compete on commodity content, you are competing against a machine that can produce unlimited variations at zero marginal cost. You will lose.
Signal content is the opposite of fungible. It contains non-obvious insights, verified claims, contextual judgment, and original analysis. A reader cannot substitute one signal-rich piece for another because the signal is the content — there is no generic version. AI can generate text that looks like signal — confident assertions, structured arguments, plausible-sounding analysis — but the signal is not in the words. It is in the verification chain behind them.
The inversion is this: AI does not devalue all content. It devalues commodity content and, by doing so, makes signal content more valuable. Here's why.
When commodity content was expensive to produce, there was a floor on how bad content could be. If writing a decent article took four hours, nobody would write one unless they expected at least four hours of value from it. This floor constrained supply.
AI removes that floor. Anyone can produce commodity content at near-zero cost. The result is a flood of text that is competent enough to rank, credible enough to read, and hollow enough to be worthless to anyone who actually needs to make a decision.
This flood does not make signal content less valuable. It makes it harder to find — which, in any market, makes it more valuable when you do find it. The publisher who can consistently deliver signal in a sea of commodity text does not face more competition. They face less, because readers who have been burned by commodity content become desperate for signal and willing to invest more — time, attention, trust, loyalty — when they find it.
What signal actually is (and why AI cannot generate it)
To understand why this matters, you need a clear definition of signal. Most discussions of content quality blur together things that are fundamentally different.
Signal is not style
An article can be beautifully written and contain no signal. AI can produce elegant prose, varied sentence structures, and engaging openings. Style is increasingly easy to generate. Signal is not style.
Signal is not structure
An article can be well-organized — clear sections, logical flow, helpful subheadings — and still be empty. Structure makes content easier to consume. It does not make it true, original, or useful. AI is good at structure. Structure is not signal.
Signal is not comprehensiveness
An article can cover every angle of a topic and still say nothing worth remembering. Length and coverage are inputs to usefulness, not guarantees of it. AI can generate comprehensive overviews of nearly any topic. Comprehensiveness is not signal.
Signal is verified claim density
At its core, signal is the density of claims that are (a) non-obvious, (b) independently verifiable, and (c) decision-relevant. A piece of content with high signal density changes what the reader believes, what they will do, or how they will evaluate a decision.
Consider two articles about the same topic — say, evaluating GPT offer platforms for a publishing business.
Article A describes the landscape, lists common features, explains terminology, and provides a balanced but generic comparison. It is accurate, well-written, and covers the topic thoroughly. A novice reader learns something. But an experienced publisher who has spent six months in the space would find nothing new.
Article B reports specific payout reconciliation data across five platforms over three months. It identifies a pattern where Platform X consistently reports 12% fewer attributed conversions than its peers after controlling for traffic quality. It documents the methodology, shares the normalized data, and explains how to audit for this pattern in other platforms.
Article A is commodity content. It could be generated by an AI with access to the web and a good prompt. Article B is signal content. It contains a claim — platform attribution discrepancy — that is non-obvious, verifiable (the methodology is documented, the data is shared), and decision-relevant (a publisher who switches platforms based on this pattern saves real money).
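The audit Article B performs can be sketched in a few lines. Everything here is hypothetical: the platform names, conversion counts, session figures, and the 10% flag threshold are illustrative assumptions, not real measurements.

```python
from statistics import median

# Hypothetical monthly data per platform:
# (attributed conversions, quality-adjusted sessions). Illustrative only.
reports = {
    "Platform A": (412, 10_000),
    "Platform B": (405, 10_000),
    "Platform X": (358, 10_000),
    "Platform C": (398, 10_000),
    "Platform D": (420, 10_000),
}

# Normalize to conversions per 1,000 quality-adjusted sessions so platforms
# with different traffic volumes are comparable.
rates = {p: conv / sessions * 1000 for p, (conv, sessions) in reports.items()}

peer_median = median(rates.values())

# Flag any platform that under-reports relative to peers by more than 10%.
for platform, rate in sorted(rates.items()):
    gap = (rate - peer_median) / peer_median
    flag = "  <-- audit" if gap < -0.10 else ""
    print(f"{platform}: {rate:.1f}/1k sessions ({gap:+.1%} vs median){flag}")
```

With these made-up numbers, Platform X lands roughly 12% below the peer median and gets flagged, which is the shape of the pattern Article B documents. The method generalizes: normalize, compare against a peer baseline, and investigate persistent outliers.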
The difference between the two articles is not writing quality. It is not research effort. It is the presence of verified claims that would not exist without someone doing the work of verification.
AI can generate text that resembles verified claims. It can produce confident numbers, cite plausible sources, and construct arguments that feel evidence-based. But unless the verification actually happened — unless someone ran the test, collected the data, checked the sources, and confirmed the pattern — the signal is absent. The text is there. The value is not.
This is why publishers who actually verify things have an advantage that compounds over time. Every verified claim they publish is a signal that AI-generated content cannot replicate — not because AI is not good enough at writing, but because the signal is the verification, not the prose.
The trust gradient: why signal compounds
Signal content has an economic property that commodity content does not: it compounds.
A publisher who consistently produces signal builds a stock of trust. Each verified claim adds to the stock. Each article that proves useful to a reader increases the probability that the reader will return, will cite, will link, and will treat future claims from that publisher as credible by default.
Commodity content does not compound this way. Each commodity article is independent. It may attract a one-time visitor from search. It may answer a single query. But it does not build a relationship. The reader who finds a commodity article today has no reason to seek out the same publisher tomorrow, because commodity content is interchangeable by definition.
This compounding effect matters enormously for the economics of publishing in an AI-abundant world. Consider two publishing strategies:
Strategy A: Volume. Publish 100 AI-assisted articles per month. Each article covers a long-tail keyword, provides competent commodity content, and earns modest search traffic. After two years, you have 2,400 articles and a steady stream of visitors. But you have no audience. Each visitor arrives, consumes, and leaves. The content is good enough to rank but not good enough to remember.
Strategy B: Signal. Publish four articles per month. Each one contains verified claims, original analysis, and decision-relevant signal. After two years, you have 96 articles — but you also have a growing audience of readers who trust your work, cite it, link to it, and return to it. Each new article benefits from the trust stock built by the previous 95.
In the pre-AI era, Strategy A was constrained by production costs. Writing 100 articles per month required a large team or an unsustainable pace. AI removes that constraint, which means more publishers will pursue Strategy A — flooding the commodity tier with more supply than the market can absorb.
The publisher who pursues Strategy B does not compete in that flood. They compete in a different market entirely — the market for trust, which AI cannot enter.
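The compounding difference between the two strategies can be made concrete with a toy model of Strategy B's trust stock. Every parameter below (visits per article, return rate, retention) is an assumption chosen for illustration, not a measured industry figure.

```python
# Toy model of the trust stock behind Strategy B: what share of monthly
# traffic comes from returning readers rather than search placement.
# All parameters are illustrative assumptions, not measured figures.

def signal_traffic(months=24, articles_per_month=4, search_visits_per_article=100,
                   return_rate=0.05, retention=0.92):
    """Each new article earns one-time search visits; a small fraction of its
    readers become returning readers, and those returns persist month to month."""
    returning = 0.0
    history = []
    for _ in range(months):
        search = articles_per_month * search_visits_per_article
        returning = returning * retention + search * return_rate
        history.append((search, returning))
    return history

search, returning = signal_traffic()[-1]
share = returning / (search + returning)
print(f"Month 24: {search} search visits, {returning:.0f} returning visits "
      f"({share:.0%} of traffic reader-owned)")
```

Under these assumed parameters, returning readers grow to roughly a third of monthly traffic by month 24, and that share belongs to the publisher rather than to an algorithm. A volume strategy has no equivalent term in the model: each visit must be re-earned from search. The point is the shape of the curve, not the specific numbers.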
The reading-side shift: how AI changes what readers value
The supply-side analysis only tells half the story. The demand side — how readers behave — is shifting in ways that amplify the signal scarcity dynamic.
Readers are developing AI literacy
Six months ago, most readers could not reliably distinguish AI-generated text from human-written text. Today, a growing share of readers can — not because AI writing has gotten worse (it has gotten better), but because readers have been exposed to enough AI-generated content that they recognize its patterns.
The tells are not grammatical errors or factual mistakes. They are subtler: the lack of a genuine voice, the absence of earned conviction, the feeling that an article is arranging facts rather than making an argument. Readers may not be able to articulate why a piece feels hollow, but they feel it. And the more AI content they encounter, the more sensitive they become to the difference.
This does not mean readers will reject all AI-assisted content. It means they will become more discriminating about which content they invest real attention in. The bar for earning reader trust rises as the background noise level rises.
Search behavior is fragmenting
Traditional search — type a query, click a blue link, read an article — is being displaced by AI-generated answers embedded directly in search results. When Google, Bing, or Perplexity summarizes the answer to a query without the user ever visiting a publisher's site, the commodity content business model breaks.
But this fragmentation affects different content tiers differently. Commodity content — the kind that answers a single, simple query — is most at risk. If AI can summarize the answer, the publisher loses the visitor. Signal content — the kind that contains non-obvious, decision-relevant claims — is harder to summarize. An AI overview cannot capture the verification methodology, the edge cases, the context that makes the signal meaningful.
The more AI search abstracts away commodity answers, the more valuable it becomes to produce content that cannot be abstracted — content where the value is in the specifics, not the summary.
Attention is becoming the binding constraint
In a world of infinite content, the scarce resource is not information. It is attention, and specifically trusted attention — the kind of attention a reader gives to a source they believe will not waste their time.
Publishers who have built a stock of trust can command this kind of attention. Publishers who have flooded the commodity tier cannot. The difference in unit economics is enormous. A commodity publisher must earn every visit through search rankings, which are increasingly zero-sum. A signal publisher earns visits through search and direct returns and word of mouth and citations and social sharing — channels that are not zero-sum because they are driven by reader choice, not algorithmic placement.
What signal-first publishing looks like in practice
Understanding the economics is useful. But what does signal-first publishing actually look like as a set of practices? Here is a framework.
1. Write from verification, not from prompts
The fundamental unit of signal-first publishing is not the article. It is the verified claim. Before you write anything, ask: what specific, non-obvious, decision-relevant insight does this piece contain that a reader could not get from an AI summary of the top ten search results?
If the answer is "nothing specific," the piece is commodity content. That does not mean you should never publish it — commodity content has its place in a content strategy. But you should publish it knowing what it is, and not confuse it with signal.
If the answer is "this specific pattern I observed, this test I ran, this data I collected, this source I verified," you have the seed of signal. Build the article around that seed. Everything else — the context, the explanation, the background — serves to make the signal legible. The signal is the reason the article exists.
2. Default to showing your work
One of the most reliable ways to distinguish signal from commodity content is whether the work is visible. An article that makes claims without showing the evidence behind them is indistinguishable from AI-generated text — because AI can also make claims without evidence.
Showing your work means different things in different domains:
- If you are analyzing data, share the dataset or at minimum the methodology.
- If you are citing a source, link to the specific page, not the homepage.
- If you are making a claim based on experience, describe the experience concretely — what you did, what you observed, what surprised you.
- If you are arguing for a framework, show it applied to a real case, not just described in the abstract.
AI can mimic all of these forms. What it cannot do is actually do the work. When you show your work, you are not just adding credibility — you are creating content that is structurally impossible for commodity AI pipelines to replicate.
3. Prioritize durability over velocity
Commodity content has a half-life. The "best VPNs of 2026" article will be stale by 2027. The "how to use ChatGPT for SEO" guide will be obsolete when the next model launches. Commodity content requires constant refreshing, which creates the maintenance asymmetry described in earlier essays on this site — more publishing creates more maintenance debt.
Signal content decays more slowly. A verified claim about a structural pattern — say, the attribution discrepancy pattern across GPT platforms — remains relevant as long as the underlying structure exists. It may need updating when the specifics change, but the insight itself is durable.
This has direct economic implications. If you publish 100 commodity articles per year, you must maintain 100 articles per year. If you publish 12 signal-rich articles per year, you maintain 12. The maintenance burden per unit of reader value is dramatically lower in a signal-first strategy.
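The decay difference is just exponential arithmetic. Assuming, purely for illustration, a one-year half-life for commodity content and a five-year half-life for signal content:

```python
# Illustrative half-life arithmetic for the maintenance asymmetry above.
# The half-lives are assumptions, not measured content-decay figures.

def value_remaining(years, half_life):
    """Fraction of an article's original reader value left after `years`."""
    return 0.5 ** (years / half_life)

for label, half_life in [("commodity (1-yr half-life)", 1.0),
                         ("signal (5-yr half-life)", 5.0)]:
    print(f"{label}: {value_remaining(2, half_life):.0%} of value left after 2 years")
```

After two years, the commodity article retains a quarter of its original value while the signal article retains about three quarters, which is why the maintenance burden per unit of reader value diverges so sharply between the two strategies.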
4. Build topic depth, not topic breadth
Signal compounds within topics. Ten articles about the same domain, each containing verified claims, create a web of interconnected signal that is worth more than the sum of its parts. Readers who find one piece are drawn into the others. Search engines see topical coherence. Other publishers cite the body of work rather than individual pieces.
Commodity content strategies tend toward breadth — cover every keyword, fill every gap, be the Wikipedia of your niche. Signal strategies tend toward depth — go narrow enough that your verification effort is feasible, and go deep enough that your claims are genuinely non-obvious.
This is not a compromise. It is an optimization. The publisher who goes deep on one domain builds a trust stock that generalist publishers cannot match, because generalist publishers spread their verification effort too thin to produce signal at competitive density.
5. Treat editing as the primary value-add, not drafting
In a commodity content workflow, drafting is the bottleneck. You need text, and AI can produce text, so AI does the drafting and humans do the editing. This makes sense for commodity content.
In a signal-first workflow, the bottleneck is not drafting. It is verification. The hard part is not writing the article — it is doing the work that makes the article worth writing. This work happens before drafting begins: running the test, collecting the data, checking the sources, forming the judgment.
In this workflow, AI's role shifts. It does not replace the verification work — it cannot. Instead, it assists with the parts that surround the verification: structuring the argument, polishing the prose, suggesting counterarguments, identifying gaps in the evidence chain. The human does the signal work. The AI does the text work. The result is an article where the signal-to-text ratio is high because the human invested effort in the dimension AI cannot reach.
The objection: "But I don't have unique data or original research"
This is the most common pushback against signal-first publishing: not everyone has access to proprietary data, lab equipment, or an audience to survey. If signal requires original verification, doesn't that exclude most publishers?
The objection misunderstands what counts as verification. You do not need a dataset. You need to do something that AI — and most other publishers — do not do.
Here are forms of verification that require no special access:
- Synthesis across sources that don't cite each other. Find three credible sources that address a topic from different angles, identify the pattern they converge on, and articulate it. The verification is the cross-referencing — not a single source, but the pattern that emerges across them.
- Testing claims against personal experience. A marketing blog claims that Strategy X improves conversion rates by 30%. You try Strategy X on your own site for six weeks and report what happened. Your result is a single data point, not a study, but it is verified experience — and it is more signal than a thousand articles repeating the claim.
- Interviewing practitioners. Most AI-generated content is built from public web sources. If you spend an hour on the phone with someone who does the work, you will surface insights that have never been written down. The verification is not in the methodology — it is in the access.
- Auditing claims made by others. An industry report claims that "70% of companies are adopting AI for content." Where does that number come from? What was the sample? Who funded the survey? Audit the claim. Publish what you find. The verification is the audit itself.
- Historical pattern matching. A current trend — say, AI-generated SEO content flooding search results — resembles a past trend — say, content farms in 2011. Map the parallels. Identify the structural similarities and the differences. The verification is in the historical research.
None of these require a research budget. They require time, attention, and the willingness to do work that cannot be delegated to a prompt. That is the moat. The barrier to signal is not access to resources. It is willingness to do the verification that most publishers skip.
The risk: what happens if you bet wrong
No publishing strategy is risk-free. A signal-first strategy carries its own risks, and it is worth naming them explicitly.
Risk 1: Slow compounding
Signal content takes time to build momentum. A single verified-claim article may attract very little attention for months — until someone with influence cites it, or until the topic becomes timely, or until the search algorithm decides the site is authoritative enough to rank for competitive queries.
During that waiting period, a commodity strategy looks better. It produces more content, more traffic, more visible activity. The publisher pursuing signal must have the conviction — and the financial runway — to tolerate slow early returns.
Risk 2: Verification errors
When you publish verified claims, you stake your reputation on their accuracy. A single high-profile error can damage the trust stock you have spent years building. Commodity content carries lower reputational risk because fewer readers invest enough attention to notice errors, and fewer errors are consequential enough to matter.
Signal publishing requires robust error-correction practices: clear corrections policies, willingness to update articles when facts change, and transparency about the limits of your verification. The trust you build is only as durable as your willingness to admit when you are wrong.
Risk 3: The AI verification gap may narrow
Today, AI cannot independently verify claims — it can only generate text that resembles verified claims. But this gap may narrow over time. AI systems may gain the ability to run experiments, audit sources, and test hypotheses.
If that happens, the advantage of human-performed verification would shrink. But even in that scenario, two advantages remain. First, verification requires action in the world — running tests, talking to people, observing outcomes — that AI cannot perform without physical agency. Second, the trust relationship between publisher and reader depends on accountability, not just verification. A reader trusts a publisher not just because the publisher's claims are accurate, but because the publisher has a reputation to protect. AI has no reputation to protect. It cannot be held accountable.
The moat is not just the verification. It is the skin in the game.
FAQ
Doesn't AI make it easier for everyone to produce signal, not just commodity content?
AI makes it easier to produce text that resembles signal. It does not make it easier to produce actual signal, because actual signal requires verification that AI cannot perform. A researcher who uses AI to help structure an article about their original experiment produces signal. A publisher who uses AI to generate an article that claims to report an experiment that never happened produces commodity content in signal's clothing. The difference is the verification, not the tool.
What if my niche is too narrow for original research?
Narrow niches are often better for signal, not worse. In a narrow niche, the bar for "non-obvious" is lower because fewer people are doing the work. A verified claim about a niche topic is more valuable than a verified claim about a broad topic, because fewer alternatives exist.
How do I balance signal content with commodity content for SEO?
Commodity content has a legitimate role in a content strategy — it captures long-tail search traffic, establishes topical breadth, and provides entry points for new readers. The mistake is treating commodity content as the core of the strategy. Use commodity content to fill gaps and capture traffic. Use signal content to build trust, earn links, and create the body of work that makes the site worth returning to. The ratio depends on your resources, but a useful heuristic: at least half your publishing effort should go to signal.
How do I know if my content is signal or commodity?
Apply the substitution test: if a reader could get the same value from an AI-generated summary of the top five search results, your content is commodity. If your content contains at least one non-obvious, verified, decision-relevant claim that would not exist without your specific work, it contains signal.
The durable bet
The AI content wave is not going to recede. Every month, models get better at generating text that passes surface-level quality checks. The cost of producing commodity content will continue to fall. The flood of plausible-but-hollow text will continue to rise.
In that environment, the publishers who thrive will not be the ones who produce the most content, or the ones who optimize their AI pipelines most aggressively, or the ones who find the cleverest prompts. They will be the ones who produce the most signal per unit of reader attention — the ones whose content earns trust because it is built on verification, not generation.
This is, in a sense, good news. It means the market is not rewarding the publishers with the best AI tools. It is rewarding the publishers who do the work that AI cannot do. The work — verification, judgment, synthesis, accountability — is hard, slow, and expensive in human attention. It has always been hard, slow, and expensive. AI has not changed that. It has only made it more valuable.
The bet is simple: in a world of infinite text, signal is the only asset that compounds. Everything else is just filling the channel.