The Publisher's Dilemma: Disclosing AI Use Without Losing Reader Trust
Every publisher who uses AI is sitting on a trust decision that gets harder the longer they postpone it.
The decision looks simple from the outside. Use AI? Disclose it. Don't disclose? You are hiding something. The moral calculus seems clear.
From the inside, it is messier. AI use is not binary, something you either did or did not do. It is a spectrum: AI suggested the structure, AI drafted a section, AI rephrased three paragraphs, AI generated the research summary you built the argument on, AI checked your grammar, AI translated a source. Some of these feel like "using AI." Some feel like using a spellchecker. Where do you draw the line? And once you draw it, what do you tell readers without making them think the entire piece is machine-generated?
This is the publisher's dilemma. Disclose too much, and readers assume the work lacks human judgment. Disclose too little, and you build a house on sand: one exposure, one accusation, and trust collapses.
This essay is not an argument for or against AI disclosure. It is a map of the decision landscape: what readers actually care about, the different disclosure models available, the trust accounting behind each, and a practical framework for publishers who want to be honest without undermining their own work.