Revised on 2026-04-28 to v1.1. See revision history below.

Look at any AI-skeptic feed in 2026 and you’ll see the word slop doing heavy work. It names something real: low-entropy, mass-produced text without an author behind it, flooding feeds, search results, comment sections, product reviews. There’s now a small genre of essays explaining why this is bad for civilization. Some of them are excellent. Some of them are slop themselves.

I want to ask a different question. Not whether AI slop is bad — clearly some of it is — but why we’re so confident we can recognize it. Because if you squint at a lot of professional life, much of what we produce on a normal Tuesday already qualifies. Legal boilerplate. Corporate memo-speak. Quarterly reports that survive only because nobody reads them. Status updates that say nothing. Standardized medical notes whose function is mostly forensic.

It is mass-produced, low-entropy, formulaic text. It is defended on grounds of compliance, coverage, due diligence. And it is everywhere.

So: what’s the difference?

I. The case that they’re the same

Run a paragraph of corporate boilerplate through a compression algorithm. Run the average AI-generated marketing blog post through the same algorithm. The compression ratios are not far off. Both are, in information-theoretic terms, low-entropy: predictable, redundant, fillable from templates.
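The claim is easy to sanity-check. Here is a rough sketch using Python's zlib as a stand-in for "a compression algorithm"; the sample texts are invented for illustration, and the ratio measures redundancy, not worth:

```python
import zlib

def compression_ratio(text: str) -> float:
    """Compressed size divided by original size; lower = more redundant."""
    raw = text.encode("utf-8")
    return len(zlib.compress(raw, level=9)) / len(raw)

# Template-like text: the same reassurances, repeated.
boilerplate = (
    "We take your privacy seriously. We take your privacy seriously. "
    "Your security is our top priority. Your security is our top priority. "
) * 4

# A short sentence with little internal repetition.
novel = "Quick zephyrs blow, vexing daft Jim near the quartz sphinx."

print(compression_ratio(boilerplate))  # low: highly predictable, fills from a template
print(compression_ratio(novel))        # higher: less redundancy for the algorithm to exploit
```

A low ratio is not a verdict on value, only on predictability; the essay's point is that boilerplate and generated filler score alike on exactly this axis.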

And both have the same surface: a sequence of formally correct sentences arranged in the formally expected order, superficially indistinguishable from authored prose, fundamentally not communicating anything new.

Read the average non-disclosure agreement. Read the average commencement speech. Read the boilerplate “we take your privacy seriously” email after a data breach. Read the disclosure section of a financial filing. They are not communicating. They are fulfilling.

If I told you those four artifacts had been generated by GPT-9, you might believe me. The reason you’d hesitate is not that they look more authored than AI slop. It’s that you already know who fulfilled them, and you’re used to the idea that this is what their job is.

II. Three distinctions, and what they used to do

Three things have historically separated bureaucratic slop from AI slop.

A. Accountability — skin in the game. A lawyer signs the boilerplate. An accountant attests to the spreadsheet. A doctor’s signature is on the discharge summary. Slop, by contrast, is characterized by the removal of accountability: nobody is responsible for the truth of the output. When AI generates a thousand fake product reviews, no one is on the hook. The artifact is mass-produced precisely because no one had to stand behind it.

This was the strongest distinction. It is also the first to erode. The moment a lawyer pastes an LLM’s output into a brief without checking — and the appellate cases are now legion — the signature is still there but the skin is gone. The artifact has not changed; the liability behind it has.

B. Intent — utility versus capture. Professional work is usually utility-driven: solve a problem, satisfy a requirement, convey specific information to someone who needs it. AI slop is capture-driven: flood feeds, hijack SEO, take up shelf space in the search index, harvest attention or clicks. Professional work tries to solve a task. Slop tries to win a volume game.

This distinction also holds — until you notice that a good portion of professional work is itself a volume game. Quarterly performance reports for a department nobody runs. Compliance documentation for an audit nobody reads. Slide decks built to exist, not to be understood. The intent had already drifted; the AI didn’t drift it.

C. Value-to-noise ratio. Templates exist to make signal findable — a fixed frame so the reader can locate the new information quickly. The boilerplate parts of a contract are skipped on purpose; the negotiated clauses are where attention lands. In this view, the slop enables the signal: it does the cognitive work of making the rest legible.

Slop, by contrast, mimics the appearance of information without delivering any. It costs the reader more to filter than the writer to produce. That asymmetry is what makes it bad.

This is the most defensible of the three distinctions. It is also where things get interesting.

III. The tipping point: when professionalism becomes slop

The line where professional work becomes slop is not subtle. It tips when the artifact costs the writer less to produce than it costs the reader to filter.

A doctor using an LLM to summarize one patient’s history into a clean discharge note: efficiency. The clinician’s attention has been freed for the patient.

A doctor using an LLM to generate a thousand discharge notes to satisfy a billing audit: slop. The artifact has been decoupled from the underlying reality it claimed to describe.

The same tool. The same template. The same surface. The difference is whether the artifact is downstream of attention, or upstream of fraud.

This is the part of the argument that should be making us nervous. The cost of producing professional artifacts has now dropped, in many fields, to near zero. The artifacts have not changed. What changed is whether anyone is still upstream of them.

This is what Cory Doctorow has been calling enshittification at platform scale: the moment incentives flip and the producer’s interest in the artifact decouples from the reader’s. The slop tipping point is enshittification compressed into a single workflow — the producer no longer has any reason to care whether the document is read, because producing it costs nothing. Once that asymmetry takes hold, the artifact stops being a service to the reader and starts being a debt collected from them.

IV. The inversion: maybe AI is removing the inflammation

Now the uncomfortable reading.

Maybe the right way to read what’s happening in 2026 is not that AI introduced slop. Maybe AI revealed slop, by removing the friction that hid it.

A lot of what we called “professional work” was always low-information ritual — boilerplate maintained because somebody had to type it, status reports written because somebody had to attend the meeting, decks built because somebody had to fill the calendar. The expensive friction of producing this material was, paradoxically, the only thing that gave it the appearance of meaning. We didn’t see the slop because we were too busy generating it.

Strip the friction away and the slop is naked. The pages are still there; the stamps are still there; the templates are still there; only the human cost has gone. And what we are now staring at — and calling AI slop, with offended outrage — is, in many cases, the exact same artifact our own profession was producing all along. Graeber was more than a decade too early to use the word slop, but Bullshit Jobs is the book about this category of work, and his diagnosis prepared the ground for what AI is now revealing.

This has a sharp corollary. If AI is fat-trimming, the moral panic is partly grief. The fat was a job. The inflammation was a career. We are watching a market discover that a great deal of what it paid for was the experience of someone fulfilling the form, not the form itself. That experience is gone, and the form, finally, is being read on its own merits — which are, often, none.

The honest question

I don’t think this argument fully exonerates AI slop. The fake reviews, the SEO spam, the engagement-bait articles, the regulatory comment letters generated by the million — those are real harms by any standard, and the people downstream are not professionals exercising judgment, they are readers being attacked. Harry Frankfurt’s On Bullshit was already the philosophical predecessor of this complaint: language indifferent to truth, deployed for effect. AI didn’t invent bullshit; it industrialized it. That’s a real harm.

But the argument forces a question we’d rather not ask. Inside our own work — your work, my work, the work of whatever guild we belong to — how much of what we produce on a normal Tuesday would survive the test we are happily applying to AI slop?

If the answer is most of it, the panic is righteous and useful.

If the answer is honestly, not that much, then the panic is also a confession. We are watching a machine do the job we were already doing, slightly faster, slightly cheaper, with the same value-to-noise ratio. And the part of us that flinches is the part that knew.

Revision history

  • v1.1 — 2026-04-28 — Added a Cory Doctorow citation on enshittification at the end of Movement III, framing the slop tipping point as enshittification compressed into a single workflow.
  • v1.0 — 2026-04-28 — First publish.
