What Is AI Slop? The Internet's Favorite New Insult, Explained
You know it when you see it. A Facebook post of a weirdly smooth Jesus made of shrimp, shared earnestly by thousands. A Google result for “best running shoes” that reads like a robot summarizing other robot summaries. A coworker’s Slack message that’s three paragraphs of polished nothing.
There’s finally a word for it: AI slop. And it’s become so ubiquitous that Merriam-Webster named it the word of the year for 2025.
Where the Term Came From
“Slop” has meant “soft mud” since the 1700s. By the 1800s it meant food waste — pig slop — and then more generally, rubbish. It was a poet and technologist writing under the name “deepfates” who first applied the word to AI-generated content on X in early 2024.
The term went mainstream in May 2024 when developer Simon Willison championed it on his blog: “If it’s mindlessly generated and thrust upon someone who didn’t ask for it, slop is the perfect term for it.”
The analogy to spam was deliberate. Just like we needed a word for unwanted email in the ’90s, we needed one for the AI equivalent. “Slop” stuck because it’s fun to say and everyone immediately gets it.
What Counts as AI Slop
Merriam-Webster’s official definition: “digital content of low quality that is produced usually in quantity by means of artificial intelligence.”
An academic paper published in January 2026 identified three defining properties:
- Superficial competence — it looks polished enough to pass at a glance
- Asymmetric effort — it takes seconds to create but minutes (or hours) to evaluate
- Mass producibility — one person can generate thousands of pieces
The key word in that definition is “low quality.” Not all AI-generated content is slop — far from it. Millions of people use AI every day to do genuinely useful work: drafting emails, summarizing research, writing code, organizing information. The difference is whether someone actually put thought into the output or just hit “generate” and walked away.
Where You’ll Find It
Search results
AI-generated content accounted for 19% of Google search results as of early 2025, up from 7% the year before. Some estimates project that 90% of online content could be AI-generated by the end of 2026. (Not all slop, to be clear — but a lot of it.)
The most visible example: recipe blogs. Fortune reported that AI-generated recipes have flooded search results, often rewriting the same dish dozens of times with slight variations. Nobody actually cooked these.
Social media
Of the top 20 most-viewed Facebook posts in the U.S. last fall, four were obviously AI-generated. AI images of the Pope in a puffer jacket, soldiers made of pasta, and “Jesus shrimp” went viral, shared by millions who thought they were real. Some of these are harmless fun. Others… less so.
The workplace
Harvard Business Review published research showing that 41% of employees have encountered AI-generated “workslop” — content that looks polished but doesn’t actually say anything useful. Each instance costs nearly two hours of rework. For a 10,000-person company, that adds up to over $9 million a year.
The awkward part: about half of those surveyed viewed the senders as less creative and reliable. So if you’re pasting raw ChatGPT output into Slack, your coworkers have probably noticed.
Software and security
In January 2026, the open-source project cURL shut down its bug bounty program after being flooded with fake, AI-generated vulnerability reports — plausible-sounding security issues that didn’t actually exist, submitted by people hoping for bounty payouts.
The Real Problems With AI Slop
At the annoying end, slop means more scrolling to find what you’re actually looking for. But at scale, it creates some real problems:
Trust gets harder. When AI can generate a convincing product review in seconds, it gets harder to know which reviews reflect actual experience. That’s not great for anyone — consumers or the businesses with legitimately good products.
Creators get squeezed. Social media algorithms don’t distinguish between a photographer’s portfolio and an AI image farm. When the AI version is free and infinite, it crowds out people who make original work for a living.
Misinformation scales. AI-generated medical advice, legal information, and news articles can contain confident-sounding errors. AARP warned that this is particularly risky for people less familiar with how AI works.
That said, low-quality content existed long before AI. Content farms, clickbait, and SEO-stuffed articles have been clogging search results for over a decade. AI just made it faster and cheaper to produce.
How to Spot AI Slop
There’s no foolproof test, but researchers have identified some reliable patterns.
The vocabulary
Certain words spiked in frequency after 2023 in ways that almost certainly trace back to LLMs. The big ones: delve, tapestry, landscape (used abstractly), pivotal, testament, underscore, vibrant, intricate, foster, garner, and showcase. One or two in an article could be coincidence. A cluster of them is a strong signal.
The phrase “it’s worth noting” is so overused by AI that it’s basically a watermark at this point.
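As a rough illustration (not a tool from any of the research cited here), the vocabulary tells above can be sketched as a simple scoring heuristic. The word and phrase lists come from this section; the scoring function itself is an assumption:

```python
import re

# Signal words listed in this section. Exact-match only, so inflected
# forms like "underscores" or "delving" are missed -- this is a sketch,
# not a real detector.
SIGNAL_WORDS = {
    "delve", "tapestry", "landscape", "pivotal", "testament",
    "underscore", "vibrant", "intricate", "foster", "garner", "showcase",
}
SIGNAL_PHRASES = ("it's worth noting", "it is worth noting")

def slop_signal_score(text: str) -> int:
    """Count distinct signal words plus signal phrases found in the text."""
    lower = text.lower()
    words = set(re.findall(r"[a-z']+", lower))
    score = len(SIGNAL_WORDS & words)
    score += sum(phrase in lower for phrase in SIGNAL_PHRASES)
    return score

# One or two hits could be coincidence; a cluster is the signal.
print(slop_signal_score("We delve into a vibrant tapestry of ideas."))  # 3
print(slop_signal_score("I fixed the bug and shipped the release."))    # 0
```

A score of zero obviously doesn't prove a human wrote something, and a high score doesn't prove the opposite; it just flags text worth a second look.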
The structure
AI text loves a specific formula: bold inline headers followed by a colon, then a description. It overuses the rule of three (“adjective, adjective, and adjective”). It avoids simple “is” and “are” constructions in favor of fancier alternatives like “serves as” or “stands as.” And it leans heavily on em dashes and negative parallelisms (“It’s not just about X — it’s about Y”).
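The structural tells can likewise be sketched as a handful of regular expressions. These patterns are my own loose approximations of the formula described above, not an established detector:

```python
import re

# Loose approximations of the structural tells described above.
# Each pattern is an assumption about how the tell shows up in text.
STRUCTURE_PATTERNS = [
    re.compile(r"\*\*[^*\n]+\*\*:"),                # bold inline header + colon
    re.compile(r"\b(?:serves|stands) as\b", re.I),  # dodging plain "is"/"are"
    re.compile(r"not (?:just|only) about [^.]*?\u2014", re.I),  # negative parallelism + em dash
]

def structure_hits(text: str) -> int:
    """Count how many structural tells appear at least once in the text."""
    return sum(bool(p.search(text)) for p in STRUCTURE_PATTERNS)

sample = "**Speed**: It's not just about X \u2014 it's about Y. This serves as a reminder."
print(structure_hits(sample))  # 3
```

Any one of these patterns appears in plenty of human writing; as with the vocabulary, it's the pile-up that matters.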
The puffery
LLMs have a habit of inflating the importance of everything they write about. A small-town railway station becomes “a pivotal hub in regional transportation infrastructure.” A minor historical figure “left an indelible mark.” Everything “underscores the enduring legacy” of something. Researchers describe this as regression to the mean — specific facts get smoothed into generic, impressive-sounding statements that could apply to almost anything.
The images
AI-generated images have their own tells: too-smooth skin, warped hands, melting backgrounds, text that’s almost-but-not-quite readable, and a hyper-polished quality where everything looks slightly too perfect. These are getting harder to spot as models improve, but the uncanny valley effect is still there if you look closely.
The workplace version
HBR’s research on workslop found some practical tells: messages that are longer than the topic warrants, reports that restate the question as the answer, and presentations that sound impressive but contain no actual decisions or recommendations. If a Slack message reads like a press release, someone probably didn’t write it themselves.
What’s Being Done
Platforms are responding
Pinterest added a slider that lets users dial down AI content in their feeds. YouTube CEO Neal Mohan said reducing slop is a top priority for 2026. Google has penalized sites that violate its scaled content abuse policies, with some AI content farms losing nearly all their organic traffic.
New tools are emerging
Kagi Search launched SlopStop, a community-driven system that flags and downranks AI-generated content. Platforms like Cara (for artists) and Pixelfed ban AI-generated work entirely. Browser extensions like Bye Bye AI Slop filter it from Google results.
The industry is debating the term itself
Microsoft CEO Satya Nadella has called for moving past the word “slop,” arguing that 2026 will be “a pivotal year” for AI quality. Fair or not, the term has clearly struck a nerve.
The Line Is Blurrier Than You Think
Here’s the uncomfortable truth: almost everyone is using AI for content now. Your favorite newsletter probably uses it for first drafts. That well-written product description you liked? Probably AI-assisted. This blog runs on AI tools too.
The question isn’t really “was AI involved?” anymore — it’s “did someone care about the result?”
Using AI to research, draft, and polish a thoughtful piece isn’t slop. Using AI to generate 500 articles overnight with no human review is.
A good rule of thumb: if you wouldn’t put your name on it without the AI, don’t put your name on it with the AI.
AI slop is real, it’s growing, and it’s genuinely making parts of the internet worse. But the term is also becoming a catch-all insult for anything AI-generated, which isn’t quite fair either. Plenty of AI-assisted content is useful, well-made, and worth your time.
Ready to automate your busywork?
Carly schedules, researches, and briefs you—so you can focus on what matters.
Get Carly Today →
Or try our Free Group Scheduling Tool


