Discussion about this post

Pawel Jozefiak

The irony you identify is the exact thing I ran into building AI experiments daily: people who complain about AI slop are often producing it, because the output looks fine until you ask whether it reveals anything interesting.

AI slop isn't a model failure - it's a direction failure. The model executes competently on whatever you give it. If you give it vague instructions, you get fluent generic content. The saturation-breeds-skepticism dynamic you describe in commerce shows up in builder output too: undirected AI produces unit converters and word counters.

The model had the capability for something better; the direction just wasn't there. Wrote about this gap from the experiment side: https://thoughts.jock.pl/p/directed-ai-experiments-vibe-business

Jay White

Referring to your first illustration, "Google Queries for 'AI Slop,'" I am curious about what appears to be a roughly three-month periodicity in the upward-trending data. The trend itself is understandable, but the seemingly periodic behavior, to me at least, is not.

