I don't think this is accurate. AI has a flavour or tone we all know, but it could just as easily have generated factually plausible statements (which this test could not diagnose) or otherwise plausible text.
I could not tell the real from fake music at all.
I support (and pay for) Kagi, but wasn't overly impressed here. At worst I think it might give people too much confidence. Wikipedia has a great guideline on spotting AI text and I think the game here should integrate and reflect its contents: https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing
(minor spoiler)
The text accompanying an image of a painting:
> This image shows authentic human photography with natural imperfections, consistent lighting, and realistic proportions that indicate genuine capture rather than artificial generation. Meindert Hobbema. The Avenue at Middelharnis (1689, National Gallery, London)
If veracity matters, use authoritative sources. Nothing has really changed about the skills needed for media literacy.
We're meant to assume correct sentences were written by humans and that AI adds glaring factual errors. I don't think it is possible at this point to tell a single human-written sentence from an AI-written sentence with no other context, and it's dangerous to pretend it is this easy.
Several of the AI images included obvious mistakes a human wouldn't have made, but some of them also just seemed like entirely plausible digital illustrations.
Oversimplifying generative AI identification risks overconfidence that makes you even easier to fool.
Loosely related anecdote: A few months ago I showed an illustration of an extinct (bizarre looking) fish to a group of children (ages 10-13ish). They immediately started yelling that it was AI. I'm glad they are learning that images can be fake, but I actually had to explain that "Yes, I know this is not a photo. This animal is long extinct and this is what we think it looked like so a person drew it. No one is trying to fool you."
>This was actually AI-generated slop! Repeats 'water is wet' multiple times.
I didn't know writing "water is wet" repeatedly was enough to de-humanize you.
>In many situations, it could be argued that grass may sometimes appear to have a greenish quality, though this might not always be the case.
>This was actually AI-generated slop! Won't commit to 'grass is green' and uses uncertain words.
What? Not all grass is green.
Fun times ahead.
> Bees collect pollen from flowers and make honey. They also drive tiny cars to get from flower to flower!
The explanation given is that it’s not factually correct, therefore it’s AI slop. Maybe I didn’t pay enough attention to the instructions, but aren’t humans also capable of writing text that is not factually correct, sometimes not out of ignorance but for artistic or humorous purposes? This example sounds like something written by a child with an active imagination, not the kind of “seems plausible but is actually false” slop that LLMs come up with.
> Correct! Well done, detective!
> This image shows authentic human photography with natural imperfections, consistent lighting, and realistic proportions that indicate genuine capture rather than artificial generation.
> Albert Pinkham Ryder, Seacoast in Moonlight (1890, the Phillips Collection, Washington)
The image is not photography; I suppose technically it's a photograph of a painting, but it's still confusing text.
This sounds to me like the message is "poor fakes are generated, and everything else is genuine", which I think would be a very counterproductive message, even now.
I started on "Level 1" and got 2 things wrong (both false positives if it matters) and instead of feeling like I learned anything, I felt as though I was set up to fail because the image prompt was missing sufficient context or the text prompt was too simple to be human. Either I was dumb or the game was dumb.
Maybe I'm just too old and 8-11 year-old kids wouldn't be so easily discouraged, but I'd recommend:
1. Pick on one member of the "slop syndicate" at a time.
2. Show some examples (evidence) before beginning the evaluation.
First of all, there are only 27 "slop" image examples but 200 real ones - a very bad ratio. And almost all the real examples are just dated photographs, paintings, or photos of old books - there are genuinely 0 (not joking) modern photos or pieces of digital artwork. Also, multiple "slop" image examples were actual screenshots of the ChatGPT interface, or clearly cropped screenshots.
Text is even worse - they somehow present it as if LLMs cannot write factually correct or simple text.
I genuinely believe that they should take this down immediately and do a major rework, because at this stage it will only do harm. It might teach the children or adults who complete this that AI can never write factually correct text or create very realistic-looking photos (good luck with Nano Banana Pro).
P.S. To see how bad it is, just scrape https://slopdetective.kagi.com/data/images/not_slop/{file} from image_001.webp to 200 and slop/image_001.webp to 027.
Also see https://slopdetective.kagi.com/data/text/slop/l3_lines.json and https://slopdetective.kagi.com/data/text/not_slop/l3_lines.j... for real vs LLM-written text.
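For reference, the filename pattern described above can be enumerated with a few lines of Python. The paths and counts (200 not_slop, 27 slop) are exactly as stated in the comment; the download step itself is only a sketch, not something I've run against the live site:

```python
# Build the image URL lists described above (200 "not_slop", 27 "slop").
base = "https://slopdetective.kagi.com/data/images"
urls = [f"{base}/not_slop/image_{i:03d}.webp" for i in range(1, 201)]
urls += [f"{base}/slop/image_{i:03d}.webp" for i in range(1, 28)]

# To actually fetch them, something like (untested sketch):
# import urllib.request, pathlib
# for url in urls:
#     name = "/".join(url.rsplit("/", 2)[-2:])   # e.g. "slop/image_001.webp"
#     pathlib.Path(name).parent.mkdir(exist_ok=True)
#     urllib.request.urlretrieve(url, name)
```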
>Fake stuff made by computers that tries to look like it was made by real people. It's everywhere online!
Tricking people is not what makes it slop. Being low quality is what makes it slop. This is a dangerous definition, as it implies that anything AI-generated could be considered slop, even if it were higher quality than the regular alternative.
https://arxiv.org/abs/2510.15061
Also somewhat tangentially relevant video: https://www.youtube.com/watch?v=Tsp2bC0Db8o
https://www.astralcodexten.com/p/ai-art-turing-test
Though maybe these are not examples of "slop" but instead good use of AI?
hey kids, learn about ai slop by reading this guide to ai slop written by ai and full of ai slop mistakes. sheesh
I think you gotta start with a definition of what AI slop is and why it matters. Most of what LLMs generate is not obviously incorrect.