AI slop is rampant on social media right now. It has become the easy way to grow an account and gain followers: it takes less than a minute to ask an LLM to write a post about something interesting and publish it. A $20-per-month plan from a major provider would produce more accurate output with fewer (though not zero) hallucinations, yet the accounts I see seem to rely on cheap models that make frequent mistakes and hallucinate facts.
My theory is that the hallucinations add extra spice, making the posts feel more interesting and therefore more likely to be shared.
It's a difficult time for social media users who haven't yet learned to recognize AI spam and to understand why it can't be trusted.