One is the centrally controlled 'large' AI models that become monitoring apparatuses of the state. I don't think there needs to be much discussion on why this is a bad idea.
This said, open (weight) models don't save us from problems either. It's not hard to imagine a small capable model that can boot strap itself into running on consumer hardware and stolen cloud resources being problematic on the net spreading its gremlin like behavior wherever it could. The big AI companies would gladly use AI behaviors like this to dictate why all models/hardware should be controlled and once the general population is annoyed enough, they will gladly let that happen.
Lastly, prompt injections are not a, at least completely, solvable problem. To put it another way, this is not a conventional software problem, it's a social engineering problem. We can make models smarter, but even smart humans fall for stupid things some of the time, and models don't learn as they go along so an attacker pretty much as unlimited retries to trick the model.
I think this is why the LLM revolution has been so existentially depressing for so many senior engineers - we’ve spent our entire careers fighting for exactly what the author suggests, and we couldn’t make progress against the product and management cabal when code took time and people to write. Now code is “free,” and we’re all being told to just get on the train, don’t worry about the bridge being out, we’ll build a new one when we get there, you see how fast we’re going now?
It looks like every generation has to learn this for themselves though.