Ages ago, I wrote that I was never a fan of the term "singularity" in the context of AI. When mathematical singularities pop up in physics, it's usually a sign the physics is missing something.
Instead, I like to think of an AI "event horizon": a point in the future that always seems to be ahead of you no matter how close you get, beyond which you can no longer predict what happens next.
Obviously where that point sits will depend on how much attention any given person is paying to developments in the field (I've seen software developers surprised that Google Translate has an AR mode, a decade after the tech was first demonstrated). But there is an upper limit even for people who obsessively consume all public information about a topic: if you're reading about it when you go to sleep, will you be surprised when you wake up by the overnight developments?
Well, your last edit to that repo was the day after I posted that it was beyond current AI.
(And sure, I had all of Linux in mind, not just a proof of concept; you're upfront that it isn't even fully tested; and the person I was replying to four days ago also said "that basically isn’t Linux since LLM are trained on said the source code", which I can't confirm or refute. But what I do see is more than I was expecting the current SOTA to stay coherent over.)