Both OpenClaw and MSDOS gained a lot of traction by taking shortcuts, ignoring decades of lessons learned, and delivering now what might have been ready next year. MSDOS (or its QDOS predecessor) was meant to run on "cheap" microcomputer hardware and appeal to tinkerers. OpenClaw is supposed to appeal to YOLO/FOMO sentiments.
And of course, neither will be able to evolve to match its eventual real-world context. But for some time (much longer than intended), that's where it will be.
Memory isolation is enforced by the MMU; it's not done in software.
Maybe you were confused with Linux, which came later and landed in a soft 32-bit x86 bed with CPU rings and page tables/virtual memory ("protected mode", named for that reason...).
That being said, OpenClaw is criminally bad, but as such, fits well in our current AI/LLM ecosystem.
Sad? That was the best part of DOS. Some of my fondest computing memories were on DOS - warts and all. Being able to live-hexedit your drive and memory to cheat in games, the sheer freedom you got from hacking around the OS, is unimaginable on modern operating systems - even Linux.
You were in full control of your computer and you decided how it could be used, not some mega corporation or compliance agency.
DOS wasn't sad, it was fun.
Problem is, I was just learning, and the Mac was running System 7, which, like MS-DOS, lacked memory protection.
So, one backwards test at the end of your loop and you could -- quite easily -- just overwrite system memory with whatever bytes you like.
I must have hard-locked that computer half a dozen times. Power cycle. Wait for it to slowly reboot off the external 20MB SCSI HDD.
Eventually I took to just printing out the code and tracing through it instead of bothering to run it. Once I could get through the code without any obvious mistakes I'd hazard a "real" execution.
To this day, automatic memory management still feels a little luxurious.
But my main takeaway is that from the security standpoint this is a ticking bomb. Even under Docker, for these things to be useful there is no going around giving it credentials and permissions that are stored in your computer where they can be accessed by the agent. So, for the time being, I see Telegram, my computer, the LLM router (OpenRouter) and the LLM server as potential attack/exfiltration surfaces. Add to that uncontrolled skills/agents from unknown origins. And to top it off, don't forget that the agent itself can malfunction and, say, remove all your email inboxes by mistake.
Fascinating technology but lacking maturity. One can clearly see why OpenAI hired Clawdbot's creator. The company that manages to build an enterprise-ready platform around this wins the game.
Using my Mac or Windows PC, it's very rare that I actually want an app to access files on its own that it didn't create. Like if I write a doc in Word, I'm only going to edit it in Word. I might want to email a copy to someone, but that doesn't mean Mail needs RW access to the original. I might copy a video clip into editing software, but again it doesn't need to touch the original. Programs often need their own dirs for caches, settings, etc, and those don't even need to be read by other programs. It's also annoying how they can write anywhere in ~/ and end up scattering stuff in random places. The iPhone sandboxing system works great for all that, where apps have to explicitly share to others. The Mac file access rules tried to address this but still seem like Swiss cheese while also getting in the way of normal usage, and there's seemingly nothing in Windows or Linux (unless you're going out of your way with jails).
Other APIs besides file are a bigger challenge. That was gated away from the start on iPhones but not on desktop OSes. If they don't find a solution, web and mobile apps are going to keep taking over.
It's like your actual assistant. Most of this can be done inside ChatGPT/Claude/Codex now. Their only remaining problem for certain agentic things is being able to run them remotely. You can set up Telegram with Claude Code, but it's somehow even more complicated than OpenClaw.
I am not interested in the "claw" workflow, but if I can use it for a safer "code" environment it is a win for me.
When people vibe-code, usually the goal is to do something.
When I hear about people using OpenClaw, usually the goal seems to be… using OpenClaw. At the cost of a Mac Mini, safety (deleting emails and so on), and security (the litellm attack).
I have similar concerns, and somehow I think openclaw will not be remembered in 30 years' time, so the comparison does not land for me.
And if you remove either access to data or access to the internet, then you kill a good chunk of the usefulness.
I remember Apple introducing sandboxing for Mac apps and extending the deadlines because no one was implementing it. AFAIK, many developers still don't release apps there simply because of how limiting it is.
Ironically, the author suggests to install his software by curl’ing it and piping it straight into sh.
I'm reading that Swedish IT consultant's rant in the voice of Swedish Guy.
So yeah, perhaps it isn't fooling the author, but it doesn't matter for the other billions of people.
By the 90s, when I was working on DOS/Windows software, I was inundated with resumes from engineers who had been laid off from those very same companies. And it wasn't until Windows XP in 2001 that most people moved away from DOS.
I feel like every twenty years or so, kids look at the current computing landscape and say, "That's too complicated! I'm not going to learn all that--I'm just going to invent my own thing." DOS was that in the 80s, the Web in 2000. Maybe OpenClaw is that for the 2020s. AI is certainly going to reinvent everything.
The DEC people were right about DOS: it was barely more than a toy. But we didn't abandon it and go back to the safety of VMS. Instead, we improved it until it had all the security and capabilities we needed.
I don't know if OpenClaw is going to win or not, but if you remember MS-DOS, you wouldn't count it out.
I too remember DOS. Data and code finely blended and perfectly mixed in the same universally accessible block of memory. Oh, wait… single context. nvm
I'm writing a Blender plugin. I don't know what I did to offend macOS, but somewhere I changed something or touched a file, and now macOS refuses to open Blender until I go through an arcane ritual of telling the OS it's actually OK to use.
I don't like the lack of control in the name of security, or that you have to become ever more of an expert to actually use the things you own. Security needs to be done carefully and intentionally, not just blasted everywhere. What is being called security is very often just control by the creators.
Well, that may be the first time it wasn't a mainframe. Plus this was for the layaway project of the mid-'80s, when IBM-compatible PCs already existed.
What most people don't realize is that before the IBM PC, Wal-Mart was way ahead of all other retailers with its POS, inventory, and logistics systems from all over the US tied into headquarters in Bentonville, Arkansas.
Using telephone modems, of course, like anybody else, but this was before they had any "superstores"; they were still quite small, and not in any big cities. Yet.
But they were ready. That was actually all Sam had been planning for since the beginning, and they had already become quite successful as the fastest-growing chain of mainly rural "country stores".
A typical Wal-Mart was dwarfed by a K-Mart store, and a full-size Sears seemed like a super store by comparison. They had mainframes and POS too but Wal-Mart just took it to the next level. Ran circles around them digitally.
They really did scorch some earth with those kinds of advantages when they came to the big cities, but after the PC came out I would say they regressed more toward the mean. The momentum still overwhelmed, though.
*Claw is more like Windows 98. Everyone knows it is broken; nobody really cares. And you are almost certainly going to be cryptolocked (or worse) because of it. It isn't a matter of if, but when.
Packages shipping as part of Linux distros are signed. Official Emacs packages (but not installed by the default Emacs install) are all signed too.
I thankfully see some projects, released outside of distros, that are signed by the author's private key. Some of these keys I have saved (and archived) for years.
I've got my own OCI containers automatically verifying hashes signed with authors' long-known public keys (i.e., I don't blindly trust a brand-new signing key the way I trust one I know the author has been using for 10 years).
Adding SHA-hash pinning to "curl into bash" is a first step, but it's not sufficient.
Software shipped properly isn't just pinning hashes into shell scripts that are then served from pwned Vercel sites, because the attacker can "pin" anything he wants on a pwned site.
Proper software releases are signed. And they're not "signed" by the 'S' in HTTPS, as in "that Vercel-compromised HTTPS site is safe because there's an 'S' in HTTPS".
Is it hard to understand that a hash, signed (and then pinned) with a private key that lives on an airgapped computer, is harder to attack than an online server?
We see major hacks nearly daily now. The cluestick is hammering your head, constantly.
When shall the clue eventually hit the curl-basher?
Oh wait, I know, I know: "It's not convenient" and "Buuuuut HTTPS is just as safe as a 10 years old private key that has never left an airgapped computer".
Here, a fucking cluestick for the leftpad'ers:
https://wiki.debian.org/Keysigning
(btw Debian signs the hash of testing release with GPG keys that haven't changed in years and, yes, I do religiously verify them)
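The sign-then-verify flow being argued for here can be sketched in a few lines. This is a self-contained illustration with throwaway names; in reality, the signing key would live on the author's airgapped machine, and the verifier would check against a public key they've trusted for years, not one fetched from the same possibly-compromised site.

```shell
set -e
# Throwaway keyring so this demo doesn't touch your real GPG setup.
export GNUPGHOME="$(mktemp -d)"
chmod 700 "$GNUPGHOME"

# Stand-in for the author's long-lived signing key.
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key "Demo Author <demo@example.invalid>" ed25519 sign never

# "Release side": hash the artifact, then sign the hash file (detached).
echo "pretend this is a release tarball" > release.tar
sha256sum release.tar > SHA256SUMS
gpg --batch --pinentry-mode loopback --passphrase '' \
    --detach-sign --armor SHA256SUMS

# "User side": verify the signature FIRST, then check the pinned hash.
# A pwned site can swap the hash file, but not forge the signature.
gpg --verify SHA256SUMS.asc SHA256SUMS
sha256sum -c SHA256SUMS
echo "release verified"
```

The point of the detached signature is that the trust anchor is the author's public key (obtained once, out of band), not whatever bytes the download server happens to serve today.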