You can run NeXTStep in your browser by clicking the link above. A couple of weeks ago you could run FrameMaker as well. I was blown away by what FrameMaker of the late 1980s could do. Today's Microsoft Word can't hold a candle to it!
Edit: Here's how you start FrameMaker:
In Finder go to NextDeveloper > Demos > FrameMaker.app
Then open the demo document and browse its pages. Prepare to be blown away. You could do that in 1989 with like 64 MB of RAM??
In the last 37 years the industry has gone backwards. Microsoft Word has stagnated for the last few decades due to a lack of competition.
For example, there was the case of Claude Code using React to figure out what to render in the terminal. That in itself adds latency, and its devs lament that they have "only" 16.7 ms to hit 60 FPS. On a terminal. Terminals have been capable of far more than that since their inception. Primeagen shows an example [0] of how even the most change-heavy terminal applications run so fast that there is no need to diff anything, just display the new changes!
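A rough way to see how much headroom a terminal actually has: skip diffing entirely, repaint the whole screen every frame, and time it. A minimal Python sketch, where the frame size, frame count and the ANSI home-cursor repaint are my own assumptions, and the numbers depend heavily on your terminal emulator:

    import sys, time

    ROWS, COLS, FRAMES = 40, 120, 300   # assumed screen size and frame count

    # Pre-build full frames so the timing measures terminal output, not string building.
    frames = []
    for n in range(FRAMES):
        body = "\n".join(
            "".join(chr(33 + ((r * COLS + c + n) % 90)) for c in range(COLS))
            for r in range(ROWS)
        )
        frames.append("\x1b[H" + body)   # ESC[H moves the cursor home, then repaint everything

    start = time.perf_counter()
    for frame in frames:
        sys.stdout.write(frame)          # no diffing, just write the new content
        sys.stdout.flush()
    elapsed = time.perf_counter() - start

    print(f"\n{FRAMES / elapsed:.0f} full redraws/s ({1000 * elapsed / FRAMES:.2f} ms per frame)")

On a modern terminal emulator this kind of whole-screen repaint typically comes in well under the 16.7 ms budget, which is exactly the point.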
Not that everything we want an agent to do is easy to express as a program, but we do know what computers are classically good at. If you had to bet on a correct outcome, would you rather have an AI model sort 5000 numbers "in its head", or have it write a program to do the sort and execute that program?
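A sketch of the second option, where ask_llm is a hypothetical stand-in for whatever model API you use (not a real client; its return value is hard-coded here so the snippet runs on its own):

    import random

    def ask_llm(prompt: str) -> str:
        # Hypothetical model call; pretend the completion is this tiny program.
        return "def solve(nums):\n    return sorted(nums)\n"

    numbers = [random.randint(0, 10**9) for _ in range(5000)]

    # Option A: paste all 5000 numbers into the prompt and hope the completion is a
    # correct permutation. Token cost scales with the data; correctness is a matter of luck.
    # answer = ask_llm("Sort these numbers: " + ", ".join(map(str, numbers)))

    # Option B: ask for a program once, then let the machine do what it's good at.
    source = ask_llm("Write a Python function solve(nums) that returns nums sorted ascending.")
    namespace = {}
    exec(source, namespace)              # run model-written code only inside a real sandbox
    result = namespace["solve"](numbers)

    assert result == sorted(numbers)     # deterministic, cheap to verify, effectively instant

The assert is the real point: a generated program gives you something you can verify; an answer produced "in its head" mostly doesn't.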
I'd think this is obvious, but I see people professionally inserting AI models into very weird places these days, just so they can say they are GenAI adopters.
It contains a helpful insight: there are multiple modes in which to approach LLMs, and that helps explain the massive disparity in outcomes people get from them.
Off topic: This article is dated "Feb 2nd" but the footer says "2025". I assume that's a legacy generated footer and it's meant to be 2026?
This is a flip side of the bitter lesson. If all attention goes into the AI algorithm, and none goes into the specific one in front of you, the efficiency is abysmal and Wirth gets his revenge. At any scale larger than epsilon, whenever possible LLMs are better leveraged to generate not the answer but the code to generate it. The bitter lesson remains valid, but at a layer of remove.
Today the same argument is rehashed - it's outrageous that VS Code uses 1 GB of RAM, when Sublime Text works perfectly in a tiny 128 MB.
But notice that the tiny, optimized, well-behaved figure of today, 128 MB, is 30 times larger than the outrageously decadent amount from Wirth's time.
If you told Wirth "hold my beer, my text editor needs 128 MB", he would just not comprehend such a concept; it would seem like you have no idea what numbers mean in programming.
I can't wait for the day when programmers 20 years from now will talk about the amazingly optimized editors of today - VS Code, which lived in a tiny 1 GB of RAM.
Really wish that there were drivers available to make that run nicely/natively on a Raspberry Pi rather than in an emulator:
http://pascal.hansotten.com/niklaus-wirth/project-oberon/obe...
> You can ask an AI what 2 * 3 is and for the low price of several seconds of waiting, a few milliliters of water and enough power to watch 5% of a TikTok video on a television, it will tell you.
This might be what many of the companies that host and sell time with an LLM want you to do, however. Go ahead, drive that monster truck one mile to pick up fast food! The more that's consumed, the more money goes into the pockets of those companies....
> The instincts are for people to get the AI to do work for them, not to learn from the AI how to do the work themselves.
Improving my own learning is one of the few things I find beneficial about LLMs!
If the results are expected to be really good, people will wait a seriously long time.
That’s why engineers move on to the next feature as soon as the thing is working - people simply don’t care if it could be faster, as long as it’s not too slow.
It doesn’t matter what’s technically possible - in fact, a computer that works too fast might be viewed as suspicious. Taking a while to give a result is a kind of proof of work.
The cause of that is that the companies with the big models are actually in the token-selling business, marketing their models as all-around problem solvers and life improvers.
Maybe one day that will change.
The EU should do a radical social restructuring betting on no growth. Perhaps even banning all American tech. A modern Tokugawa.
Economically, something like memory is in the vicinity of 10 orders of magnitude cheaper today relative to 1970 (a). Similar things can be said about processors. This means the incentive to invest costly engineering resources (b) into optimizing software is very low. In terms of energy, a CPU instruction is at least millions of times more energy efficient today (c). That's another big economic disincentive. Furthermore, time spent optimizing is time not spent doing product development (d). A slower product on the market can be better than late market entry.
So we have production costs of hardware (a), production costs of software (as a function of time)(b), energy costs of hardware (c), energy cost of running software (c), and opportunity cost of late market entry (d). There's also the time cost of running software (e).
(a) is cheaper;
(b) depends on your measurement of utility;
(c) is cheaper;
(d) means unoptimized software tends to be cheaper;
(e) depends on your measurement of utility;
So (b) and (e) are where Wirthian arguments can focus.
However, AI may yet play a major role in optimizing software. It is already being used in this space.[0]
W.r.t. complexity, one consequence of abstraction is that it further decouples the cost of an operation from the difficulty of implementation. Of course, the two were never identical to begin with. It is easier to implement bubble sort than quick sort, easier still to come up with it when you have no knowledge of sorting algorithms. But greater abstraction is better at concealing computational complexity. The example involving ORMs is a good one. When you have to write SQL by hand, you have a clearer picture of the database operations that will be performed, because the correspondence between the operations and what the database is doing is tighter. ORMs, on the other hand, create an abstraction over SQL that is divorced from the database. Unless the ORM is written in some crafty way that can smartly optimize the generated SQL (and optimizers have their limitations), you can land yourself in exactly the situation the author describes.
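A concrete version of that hidden cost, using only sqlite3 from the standard library. The lazy-loading loop below is hand-rolled to imitate what many ORMs do behind an innocent-looking attribute access (the table names and row counts are made up for illustration); the shape of the problem, one query per parent row instead of a single JOIN, is the classic N+1 case:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE book   (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    """)
    conn.executemany("INSERT INTO author VALUES (?, ?)",
                     [(i, f"author{i}") for i in range(1000)])
    conn.executemany("INSERT INTO book VALUES (?, ?, ?)",
                     [(i, i % 1000, f"book{i}") for i in range(5000)])

    queries = 0
    def run(sql, args=()):
        global queries
        queries += 1
        return conn.execute(sql, args).fetchall()

    # ORM-style lazy loading: touching each author's books quietly fires one query per author.
    queries = 0
    for author_id, name in run("SELECT id, name FROM author"):
        titles = run("SELECT title FROM book WHERE author_id = ?", (author_id,))
    print("lazy loading:", queries, "queries")   # 1001

    # Hand-written SQL: the single JOIN keeps the database work visible up front.
    queries = 0
    rows = run("SELECT author.name, book.title "
               "FROM author JOIN book ON book.author_id = author.id")
    print("explicit JOIN:", queries, "query")    # 1

Most ORMs can be told to eager-load and collapse this back into one or two queries, but you only know to ask for that if the abstraction hasn't already hidden the cost from you.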
W.r.t. learning from LLMs, that is perhaps the better application in many cases, as a kind of sophisticated search engine. The trouble is that people treat LLMs as infallible oracles. Another issue is that people seem not to care about becoming better themselves. You see this with thought experiments where we posit some AI that can do all the thinking and working for us. Many if not most people react as if this makes human beings "obsolete"...which is such a patently absurd and frankly horrifying and obscene notion that it can only be an indictment of our consumerist culture. Obsolete with respect to what? A human life is not defined by economic utility. Human purpose is not instrumental. Even if an AI understood philosophy, science, etc., if I don't understand them, then I don't understand them. I am no better for it when someone or some fictional AI does. I am made no wiser.
The root problem with 2FA is that the average computer is full of vulnerabilities and cannot be trusted 100% so you need a second device just in case the computer was hacked... But it's not particularly useful because if someone infected your computer with a virus, they can likely also infect your phone the next time you plug it in to your computer to charge it... It's not quite 2-factor... So much hassle for so little security benefit... Especially for the average person who is not a Fortune 500 CEO. Company CEOs have a severely distorted view about how often the average person is targeted by scammers and hackers. Last time someone tried to scam me was 10 years ago... The pain of having to pull up my phone every single day, multiple times per day to type in a code is NOT WORTH the tiny amount of security it adds in my case.
The case of security is particularly pernicious because complexity has an adverse impact on security; so trying to improve security by adding yet more complexity is extremely unwise... Eventually the user loses access to the software altogether. E.g. they forget their password because they were forced to use some weird characters as part of their password, or they downloaded a fake password manager which turned out to be a virus, or they downloaded a legitimate password manager like LastPass which was hacked because, obviously, they'd be a popular target for hackers... Even if everything goes perfectly and the user is so deeply conditioned that they don't mind using a password manager... Their computer may crash one day and they may lose access to all their passwords... Or the company may require them to change their password after 6 months and the password manager misses the update and doesn't know the new password and the user isn't 'approved' to use the 'forgot my password' feature... Or the user forgets their password manager's master password and when they try to recover it via their email, they realize that the password for their email account is inside the password manager... It's INFURIATING!!!
I could probably write the world's most annoying book just listing out all the cascading layers of issues that modern software suffers from. The chapter on security alone would be longer than the entire Lord of the Rings series... And the average reader would probably rather throw themselves into the fiery pits of Mordor than finish reading that chapter... Yet for some bizarre reason, they don't seem to mind EXPERIENCING these exact same cascading failures in their real day-to-day life.