- Emacs (inherited from Lisp machines?): a VM powered by Lisp. The latter makes it easy to redefine functions, and commands are just annotated functions. As for output, we have the buffer, which can be displayed in windows, which are arranged in a tiling manner inside a frame. And you can have several frames. Since the buffer in a window has the same grid-like basis as a terminal emulator, we can use CLIs as-is, including through an actual terminal emulator (vterm, eat, ansi-term, ...). You can also eschew the terminal flow and use the REPL flow instead (shell-mode, eshell, ...). There's support for graphics, but not a full 2D context.
- Acme: kinda similar to Emacs, but the whole thing is mostly about interactive text, meaning any text can be a command. We also have the tiling/stacking windows that display those texts.
I would add Smalltalk to that, but it's more of an IDE than a full computing environment. Still, extending it into the latter would take less effort than what is described in the article.
Maybe it is an API. Maybe the kernel implements this API and it can be called locally or remotely. Maybe someone invents an OAuth-to-UID translation layer. The API allows syscalls or process invocation. Output is returned in the response payload (of course we have a stream shape too).
Maybe in the future your “terminal” is an app that wraps this API, authenticates you to the server with OAuth, and can take whatever shape pleases you: REPL, TUI, browser-ish, DOOM-like (shoot the enemy corresponding to the syscall you want to make), whatever floats your boat.
Heresy warning: maybe the inputs and outputs don’t look anything like CLI or stdio text. Maybe we move on from 1000 different DSLs (each CLI’s unique input parameters and output formats) and make inputs and outputs object-shaped. Maybe we make the available set of objects, methods, and schemas discoverable in the terminal API.
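To make that less abstract, here's a minimal sketch of what a discoverable, object-shaped terminal API could look like. Every endpoint, field, and function name here is invented purely for illustration:

```python
# Hypothetical sketch of an object-shaped terminal API: commands are
# discoverable, and inputs/outputs are structured objects rather than
# byte streams. All names and schema shapes here are made up.
import json

# What discovery might return: a catalog of commands with typed schemas.
CATALOG = {
    "list_processes": {
        "input": {},  # no parameters
        "output": {"type": "array", "items": {
            "pid": "integer", "command": "string", "rss_bytes": "integer"}},
    },
    "read_file": {
        "input": {"path": "string"},
        "output": {"content": "bytes", "mtime": "string"},
    },
}

def discover():
    """A client asks the server which commands and schemas exist."""
    return CATALOG

def invoke(command, params):
    """Invoke a command; the response is a structured payload, not text."""
    if command not in CATALOG:
        return {"error": {"code": "unknown_command", "command": command}}
    # A real server would dispatch to a syscall or spawn a process here.
    return {"result": {"command": command, "params": params}}

# A "terminal" front end (REPL, TUI, DOOM-like...) could render this
# however it likes, because the shape is known from the schema.
print(json.dumps(discover(), indent=2))
print(json.dumps(invoke("list_processes", {}), indent=2))
```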
Terminals aren’t a thing of the 80s; they’re a thing of the early 70s when somebody came up with a clever hack to take a mostly dumb device with a CRT and keyboard and hook it to a serial port on a mainframe.
Nowadays we don’t need that at all; old-timers like me like it because it’s familiar but it’s all legacy invented for a world that is no longer relevant. Even boot environments can do better than terminals today.
- https://arcan-fe.com/ introduces a new protocol for TUI applications that enables better interactions across the different layers (hard to describe! but the website has nice videos and explanations of what is made possible)
- Shelter, a shell with reproducible operations and git-like branches of the filesystem https://patrick.sirref.org/shelter/index.xml
The last thing a command-line terminal needs is a Jupyter Notebook-like UI. It doesn't need to render HTML; it doesn't need rerun and undo/redo; and it definitely doesn't need structured RPC. Many of the mentioned features are already supported by various tooling, yet the author dismisses them because... bugs?
Yes, terminal emulators and shells have a lot of historical baggage that we may consider weird or clunky by today's standards. But many design decisions made 40 years ago are directly related to why some software has stood the test of time, and why we still use it today.
"Modernizing" this usually comes with very high maintenance or compatibility costs. So, let's say you want structured data exchange between programs ala PowerShell, Nushell, etc. Great, now you just need to build and maintain shims for every tool in existence, force your users to use your own custom tools that support these features, and ensure that everything interoperates smoothly. So now instead of creating an open standard that everyone can build within and around of, you've built a closed ecosystem that has to be maintained centrally. And yet the "archaic" unstructured data approach is what allows me to write a script with tools written decades ago interoperating seamlessly with tools written today, without either tool needing to directly support the other, or the shell and terminal needing to be aware of this. It all just works.
I'm not saying that this ecosystem couldn't be improved. But it needs broad community discussion, planning, and support, and not a brain dump from someone who feels inspired by Jupyter Notebooks.
Maintaining a high level of backwards compatibility while improving the user experience is critical. Or at least to me. For example, my #1 frustration with neovim is the change to !: it no longer just swaps the alt screen back to the default, letting me see and run what I was doing outside of it.
We generally like the terminal because, unlike GUIs, it's super easy to turn a workflow into a script, a manual process into an automated one. Everything is reproducible, and everything is ripgrep-able. It's all right there at your fingertips.
I fell in love with computers twice, once when I got my first one, and again when I learned to use the terminal.
There's even more under the "Updates archive" expando in that post.
It was a pretty compelling prototype. But after I played with Polyglot Notebooks[1], I pretty much abandoned that experiment. There's a _lot_ of UI that needs to be written to build a notebook-like experience, but Polyglot Notebooks took care of that by just converting the command-line backend into a Jupyter kernel.
I've been writing more and more script-like experiments in those ever since. It just seems so much more natural to have a big ol' doc full of notes that happens to also have play buttons to Do The Thing.
[1]: https://marketplace.visualstudio.com/items?itemName=ms-dotne...
But just being able to show a browser, like Jupyter does, would be very useful. It can handle a wide variety of media and can easily show JS-heavy webpages, unlike curl; with a text option to show text-based results like w3m, but with JS support, it would be even more useful.
browser google.com/maps # show google map and use interactively
browser google.com/search?q=cat&udm=2 # show google image result
browser --text jsheavy.com | grep -C 10 keyword # show content around keyword but can handle JS
vim =(browser --text news.ycombinator.com/item?id=45890186) # show a Hacker News article and edit the text result directly

Why? Well, one reason is that escape sequences are really limited and messy. This would let everyone gradually and backward-compatibly transition to a more modern alternative. Once you have a JSON-RPC channel, the two ends can use it to negotiate what specific features they support. It would leverage patterns already popular with LSP, MCP, etc. And it would live mostly in userspace; only a small kernel enhancement would be required (the kernel doesn’t have to actually understand these JSON-RPC messages, just offer a side channel to convey them).
I suppose you could do it without any kernel change by just putting a Unix domain socket path in an environment variable, but that would be more fragile: some process will end up with your pty but missing the environment variable, or vice versa.
Actually I’d add this out-of-band JSON-RPC feature to pipes too, so if I run “foo | bar”, foo and bar can potentially engage in content/feature negotiation with each other
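A rough sketch of the environment-variable variant (the socket variable, method name, and capability strings are all made up for illustration): each end offers what it supports and falls back to plain byte streams when no side channel exists.

```python
# Hypothetical sketch: two ends of a pipe negotiate features over an
# out-of-band JSON-RPC channel named in an environment variable.
# TERM_RPC_SOCKET, the method name, and capability strings are invented.
import json, os, socket

def negotiate(supported):
    path = os.environ.get("TERM_RPC_SOCKET")
    if path is None:
        return []  # no side channel: fall back to plain byte streams
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(path)
        hello = {"jsonrpc": "2.0", "id": 1,
                 "method": "negotiate", "params": {"supports": supported}}
        s.sendall(json.dumps(hello).encode() + b"\n")
        reply = json.loads(s.makefile().readline())
        # keep only the features the peer accepted
        return [f for f in supported
                if f in reply.get("result", {}).get("accepted", [])]

features = negotiate(["structured-output", "progress", "hyperlinks"])
print("negotiated:", features or "plain text only")
```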
With Lisp REPLs one types in the IDE/editor, with full highlighting, completions, and code intelligence. The code is then sent to the REPL process for evaluation. Clojure, for example, has great REPL tooling.
A variation of the REPL is the REBL (Read-Eval-Browse Loop) concept, where instead of the output simply being printed as text, it is treated as values that can be visualized and browsed using graphical viewers.
Existing editors can already cover the runbooks use case pretty well: those can be just markdown files with key bindings to send code blocks to a shell process for evaluation. It works great with instructions in markdown READMEs.
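As an illustration of how little machinery that case needs, here's a sketch (no error handling, filename taken from the command line) that pulls fenced shell blocks out of a markdown file and runs them, which is essentially what an editor's "send block to shell" binding does interactively:

```python
# Rough sketch: extract fenced "sh" blocks from a markdown runbook
# and run each one in a shell.
import re, subprocess, sys

FENCE = "`" * 3  # avoid writing literal fences inside this example
pattern = re.compile(FENCE + r"sh\n(.*?)" + FENCE, re.DOTALL)
text = open(sys.argv[1]).read()  # e.g. README.md
for i, block in enumerate(pattern.findall(text)):
    print(f"--- running block {i} ---")
    subprocess.run(["sh", "-c", block], check=True)
```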
The main missing feature of an editor-centric command-line workflow I can imagine is history search. It could be interesting to see whether it would be enough to add shell history as a completion source. Or perhaps have a shell LSP server provide history and other completions that could work across editors?
My biggest gripe with it is that it quickly ends up becoming an actual production workload, and it is not simple to “deploy” and “run” it in an ops way.
Lots of local/project specific stuff like hardcoded machine paths from developers or implicit environments.
Yes, I know it can be done right, but it makes it sooooooooo easy to do it wrong.
I think I can’t see it as anything but a scratchpad for ad-hoc stuff.
Independent of the rest, I would love for more terminal emulators to support OSC 133.
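For context, OSC 133 is the "semantic prompt" convention (originating in FinalTerm and now supported by iTerm2, WezTerm, kitty, foot, and others): the shell marks where the prompt, the command, and its output begin and end, so the terminal can jump between prompts or select just a command's output. Emitting the markers is trivial; a minimal illustration:

```python
# Minimal illustration of OSC 133 "semantic prompt" markers.
# A shell integration script emits these around each prompt/command
# so the terminal can navigate between prompts and grab output.
import sys

OSC, ST = "\x1b]", "\x07"  # OSC introducer and BEL terminator

def mark(code):
    sys.stdout.write(f"{OSC}133;{code}{ST}")

mark("A")                  # start of prompt
sys.stdout.write("$ ")     # the visible prompt itself
mark("B")                  # end of prompt, start of user command
sys.stdout.write("make test\n")
mark("C")                  # start of command output
sys.stdout.write("...test output...\n")
mark("D;0")                # command finished, exit status 0
sys.stdout.flush()
```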
Its flexibility is beyond imagination. Programs can emit anything from simple numbers/vectors/matrices to media (images, sound, video, either loaded or generated) to interactive programs, all of which can be embedded into the notebook. You can also manipulate every input and output code block programmatically, because it's Lisp, and can even generate notebooks programmatically. It can also do typesetting and generate presentations/PDF/HTML from notebooks.
What people have been doing with Markdown and Jupyter in recent years has been available in Mathematica for (at least) one or two decades. FOSS solutions still fall short, because they rely on static languages (relative to Lisp, of course).
I mean, really, it's a technological marvel. It's just that it's barred behind a high price tag and limited to low core counts.
Some lesson must surely be drawn from this about incremental adoption.
Missing out on inline images and megabytes of true-color CSI codes is a feature, not a bug, when bandwidth is limited.
If you want jupyter, we have jupyter. If you want HTML, we have several browsers. If you want something else, make it, but please don’t use vt220 codes and call it a terminal.
The article is just wish-listing more NIH barbarism to break things with. RedHat would hire this guy in a heartbeat.
When using tools that can emit 0 to millions of lines of output, performance seems like table-stakes for a professional tool.
I'm happy to see people experiment with the form, but to be fit for purpose I suspect the features a shell or terminal can support should work backwards from benchmarks and human testing to understand how much headroom they have on the kind of hardware they'd like to support and which features fit inside it.
Rid us of the text-only terminal baggage that we deal with today. Even graphics are encoded as text, sent to the terminal, then decoded and dealt with.
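That's literal, not rhetorical: iTerm2's inline-image protocol, for example, ships pixels as base64 text inside an OSC 1337 escape sequence that the terminal then decodes back into an image. A small sketch of what "graphics as text" looks like on the wire (the PNG path is a placeholder):

```python
# "Graphics encoded as text": the iTerm2 inline-image protocol wraps
# base64-encoded image bytes in an OSC 1337 escape sequence, which the
# terminal decodes back into pixels. The path below is a placeholder.
import base64, sys

with open("plot.png", "rb") as f:
    data = f.read()

payload = base64.b64encode(data).decode("ascii")
# size is the original byte count; inline=1 means display, not download
sys.stdout.write(f"\x1b]1337;File=inline=1;size={len(data)}:{payload}\x07")
sys.stdout.flush()
```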
Plan9 had the terminal right. It wasn't really a terminal, it was just a window which had a text prompt by default. It could run (and display!) graphical applications just as easily as textual applications.
If you want a terminal of the future, stop embracing terminals of the past.
It ticks some of the boxes, but tonnes of work would be needed to turn it into a full alternative.
Any solution has to address this use case first, IMO. There are some design constraints here, like:
- I don't care about video game levels of graphics
- I generally want things to feel local, as opposed to, say, some cloud GUI
- byte stream model: probably bad? But how would I do better?
as just a few examples I thought of in 10 seconds; there's probably way more.
I've thought about the author's exact complaints for months, as an avid tmux/neovim user, but the ability to interact with system primitives on a machine that I own and understand is important.
But hey, those statements are design constraints too - modern machines are tied somewhat to unix, but not really. Sysadmin stuff? Got standardized into things like systemd, so maybe it's a bit easier.
So it's not just a cynical mess of "everything is shit, so let's stick to terminals!" but I'd like to see more actual consideration of the underlying systems you are operating on, fundamentally, rather than immediately jumping to "how do we design the best terminal" (effectively UI). The actual workflow of being a systems plumber happens to be aided very well by tmux and vim :)
(And to be fair, I only make this critique because I had this vague feeling for a while about this design space, but couldn't formalize it until I read this article).
https://commons.wikimedia.org/wiki/File:DEC_VT100_terminal.j...
I may disappoint you with the fact that IBM PC-compatible computers have replaced devices of that class. We can only observe certain terminal emulators in some operating systems. There have been many attempts to expand the functionality of these emulators. However, most features beyond the capabilities of the VT100 have not caught on (UTF-8 support excepted). I do not believe that anything will change in the foreseeable future.