This sounded really interesting... till I read this:
> It’s an AI-native operating system. Artificial neural networks are built in and run locally. The OS understands what applications can do, what they expose, and how they fit together. It can integrate features automatically, without extra code. AI is used to extend your ability, help you understand the system and be your creative aid.
(From https://radiant.computer/system/os/)
That's... kind of a weird thing to have? Other than that, it actually looks nice.
I don't see that in this project. It isn't defined by a clean slate; it's defined by the properties it does not want to have.
Off the top of my head I can think of a bunch of hardware architectures that would require all-new software. There would be amazing opportunities for discovery in writing software for these things. The core principles of the software for such a machine could be based on a solid philosophical consideration of what a computer should be: not just "one that doesn't have social media," but what the user's true needs are. This is not a simple problem. If it should facilitate but also protect, when should it say no?
If software can run other software, should there be an independent notion of how that software should be facilitated?
What should happen when the user directs two pieces of software to do contradictory things? What gets facilitated, and what gets disallowed?
I'd love to see some truly radical designs. Perhaps a model where processing and memory are one: a very simple core per 1 KB of SRAM per 64 KB of DRAM per a few megabytes of flash, in machines with 2^n cores where each core has a direct data channel to every core whose n-bit core ID differs in exactly one bit (plus one channel to the core whose ID has all bits different).
An n=32 system would have four billion cores, 4 terabytes of RAM, and nearly enough persistent storage, but communicating between two arbitrary cores could mean talking through up to 15 intermediaries.
You could probably start with a much lower n, then consider how to write software for it that meets the principles and criteria of how it should behave.
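To make the topology concrete, here's a minimal Python sketch of that folded hypercube. This is my own toy model, nothing from the project: each core links to the n cores whose ID differs in one bit, plus the core whose ID is the bitwise complement, and the worst case at n=32 works out to 16 hops, i.e. 15 intermediaries.

```python
# Toy model of the folded-hypercube machine sketched above; all of this
# is my own illustration, not anything from the Radiant project.

def neighbors(core: int, n: int) -> list[int]:
    """Cores directly connected to `core`: flip each of the n ID bits,
    plus the one extra channel to the bitwise-complement core."""
    mask = (1 << n) - 1
    return [core ^ (1 << bit) for bit in range(n)] + [core ^ mask]

def hops(src: int, dst: int, n: int) -> int:
    """Minimum hop count: either flip the differing bits one by one, or
    take the complement link once and then flip what still differs."""
    d = bin(src ^ dst).count("1")
    return min(d, n - d + 1)

if __name__ == "__main__":
    n = 32
    # Worst case: half of the 32 bits differ -> 16 hops, 15 intermediaries.
    print(hops(0, (1 << 16) - 1, n))  # 16
```

The complement link is what roughly halves the diameter: you never need to flip a majority of the bits one at a time, because a single hop to the complement leaves only the minority still to fix.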
Different, clean slate, not easy.
I'm interested to hear about the plans or capabilities in R' or Radiance for things like concurrent programming, asynchronous/scheduling, futures, and invisible or implied networking.
AI is here and will be a big part of future personal computing. I wonder what kind of open-source neural-network accelerator is available as a starting point, or whether such a thing even exists.
One of the opportunities for AI is in compression codecs that could provide for very low latency low bandwidth standards for communication and media browsing.
For users, the expectation will shortly be that you can talk to your computer verbally or send it natural-language requests to accomplish tasks. It is very interesting to think about how this could be integrated into the OS, for example as a metadata or interface standard. Something like a very lightweight version of MCP, or just a convention for an SDK filename (since software is distributed as source), could allow agents to use any installed software by default. Built-in embeddings or a vector index could also be very useful, for example to filter relevant SDKs.
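Here's a rough Python sketch of how that convention might work. Everything in it (the SDK.json filename, the manifest fields, the bag-of-words similarity standing in for real embeddings) is invented for illustration, not anything Radiant has specified:

```python
# Hypothetical sketch of the "SDK filename convention" idea: every
# installed package ships a plain SDK.json manifest describing what it
# exposes, and an agent ranks manifests by relevance before acting.
# The filename, fields, and layout here are all invented for illustration.
import json
import math
from collections import Counter
from pathlib import Path

def load_manifests(software_root: Path) -> list[dict]:
    """Collect every SDK.json found one level under the software root."""
    return [json.loads(p.read_text()) for p in software_root.glob("*/SDK.json")]

def similarity(a: str, b: str) -> float:
    """Toy bag-of-words cosine similarity, standing in for a real
    embedding model backed by a built-in vector index."""
    wa, wb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(wa[t] * wb[t] for t in wa)
    na = math.sqrt(sum(v * v for v in wa.values()))
    nb = math.sqrt(sum(v * v for v in wb.values()))
    return dot / (na * nb) if na and nb else 0.0

def relevant_sdks(request: str, manifests: list[dict], top_k: int = 3) -> list[dict]:
    """Rank installed software by how well its manifest matches a
    natural-language request, so the agent only loads what it needs."""
    ranked = sorted(manifests,
                    key=lambda m: similarity(request, m.get("description", "")),
                    reverse=True)
    return ranked[:top_k]
```

An agent could then feed only the top-ranked manifests into its context, which is roughly what MCP's tool listing does, just as plain files on disk instead of a protocol.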
If content-centric data is an assumption, and so is AI, maybe we can ditch Google and ChatGPT and create a distributed hash embedding table or something for finding or querying content.
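One speculative way to build such a table: hash embedding vectors with random hyperplanes (SimHash-style locality-sensitive hashing) so that similar content tends to collide on the same key, then use that key as the lookup key in an ordinary DHT. A toy sketch, entirely my own speculation:

```python
# SimHash-style locality-sensitive hashing: similar embeddings tend to
# land on the same key, which an ordinary DHT can then route on.
# Dimensions, key length, and the shared seed are arbitrary choices.
import random

DIM, BITS = 128, 16
random.seed(0)  # every node must derive the same hyperplanes
PLANES = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(BITS)]

def dht_key(embedding: list[float]) -> int:
    """One key bit per hyperplane, set by which side the vector falls on."""
    key = 0
    for i, plane in enumerate(PLANES):
        if sum(p * e for p, e in zip(plane, embedding)) >= 0:
            key |= 1 << i
    return key
```

A query embedding gets hashed the same way, so semantically nearby content can be found without any central index.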
It's really fun to dream about idealized or future computers. Congratulations for getting so far into the details of a real system.
One of my more fantasy style ideas for a desktop uses a curved continuous touch screen. The keyboard/touchpad area is a pair of ergonomic concave curves that meet in the middle and level out to horizontal workspaces on the sides. The surface has a SOTA haptic feedback mechanism.
Also the website is very low contrast (brighten up that background gray a bit!)
Somehow this makes me immediately not care about the project; I expect it to be incomplete vibe-coded filler somehow.
Odd what a strong reaction it invokes already. Like: if the author couldn’t be bothered to write this, why waste time reading it? Not sure I support that, but that’s the feeling.
It’s every engineer’s dream - to reinvent the entire stack, and fix society while they’re at it (a world without social media, sign me up!).
Love the retro future vibes, complete with Robert Tinney-like artwork! (He did the famous Byte Magazine covers in the late 70s and early 80s).
https://tinney.net/article-this-1981-computer-magazine-cover...
The image on this page is wild: https://radiant.computer/principles/
Of course, I am intrigued by the open architecture. Will they be able to solve graphics card issues, though?
Can I use it to chat on Discord with other people I know who use Discord? If not, then I'm gonna need a computer for that, in addition to this thing. I do not like Discord, but nonetheless it has a network effect and it's what a lot of people I want or need to communicate with have standardized on, and until I can compel all of them to stop using it, I need to be able to run the software somehow.
This sounded a lot like a Smalltalk running as the OS, until they started talking about implementing a systems language.
If anything is already working, where’s the code? Can people contribute yet?
Not trying to nitpick, but it’s hard to tell what’s real vs. vaporware (beyond the author’s very impressive abilities for systems/language design and writing)
The website also mentions a device but I get that’s many years away too, right? I mean how long will it take to actually develop everything that’s described in the website?
> Hardware and software must be designed as one
Here, they describe one issue with computers as being their layers of abstraction, which hide complexity. But...
> Computers should feel like magic
I'm not sure how the authors think "magic" happens, but it's not through simplicity. Early computers were quite simple, but I can guarantee most modern users would not think they were magical to use. Of course, this also conflicts with the idea that...
> Systems must be tractable
Why would a user need to know how every aspect of a computer works if they're "magic" and "just work"?
Anyway, I'm really trying not to be cynical here. This just feels like a list written by someone who doesn't really understand how computers or software came to work the way they do.
It's not that the system doesn't come with a browser. It's that the browser is apparently built into the operating system. Remember IE 6?
If you're going to rethink computing devices, the next thing is probably a big screen, a camera, and a good microphone array. No keyboard. You just talk to it and occasionally gesture, and it organizes and helps you.
The task that has been set is gigantic. Despite that, they've decided to make it even harder by designing a new programming language on top of it (this seems to be all the work done to date).
The hardware challenge alone is quite difficult. I don't know why that isn't the focus at this stage. It is as if the author is suggesting that only the software is a problem, when some of the biggest issues are actually closed hardware. Sure, Linux is not ideal, but it's hardly relevant in comparison.
I think this project suffers from doing too much abstract thinking without studying existing concrete realities.
I would suggest tackling one small piece of the problem in the hardware space, or building directly on some of the work others have done.
I don't disagree with the thesis of the project, but I think it's a MUCH bigger project than the author suggests and would/will require a concentrated effort from many groups of people working on many sub-projects.
An admirable goal. However putting that next to a bunch of AI slop artwork and this statement...
> One of our goals is to explore how an A.I.-native computer system can enhance the creative process, all while keeping data private.
...is comically out of touch.
The intersection between "I want simple and understandable computing systems" and "I want AI" is basically zero. (Yes, I'm sure some of you exist, my point is that you're combining a slim segment of users who want this approach to tech with another slim segment of users who want AI.)
I just kind of want to see what comes out of this.
RISC-V ftw. And if they get lightweight, local-first AIs with a decent site-map of each program, that could be really unique and fun, if not annoying to use in practice.
I see the future they want so badly.
What an airball. Social networks are the single most valuable aspect of computers and the internet. It is a dark world where we all just talk to LLMs instead of each other.
I wonder why the Unix standard doesn't start dropping old syscalls and standards? Does it have to be strictly backwards compatible?
But why? We use virtual address spaces for a reason.
Why does it always need to be so difficult? We already have the tools. Our methods, constantly changing and translblahbicatin' unto the falnords, snk snk... this kind of contrafabulation needs to cease.
Just sayin'.
IPFS+Lua. It's all we really need.
Yes yes, new languages are best languages, no no, we don't need it to be amazing, just great.
It'll be great.
In short: cool story, bro. Now show me the money.
The Prime Radiant featured in Foundation.
This just reads like complete BS vaporware.
Why should any of us take it seriously?
I wish I could work there!
I even tried to search for it on DistroWatch with the negate option in their search, but that seemed to be broken.
I needed something like it once to build my own "studyOs", and in the process I went down a deep rabbit hole about hobbyist Linux distros and their importance.
I then settled on MX Linux because of what I wrote below.
People recommend Cubic etc., but personally I recommend MX Linux. Its built-in snapshot feature was exactly what I was looking for, and it just worked.
I glanced over this and was excited, thinking: oh great, this could be a Linux ISO with no browser, similar to Tiny Core. But I found out through the comments that its focus on LLMs etc. is very vague and weird for what I'm reading.
I just feel like it's seriously not getting the idea. I want to dissect this post's tenets from the perspective of someone who has only been a Linux user for a few years.
My first impressions were positive, then negative, and now they're mixed, really.
I feel like this intends to become so hardware-focused that I'm not even sure what they mean by it. From my limited knowledge, Linux does a lot of work simply to boot into a predictable environment on almost every computer, to the point that there are now things like Nix that can build your system deterministically.
I still think there is a point in making something completely new instead of yet another Unix, but my hopes aren't very high, sorry. You would have to convince me why the world would be better off with this instead of Linux, and their notes on "why not Linux" still leave me with absolutely mixed thoughts.
> Linux is a monolithic kernel. All drivers, filesystems, and core services share the same privilege space. A single bug, eg. a bad pointer dereference in a GPU driver can corrupt kernel memory and take down the entire system.
Can't drivers be loaded at runtime, and aren't there ways to isolate failures so they don't take down the entire system? I think this is just how a monolithic kernel works, no?
I read more discussion of monolithic vs. micro-kernels on the Tanenbaum–Torvalds debate wiki page [1], and here is something I think is apt:
> Torvalds, aware of GNU's efforts to create a kernel, stated "If the GNU kernel had been ready last spring, I'd not have bothered to even start my project: the fact is that it wasn't and still isn't."
Someone else on the Usenet group also called GNU Hurd vaporware, and I think there is some truth to that: the Hurd team had been working on Hurd far longer than Linus had been working on Linux (per the same wiki page).
Another line I want to share from the wiki: different design goals get you different designs.
I was going to criticize Radiant, but hey, it's open source; nobody's stopping you from working on it. That line was said in defense of Linux back then, so it can surely defend this project too.
But at the same time, my concern with this or any such project is it becoming vaporware. Linux is way too big and actually good enough for most users. I don't think the world can have a perfect OS, but it can have a good-enough one, and for the end user Linux is exactly that. It's open source and genuinely good at what it does; there's no denying it. You could live your whole life using Linux, imo. It's beautiful.
I used to defend NetBSD, or hate systemd, but the truth of the matter is that nobody's forcing you to use systemd or NetBSD; you damn well could run a server without either. Still, the mass adoption has convinced me that at the sysadmin level, Linux, maybe even Debian or systemd in general, has its gains.
I think Linux is really, really good, just the best imo, though I will still try out things like FreeBSD, OpenBSD, etc. I genuinely love it so much. It's honestly wild, even a fever dream, when you think about it that something like Linux actually exists. It's so brilliant, and the ecosystem around it is just chef's kiss.
One can try, and these are your developer hours, but I just don't want to see things turn into vaporware, so I'll ask one question: how do you prevent this project from becoming vaporware? I'm sure this isn't the first time someone has proposed the ideal system, and it won't be the last either.
Edit: sorry this got long. I got a little carried away reading the wiki article; it's that good.
[1]: https://en.wikipedia.org/wiki/Tanenbaum%E2%80%93Torvalds_deb...