This is true, but only for the bigger players. The nature of hardware still fundamentally favors scale and centralization. Every hyperscaler eventually reaches a size at which developing in-house CPU talent is just straight-up better (Qcom and Ventana + Nuvia, Meta and Rivos, Google has been building its own team, Nvidia and Vera Rubin; God help Microsoft, though). This does not bode well for RISC-V companies, who are just being used as a stepping stone. See Anthropic, which currently licenses but is rumored to be building its own in-house team [1].
> Extensibility powers technology innovation
>> While this flexibility could cause problems for the software ecosystem...
"While" is doing some incredibly heavy lifting. It is not enough to be able to run Ubuntu, as may be sufficient for embedded applications; the chip also has to be fast. Thus, there are many software optimizations hardcoded for a specific CPU, let alone for ARM or x86 in general. For RISC-V? Good luck coding up every permutation of extensions that exists, and even if it's lumped together as RVA23, good luck parsing through 100 different "performance optimization manuals" from 100 different companies.
> How mature is the software ecosystem?
10 years ago, when RISC-V was invented, the founders said it would take 20 years. 10 years later, I say 30.
Another fact of hardware is that the competition (ARM) is not standing still either. The reason for ARM's dominance now is the failure of Intel and the strong-arming by Apple.
I have worked in and on RISC-V chips for a number of years, and while I still believe it is the theoretical end state, my estimates just keep getting longer and longer.
[1]: https://www.reuters.com/business/anthropic-weighs-building-i...
Unity, Bazaar, Mir, Upstart, Snap, etc.
All of them tried to uproot existing, well-established projects for no purpose other than that Canonical wanted more control, control it couldn't actually operate or maintain.
Beyond the potential platform fragmentation due to the variability of the ISA (a very unfortunate design choice, IMO), mentioned elsewhere in this thread, what I find most frustrating is the boot process, the equivalent of the BIOS in that world.
My impression: a complete lack of standardization, a ton of ad-hoc tools native to each vendor, a complete mess, especially when it comes to getting the board to boot from devices the vendor didn't target (e.g., SSDs).
Until two things happen:
1. a CPU with somewhat competitive compute performance appears (so far, all the SBCs I've tried are way behind ARM and x86)
2. a unified boot environment appears which supports booting from a broad range of devices (SSD, network, SD card, hard drives, etc.)
the whole RISC-V effort will remain a tiny niche, especially because when a vendor loses interest in the platform, all the software native to that platform immediately goes to rot (not that it was particularly good quality in the first place).