- I don’t get the negativity.
The specs look impressive. It is always good to have competition.
They announced tapeout in October with planned dev boards next year. Vaporware is when things don’t appear, not when they are on their way (it takes some time for hardware).
It’s also strategically important for Europe to have its own supply. The current and last US administrations have both threatened to limit the supply of AI chips to European countries, and China would do the same (as it has shown with Nexperia).
And of course you need the software stack with it. They will have thought of that.
https://vsora.com/vsora-announces-tape-out-of-game-changing-...
- Impressive numbers on paper, but looking at their site, this feels dangerously close to vaporware.
The bottleneck for inference right now isn't just raw FLOPS or even memory bandwidth—it's the compiler stack. The graveyard of AI hardware startups is filled with chips that beat NVIDIA on specs but couldn't run a standard PyTorch graph without segfaulting or requiring six months of manual kernel tuning.
Until I see a dev board and a working graph compiler that accepts ONNX out of the box, this is just a very expensive CGI render.
- I can assure you it's not vaporware at all. Silicon is running in the fab, application boards have finished the design phase, the software stack is validated...
- Always good to see more competition in the inference chip space, especially from Europe. The specs look solid, but the real test will be how mature the software stack is and whether teams can get models running without a lot of friction. If they can make that part smooth, it could become a practical option for workloads that want local control.
by pclmulqdq
- It needs a "buy a card" link and a lot more architectural detail. Tenstorrent is selling chips that are pretty weak, but they will beat these guys if VSORA doesn't get serious about sharing.
Edit: It kind of looks like there's no silicon anywhere near production yet. Probably vaporware.
by bangaladore
- I love that the JS loads so slowly on first load that the page just says "The magic number: 0 /tflops"
- 288GB RAM on board, and RISC V processors to enable the option for offloading inference from the host machine entirely.
It sounds nice, but how much is it?
- An FP8 performance of 3200 TFLOPS is impressive, and could be used for training as well as inference. "Close to theory efficiency" is a bold claim: most accelerators achieve 60-80% of theoretical peak, so if they're genuinely hitting 90%+, that's remarkable. Now let's see the price.
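For scale, a back-of-the-envelope sketch of what those utilization bands mean in sustained throughput. The 3200 TFLOPS peak is the vendor's claimed figure; the 60-80% and 90% utilization bands are the typical ranges mentioned in the comment above, not measured data:

```python
# Sketch: sustained TFLOPS at a given fraction of theoretical peak.
# Assumes the vendor-claimed 3200 TFLOPS FP8 peak; utilization bands
# are illustrative, not benchmarks.

def achieved_tflops(peak_tflops: float, utilization: float) -> float:
    """Sustained throughput at a given fraction of theoretical peak."""
    return peak_tflops * utilization

PEAK_FP8 = 3200.0  # TFLOPS, claimed theoretical peak

typical_low  = achieved_tflops(PEAK_FP8, 0.60)  # 1920.0 TFLOPS
typical_high = achieved_tflops(PEAK_FP8, 0.80)  # 2560.0 TFLOPS
claimed      = achieved_tflops(PEAK_FP8, 0.90)  # 2880.0 TFLOPS

print(typical_low, typical_high, claimed)
```

So the gap between a typical accelerator and a "close to theory" one is on the order of several hundred sustained TFLOPS at this peak, which is why the efficiency claim matters as much as the headline number.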
- Hey, we (ZML) happen to know them very well. They are incredible.
- Esperanto tried to do the same but went out of business.
https://www.esperanto.ai/products/
by randomgermanguy
- The fact that I have to give them an email address for details immediately feels like a B2B scam.
Hope they can figure out the software, but what I'm seeing isn't super promising.
by cherryteastain
- Hopefully they do better than the UK's Graphcore, which seems to be circling the drain
- I'll believe it when I see it. Wishing them the best!
> To streamline development and shorten time-to-market, VSORA embraces industry standards: our toolchain is built on LLVM and supports common frameworks like ONNX and PyTorch, minimizing integration effort and customer cost.
- One has got to love the fact that you only get more information if you submit your email address.
by numbers_guy
- Does anyone know why they brand it an "inference chip"? Is it something at the hardware level that makes it unsuitable for training, or is it simply that the toolchain for training is massively more complicated to program?
- > This is not just faster inference. It’s a new foundation for AI at scale.
Did they generate their website with their own chips or on Nvidia hardware?
- How does this compare to Euclyd's product (another new EU AI chip company)?
https://euclyd.ai/
- Reminds me of the famous Tachyum Prodigy vapourware https://www.tachyum.com/
by postexitus
- Even if it's not vapourware, the website makes it look like it. Just look at those two graphs titled "Jotunn 8 Outperforms the Market" and "More Speed For the Bucks" (!); WTH?