- From the FAQ… it doesn't seem promising when they pose a crucial question and then evade it.
> What is the memory bandwidth supported by Ascent GX10? AI applications often require a bigger memory. With the NVIDIA Blackwell GPU that supports 128GB of unified memory, ASUS Ascent GX10 is an AI supercomputer that enables faster training, better real-time inference, and support larger models like LLMs.
- Seems this is basically a DGX Spark with 1TB of disk, so about $1000 cheaper. The DGX Spark has not been received well (at least online: Carmack saying it runs at half the spec, low memory bandwidth, etc.), so perhaps this is a way to reduce buyer's regret; you are out only $3000 and not $4000 (as with the DGX Spark).
by dinkleberg
2 subcomments
- This is a tangent, but the little pop-up example for their AI chatbot, trying to entice me to use it, was something along the lines of "what are the specs?"
How great would it be if, instead of shoving these bots at us to help decipher the marketing speak, they just had the specs right up front?
by sparkler123
1 subcomment
- I had one of these on pre-order/reservation from when they announced the DGX Spark, and ended up returning it after a couple of days. I thought I'd give it a shot, though. The 128GB of unified memory was the big selling point (as it is for any of the DGX Spark boxes), but the memory bandwidth was very disappointing. Being able to load a 100B+ parameter model was cool as a novelty, but not particularly great for local inference.
Also, the software NVIDIA has you install on another machine to use it is garbage. They tried to make it sort of appliance-y, but most people would rather just have SSH work out of the box and go from there. IMO just totally unnecessary. The software aspect is what put me over the edge.
Maybe the gen 2 will be better, but unless you have a really specific use case that this solves well, buy credits or something somewhere else.
by mindcrash
2 subcomments
- ServeTheHome has already benchmarked the DGX Spark architecture against the obvious competitor, the Ryzen AI Max 395+ with 128GB of RAM:
https://www.servethehome.com/nvidia-dgx-spark-review-the-gb1...
If (and in Nvidia's case that's a big if at the moment) they get their software straight on Linux for once, this piece of hardware seems to be something to keep an eye on.
by canucker2016
0 subcomments
- Dell and Lenovo have product pages for their versions of the DGX Spark.
Dell:
https://www.dell.com/en-us/shop/desktop-computers/dell-pro-m...
- $3,998.99 4TB SSD
- $3,699.00 2TB SSD
Lenovo:
https://www.lenovo.com/us/en/p/workstations/thinkstation-p-s...
- $3,999.00 4TB SSD
https://www.lenovo.com/us/en/p/workstations/thinkstation-p-s...
- $3,539.00 1TB SSD
by WhitneyLand
4 subcomments
- GX10 vs MacBook Pro M4 Max:
- Price: $3k / $5k
- Memory: same (128GB)
- Memory bandwidth: ~273GB/s / 546GB/s
- SSD: same (1 TB)
- GPU advantage: ~5x-10x depending on memory bottleneck
- Network: same 10GbE (via TB)
- Direct cluster: 200Gb / 80Gb
- Portable: No / Yes
- Free Mac included: No / Yes
- Free monitor: No / Yes
- Linux out of the box: Yes / No
- CUDA Dev environment: Yes / No
by embedding-shape
4 subcomments
- I wonder why they even added this to the FAQ if they're gonna weasel their way around it and not answer properly?
> What is the memory bandwidth supported by Ascent GX10?
> AI applications often require a bigger memory. With the NVIDIA Blackwell GPU that supports 128GB of unified memory, ASUS Ascent GX10 is an AI supercomputer that enables faster training, better real-time inference, and support larger models like LLMs.
Never seen anything like that before. I wonder if this product page is actually finished and was meant to be public.
by joelthelion
6 subcomments
- "Nvidia dgx os", ugh. It would be a lot more enticing if that thing could run stock Linux.
by nycdatasci
1 subcomment
- I ordered one and it arrived last week. It seems like a great idea with horrible execution. The UI occasionally shows strange glitches/artifacts, as if there's a hardware failure.
To get a sense for use cases, see the playbooks on this website: https://build.nvidia.com/spark.
Regarding limited memory bandwidth: my impression is that this is part of the onramp for the DGX Cloud. Heavy lifting/production workloads will still need to be run in the cloud.
by brian_herman
3 subcomments
- Couldn't you buy a Mac Ultra with more memory for the same price?
- One past related thread. Any others?
The Asus Ascent GX10 a Nvidia GB10 Mini PC with 128GB of Memory and 200GbE - https://news.ycombinator.com/item?id=43425935 - March 2025 (50 comments)
Edit: added via wmf's comment below:
"DGX Spark has only half the advertised performance" - https://news.ycombinator.com/item?id=45739844 - Oct 2025 (24 comments)
Nvidia DGX Spark: When benchmark numbers meet production reality - https://news.ycombinator.com/item?id=45713835 - Oct 2025 (117 comments)
Nvidia DGX Spark and Apple Mac Studio = 4x Faster LLM Inference with EXO 1.0 - https://news.ycombinator.com/item?id=45611912 - Oct 2025 (20 comments)
Nvidia DGX Spark: great hardware, early days for the ecosystem - https://news.ycombinator.com/item?id=45586776 - Oct 2025 (111 comments)
NVIDIA DGX Spark In-Depth Review: A New Standard for Local AI Inference - https://news.ycombinator.com/item?id=45575127 - Oct 2025 (93 comments)
Nvidia DGX Spark - https://news.ycombinator.com/item?id=45008434 - Aug 2025 (207 comments)
Nvidia DGX Spark - https://news.ycombinator.com/item?id=43409281 - March 2025 (10 comments)
- My hope was to find a system that does ASR, then LLM processing with MCP tool use, and finally TTS: "Put X on my todo list" / "Mark X as done" -> the LLM thinks, reads the todo list, edits the todo list, and tells me "I added X to your todo list"; "Turn all the lights off" -> the LLM thinks and uses MCP to turn off the lights -> "Lights have been turned off"; "Send me an email at 8pm reminding me to do" ... -> "Email has been scheduled for 8pm".
That's all I want. It does not have to be fast, but it must be capable of doing all of that.
Oh, and it should be energy efficient. Very important for a 24/7 machine.
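The ASR -> LLM+MCP -> TTS loop described here can be sketched roughly like this. This is a hypothetical skeleton: `asr`, `llm_with_tools`, and `tts` are stubs standing in for real components (an ASR model, a local LLM with MCP tool calling, and a TTS engine), and the tool-dispatch logic is invented for illustration:

```python
# Hypothetical sketch of the voice-assistant loop described above.
# asr(), llm_with_tools(), and tts() are stand-ins for real components
# (e.g. an ASR model, a local LLM with MCP tool calling, a TTS engine).

TODO_LIST: list[str] = []

def asr(audio: bytes) -> str:
    # Stub: a real implementation would transcribe the audio.
    return "Put milk on my todo list"

def llm_with_tools(text: str) -> str:
    # Stub tool-use step: a real LLM would pick and call an MCP tool.
    if "todo list" in text:
        item = text.split("Put ", 1)[1].split(" on", 1)[0]
        TODO_LIST.append(item)
        return f"I added {item} to your todo list"
    return "Sorry, I can't do that yet"

def tts(reply: str) -> bytes:
    # Stub: a real implementation would synthesize speech.
    return reply.encode()

def handle_utterance(audio: bytes) -> bytes:
    # ASR -> LLM + tools -> TTS, as described in the comment.
    return tts(llm_with_tools(asr(audio)))
```

The point of the sketch is that the orchestration itself is trivial; the hard parts are the quality of each stage and the idle power draw of keeping the models resident 24/7.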
by irusensei
1 subcomment
- Why does every computer listing nowadays look the same, with the glowing golden-and-blue chip images and the dynamic images that appear as you scroll down?
Please just give me a good old HTML table with the specs, will ya?
- These are primarily useful for developing CUDA targeted code on something that sits on your desk and has a lot of RAM.
They're not the best choice for anyone who wants to run LLMs as fast and cheap as possible at home. Think of it like a developer tool.
These boxes are confusing the internet because they've let the marketing teams run wild (or at least the marketing LLMs run wild) trying to make them out to be something everyone should want.
by whatever1
4 subcomments
- Any good ideas for what these can be used for?
I am still trying to think of a use case that a Ryzen AI Max/MacBook or a plain gaming GPU cannot cover.
by simlevesque
1 subcomment
- I really wish I had the kind of money to try my hands at it.
- Is there something similar with twice the memory/bandwidth? I would seriously consider that, to run any frontier open-source model locally at usable speed. 128GB is almost enough.
by maxbaines
3 subcomments
- Looks like a pretty useful offering: 128GB of unified memory, with the ability to be chained. In the UK the release price looks to be £2999.99. Nice to see AI inference becoming available to us all, rather than relying on a GPU (3090, etc.).
https://www.scan.co.uk/products/asus-ascent-gx10-desktop-ai-...
- This bit of the FAQ was such a non-answer to their own question, you really have to wonder:
>> What is the memory bandwidth supported by Ascent GX10?
> AI applications often require a bigger memory. With the NVIDIA Blackwell GPU that supports 128GB of unified memory, ASUS Ascent GX10 is an AI supercomputer that enables faster training, better real-time inference, and support larger models like LLMs.
- Funny to wake up and see this on the front page; I literally just bought a pair last night for work (and play), somewhat on a whim, after comparing the available models. This one was available the soonest and cheapest; CDW is even giving $100 off, so $2,900 pre-tax.
by jauntywundrkind
0 subcomments
- Really interested to see if anyone starts using the fancy high-end ConnectX-7 NIC in these DGX Spark / GB10-derived systems. 200Gbit RDMA is available and would be incredible to see in use here.
- Which models will this be able to run at an acceptable token/s rate?
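As a rough first-order answer: decode speed on these boxes is usually memory-bandwidth-bound, since each generated token streams all the (active) weights from memory once. A back-of-envelope sketch, using the ~273 GB/s figure cited elsewhere in the thread (this is an upper bound; real throughput will be lower, and the model sizes below are illustrative quantized footprints):

```python
def decode_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Upper-bound decode speed for a memory-bandwidth-bound LLM:
    every generated token must read all weights from memory once."""
    return bandwidth_gb_s / model_size_gb

# Illustrative dense-model footprints at ~273 GB/s
# (MoE models only stream their active parameters, so they fare better):
for name, size_gb in [("8B @ Q8", 8.0), ("70B @ Q4", 40.0), ("120B @ Q4", 70.0)]:
    print(f"{name}: <= {decode_tokens_per_sec(273.0, size_gb):.1f} tok/s")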
- What a shame. This would have been a much more powerful machine if it were built around AMD products.
At least with this, you get to pay both the Nvidia and the Asus tax!
- Does anyone have any information on how much this will cost? Or is it one of those products where if you have to ask you can't afford it.
- These AI boxes resemble gaming consoles in both form factor and architecture, which makes me curious whether they could make good gaming machines.
by NSUserDefaults
0 subcomments
- Really looking forward to getting this used for $50 in 6 years just for kicks.
- How much does that thing cost? I don't see a price on the page.
- Memory bandwidth is a joke. You would think by now somebody would come out with a well-balanced machine for inference instead of always handicapping one of the important aspects. Feels like a conspiracy.
At least the M5 Ultra should finally balance things, given the significant improvements to prompt processing in the M5 from what we've seen. Apple has had significantly higher memory bandwidth since the M1 series, now approaching 5 years old. Surely an Nvidia machine like this could have at bare minimum 500GB/s if they cared in the slightest about competition.
- > and support larger models like LLMs
To turn your petaFLOP into petaSLOP.
- I was really hyped about this, but then I watched videos and it's just meh.
What is the purpose of this thing?
by frogperson
0 subcomments
- That is a seriously infuriating website, at least on mobile.
- Is this another product they're pushing out for publicity? I mean, how much testing has been done on this product? We need more specs and testing results to illuminate its capabilities and practicality.
- If you touch the image when scrolling on mobile then it opens when you lift your finger. Then when you press the cross in the corner to close the image, the search button behind it is activated.
How can a serious company not notice these glaring issues in their websites?
- These very narrow speed measurements are getting out of hand:
1 petaFLOP using FP4, that's 4 petaFLOPS using FP1 and infinite petaFLOPS using FP0.