Sadly, as you can tell, they have not taken me up on my requests. Awesome that other people got it working!
The game benchmarks are fun but the LLM improvements are where this gets really interesting for practical use. I love Apple platforms as an approachable way to run local models with a lot of RAM, but their relatively slow prompt processing speed is often overlooked.
> Here you can see the big issue with Macs: the prompt processing (aka “prefill”) speed. It just gets worse and worse, the longer the prompt gets. At a 4K-token prompt, which doesn’t seem very long, it takes 17 seconds for the M4 MacBook Air to parse before we even start generating a response. Meanwhile, if you strap the eGPU to it, it’ll only take 150ms. It’s 120x faster.
The prefill problem goes unnoticed when you’re playing around with the LLM in small chats. When you start trying to use it for larger pieces of work, the compute limit becomes a bottleneck.
The time-to-first-token (TTFT) charts don’t look bad until you notice that they had to be plotted on a logarithmic scale, because the Mac platforms were so much slower than full GPU compute.
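To make the scale concrete, here’s a back-of-envelope sketch. The prefill speeds are assumptions backed out of the article’s 4K-token example (~17 s vs ~150 ms), and the linear scaling is actually optimistic for the Mac, since the article notes prefill gets worse as prompts grow:

```python
# Rough TTFT estimate: prompt_tokens / prefill_speed.
# Speeds are assumptions inferred from the article's 4K-token example
# (~17 s on the M4 Air vs ~150 ms on the eGPU); real numbers vary
# with model, quantization, and context length.
M4_AIR_PREFILL = 4096 / 17.0   # ~240 tok/s implied
EGPU_PREFILL = 4096 / 0.15     # ~27,000 tok/s implied

def ttft_seconds(prompt_tokens: int, prefill_tok_per_s: float) -> float:
    """Time to first token, ignoring per-request overhead."""
    return prompt_tokens / prefill_tok_per_s

for prompt in (512, 4096, 32768):
    mac = ttft_seconds(prompt, M4_AIR_PREFILL)
    egpu = ttft_seconds(prompt, EGPU_PREFILL)
    print(f"{prompt:>6}-token prompt: M4 Air ~{mac:6.1f}s, eGPU ~{egpu:.2f}s")
```

Run that and the gap is roughly two orders of magnitude at every prompt length, which is exactly why a linear-scale chart would flatten the GPU bars into invisibility.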
I understand that this is true; it seems that Doom does support Vulkan, but you would need to add VK_NV_glsl_shader to MoltenVK. Probably much less work than what went into hanging an RTX 5090 off of an M4. Still, kudos to Scott, and the local AI inference speeds are pretty cool. What a crazy project! <applause>
(EDIT: Apple agrees with my impression. “To use an eGPU, a Mac with an Intel processor is required.” And, on top of that, the officially supported eGPUs were all AMD, not NVIDIA. https://support.apple.com/en-us/102363)
It'd be amazing if Apple would provide better support, and allow more than that 1.5 GB window to make this easier. Arm overall has some quirks with PCIe devices, but at least in Linux, it's gotten so much easier since most modern drivers treat arm64 as a first class citizen.
The problem is that `max-num-seqs` and `max-model-len` fight each other over the same KV-cache budget, and unless you're in pure single-client mode you'll need multiple slots, so to speak.
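For anyone wondering why they fight: the KV cache has to reserve room for up to `max-num-seqs` × `max-model-len` tokens at once, so with a fixed memory budget, raising one pushes the other down. A minimal sketch of the arithmetic; the layer/head/dim numbers below are illustrative assumptions, not any specific model's config:

```python
# KV-cache footprint grows with max_num_seqs * max_model_len.
# Model shape parameters here are illustrative placeholders.
def kv_cache_gib(max_num_seqs: int, max_model_len: int,
                 n_layers: int = 32, n_kv_heads: int = 8,
                 head_dim: int = 128, bytes_per_elem: int = 2) -> float:
    # Each cached token stores a K and a V vector per layer.
    per_token = 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem
    return max_num_seqs * max_model_len * per_token / 2**30

# The same ~16 GiB budget, split two different ways:
print(kv_cache_gib(max_num_seqs=1, max_model_len=131072))  # one long-context slot
print(kv_cache_gib(max_num_seqs=8, max_model_len=16384))   # eight shorter slots
```

Both configurations cost the same memory, which is the tradeoff: more concurrent slots means each one gets a shorter maximum context.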
Or, more likely, it will tell you something it doesn't know.
Reminds me of yesterday, when I was arguing with ChatGPT that the 5070 Ti was an actual video card. It kept trying to correct me, saying I must have meant a 4070 Ti, since no 5070 Ti exists.
I got Fallout 3 working on my M2 MBP as well as it ran on Windows back in the day. Temps were cool, battery life was decent. If they sold my college-years gaming collection (15-ish years ago) in a way that ran natively through GOG or Steam, I'd buy every single title.
Of course the author probably did that as a joke.
Another part of me is almost annoyed that Apple's complete apathy toward obvious computing use cases like this is rewarded by a project like this. I feel like Macs and macOS should not be rewarded for being so difficult to extend and use outside of Apple's narrow vision of the use case of their hardware.
Apple used to support this use case wholeheartedly, but we can see that it's abandoned on their end: Intel-only, and the newest generation of AMD GPUs supported are the 6000 series: https://support.apple.com/en-us/102363
I got tired of rewarding Apple for refusing to make a computer that makes the most of the technology available. This stuff is all a lot worse than just moving over to Linux or even Windows. With hardware like the Framework 13 Pro coming out, along with a surprisingly good set of premium PC laptops, I really don't think the Mac hardware is worth it anymore. Others have legitimately caught up, especially given Apple's aging MacBook Pro chassis with its horrible notch.
Bingo. This is exactly how I use LLMs. I like getting a gut check, seeing what the first recommendations are or whether there's some deep flaw in my intended approach, and I almost never copy/paste whatever it spits back or just follow its instructions.
"no - not in any practical sense today, and "maybe" only in a very deep, borderline-impractical research sense."
This is why humans will always rule over crappy LLMs.
It’s these people, not the ones who refuse to use LLMs, who are, as they say, “cooked”.