by h4kunamata
16 subcomments
- >Requirements
>A will to live (optional but recommended)
>LLVM is NOT required. BarraCUDA does its own instruction encoding like an adult.
>Open an issue if there's anything you want to discuss. Or don't. I'm not your mum.
>Based in New Zealand
Oceania's sense of humor is like no other, haha.
The project owner strongly emphasizes the no-LLM policy; in a world of AI slop this is so refreshing.
The sheer amount of knowledge required to even start such a project is really something, and proving the manual wrong at the machine-language level is something else entirely (a toy sketch of what hand-rolled instruction encoding involves is below).
When it comes to AMD, "no CUDA support" is the biggest "excuse" to join NVIDIA's walled garden.
Godspeed to this project; the more competition, the less NVIDIA can keep distorting PC parts pricing.
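For the curious: "does its own instruction encoding" means the compiler packs opcode and operand fields into GPU machine words itself instead of handing IR off to LLVM. A minimal sketch with a made-up field layout (the real RDNA3 encodings live in AMD's ISA manual; this is not them):

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical 32-bit encoding, invented purely for illustration:
       [31:24] opcode | [23:16] dst vreg | [15:8] src0 vreg | [7:0] src1 vreg */
    static uint32_t encode_valu(uint8_t opcode, uint8_t dst,
                                uint8_t src0, uint8_t src1) {
        return ((uint32_t)opcode << 24) | ((uint32_t)dst << 16) |
               ((uint32_t)src0 << 8) | (uint32_t)src1;
    }

    int main(void) {
        /* "v2 = v0 + v1" as one fake machine word */
        uint32_t word = encode_valu(0x03, 2, 0, 1);
        printf("0x%08X\n", word); /* prints 0x03020001 */
        return 0;
    }

A real encoder emits thousands of such words into a code buffer the driver uploads, handling every format quirk (VOP1/VOP2/VOP3, literal operands, modifiers) the ISA manual documents.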
- The first issue created by someone other than the author is from geohot himself... the GOAT: https://github.com/Zaneham/BarraCUDA/issues/17
I would love to see these folks working together on this to break apart Nvidia's stranglehold on the GPU market (which, according to the internet, allows them an insane 70% profit margin, thereby raising costs for all users worldwide).
- > # It's C99. It builds with gcc. There are no dependencies.
> make
Beautiful.
- Wouldn't it be funny and sad if a bunch of enthusiasts pulled off what AMD couldn't :)
- Not familiar with CUDA development, but doesn't CUDA support C++? Skipping Clang/LLVM and going "pure" C seems quite limiting in that case.
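It does support C++, and a lot of idiomatic CUDA leans on templates, e.g. one kernel covering several element types, which a C-only toolchain would have to emulate with macros or code generation. A small illustrative (not BarraCUDA-specific) example:

    #include <cuda_runtime.h>

    /* One templated kernel serves float, double, int, ... -- the C++
       convenience a C99-only frontend would have to macro-expand away. */
    template <typename T>
    __global__ void scale(T* data, T factor, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] *= factor;
    }

    void scale_floats(float* d_data, float factor, int n) {
        int threads = 256;
        int blocks = (n + threads - 1) / threads;
        scale<float><<<blocks, threads>>>(d_data, factor, n);
    }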
- Hah, the capitalization of the title of this post only just now made me realize why the GPU farm at my job is called "barracuda". That's pretty funny.
by ByThyGrace
2 subcomments
- How feasible is it for this to target earlier AMD archs down to even GFX1010, the original RDNA series aka the poorest of GPU poor?
- Is OpenCL a thing anymore? I sorta thought that's what it was supposed to solve.
But I digress; just from a quick poke around... I don't know what I'm looking at, but it's impressive.
by bravetraveler
0 subcomments
- > No HIP translation layer.
Storage capacity everywhere rejoices
- From perusing the code, the translation seems quite complex.
Shout out to https://github.com/vosen/ZLUDA which is also in this space and quite popular.
I got ZLUDA working with ComfyUI well enough.
- This is likely supremely naive, but I would think the lift in getting coverage for an entire library to a target hardware's native assembly is largely a matter of mapping/translating functions, building acceptance tests, and benchmarking/optimization; all three of those feel like they should be greatly assisted by LLM-augmented workflows.
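To make the "mapping/translating" part concrete: at its most naive it is a lookup problem, as in the sketch below (the native_* names are hypothetical; the real lift is semantics like streams, memory model, and launch geometry, which never fit in a table).

    #include <stddef.h>
    #include <string.h>

    /* Naive name-mapping sketch; the native_* names are invented. */
    typedef struct {
        const char* cuda_name;
        const char* native_name;
    } api_mapping;

    static const api_mapping kMap[] = {
        { "cudaMalloc",            "native_gpu_alloc" },
        { "cudaMemcpy",            "native_gpu_copy"  },
        { "cudaDeviceSynchronize", "native_gpu_wait"  },
    };

    static const char* translate(const char* cuda_name) {
        for (size_t i = 0; i < sizeof kMap / sizeof kMap[0]; ++i)
            if (strcmp(kMap[i].cuda_name, cuda_name) == 0)
                return kMap[i].native_name;
        return NULL; /* unmapped calls are where the hard work lives */
    }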
by takeaura25
0 subcomments
- We're running AI inference workloads on Nvidia GPUs, and the cost is a real pain point. Projects like this matter because GPU vendor lock-in directly affects what startups can afford to build. Would love to see how this performs on common inference ops like conv2d and attention layers.
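For reference, the attention half of such a benchmark bottoms out in kernels like this naive scaled dot-product score; a real comparison would pit this (plus softmax, conv2d, etc.) against cuBLAS or whatever BarraCUDA generates natively:

    #include <cuda_runtime.h>

    /* Naive scores S = Q * K^T / sqrt(d), the innermost piece of an
       attention layer; illustrative only, not an optimized implementation. */
    __global__ void attention_scores(const float* Q, const float* K, float* S,
                                     int seq_len, int d) {
        int row = blockIdx.y * blockDim.y + threadIdx.y; /* query index */
        int col = blockIdx.x * blockDim.x + threadIdx.x; /* key index */
        if (row < seq_len && col < seq_len) {
            float acc = 0.0f;
            for (int k = 0; k < d; ++k)
                acc += Q[row * d + k] * K[col * d + k];
            S[row * seq_len + col] = acc * rsqrtf((float)d);
        }
    }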
by pyuser583
2 subcomments
- I was hoping AMD would keep making gaming cards, now that NVIDIA is an AI company. Somebody has to, right?
- > No LLVM. No HIP translation layer. No "convert your CUDA to something else first."
What is the problem with such approaches?
- <checks stock market activity>
- No, please no! AMD GPUs are still somewhat affordable. Does this mean their cards become compatible with CUDA-based AI software? Don't ruin the market for desktop GPUs completely, please don't. AI is costing me hundreds of extra Euros in hardware already. I hate this so much.
by BatteryMountain
0 subcomments
- In the old days we had these kinds of wars with CPU instruction sets & extensions (SSE, MMX, x64, ...). In a way I feel that CUDA should be opened up & generalized so that other manufacturers can use it too, the same way CPUs evened out on most instruction sets. That way the whole world won't be beholden to one manufacturer (Big Green), and it would calm down the scarcity effect we have now. I'm not an expert on GPU tech; would this be something that is possible? Is CUDA a driver feature or a hardware feature?
- Nice! It was only a matter of time until someone broke Nvidia's software moat. I hope Nvidia's lawyers don't know where you live.
by sreekanth850
0 subcomments
- AMD should sponsor this. The world needs to get rid of this NVIDIA monopoly.
- Love to see just a simple compiler in C with a Makefile instead of some amalgamation of 5 languages, 20 libraries, and some autotools/cmake shit.
- Note that this targets GFX11, which is RDNA3. Great for consumer, but not the enterprise (CDNA) level at all. In other words, not a "cuda moat killer".
by phoronixrly
4 subcomments
- Putting a registered trademark in your project's name is quite a brave choice. I hope they don't get a c&d letter when they get traction...
by whateverboat
0 subcomments
- I think ChipStar is better; fewer IP issues.
- What's the benefit of this over tinygrad?
by quantumwoke
2 subcomments
- A lot of people in this thread don't seem to have caught up with the fact that AMD has worked very hard on its CUDA translation layer, and for the most part it just works now: you can build CUDA projects on AMD just fine on modern hardware/software.
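For anyone who hasn't tried it: HIP mirrors the CUDA runtime API nearly symbol-for-symbol, and the hipify tools do the renaming mechanically. Roughly:

    #include <stddef.h>
    #include <cuda_runtime.h>

    /* hipify-perl / hipify-clang rewrite these names mechanically:
       cudaMalloc -> hipMalloc, cudaMemcpy -> hipMemcpy, cudaFree -> hipFree */
    void roundtrip(float* host_buf, size_t n) {
        float* dev_buf = NULL;
        cudaMalloc((void**)&dev_buf, n * sizeof(float));
        cudaMemcpy(dev_buf, host_buf, n * sizeof(float),
                   cudaMemcpyHostToDevice);
        /* ... kernel launches ... */
        cudaMemcpy(host_buf, dev_buf, n * sizeof(float),
                   cudaMemcpyDeviceToHost);
        cudaFree(dev_buf);
    }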
- Will this run on cards that don't have ROCm / latest ROCm support? Because if not, it's only gonna be a tiny subset of a tiny subset of cards that this will allow CUDA to run on.
- Wow!! Congrats on the launch!
Seeing insane investments (in time/effort/knowledge/frustration) like this makes me enjoy HN!!
(And there is always the hope that someone at AMD will see this and actually pay you to develop the thing... Who knows.)
- Great work!
by mrdootdoot
1 subcomment
- I don’t understand the elitism about avoiding LLMs.
Good luck -
- See also: https://scale-lang.com/
Write CUDA code. Run Everywhere.
Your CUDA skills are now universal. SCALE compiles your unmodified applications to run natively on any accelerator, ending the nightmare of maintaining multiple codebases.