Also, Vulkan is labeled as Open Source. It is not open source.
There are other mistakes in that area as well. It claims WebGPU is limited to browsers. It is not. WebGPU is available as both a C++ (Dawn) and a Rust (WGPU) library. Both run on Windows, MacOS, Linux, iOS, and Android. It is arguably the most cross-platform library, and tons of native projects use both libraries.
Vulkan is also not really any more cross-platform than DirectX. DirectX runs on 2 platforms (listed in the article). Vulkan runs on a couple more: Android and Linux. It runs on Windows, but not on all Windows systems. For example, in a business context using Remote Desktop, Vulkan is rarely available. It is not part of Windows and is not installed by default; graphics card companies (NVidia, AMD) include it, Windows itself does not. Vulkan also does not run on MacOS or iOS.
* shadertoy - in-browser, the most popular and easiest to get started with
* Shadron - my personal preference due to ease of use and high capability, but a bit niche
* SHADERed - the UX can take a bit of getting used to, but it gets the job done
* KodeLife - heard of it, never tried it
Does anyone have a good resource for the stages such as:
- What kind of data formats do I design to pipe into the GPU? Describe it like I'm five: texels, arrays, buffers, etc.
- Describe the difference between the data formats of the traditional 3D workflow and the more modern compute shader data formats.
- Now that I've supplied data, obviously I want to supply transformations as well. Transforms are not commutative, so that implies there is a sequential order in which transforms are applied, which seems to contradict this whole article.
- The above point is more abstractly part of "supplying data into the GPU at a later stage". Am I crossing the CPU-GPU boundary multiple times before a frame is complete? If so, describe the process and how/why.
- There is some kind of global variable system in GPUs. Explain it. List every variable reachable from a fragment shader program (a rough sketch of what I mean is below).
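For concreteness, here is roughly the kind of thing I mean. This is a minimal GLSL sketch of per-vertex buffer data, a texture, a transform uniform, and a built-in variable; all the names are made up for illustration, nothing here comes from the article.

```glsl
// Vertex shader. On transform order: the matrices are usually composed on the
// CPU (model * view * projection) and uploaded as one uniform, so the shader
// itself doesn't care about the ordering. "u_mvp" is a made-up name.
#version 330 core
uniform mat4 u_mvp;                       // composed transform, set from the CPU
layout(location = 0) in vec3 a_position;  // per-vertex data read from a buffer
layout(location = 1) in vec2 a_uv;
out vec2 v_uv;                            // handed on to the fragment shader
void main() {
    gl_Position = u_mvp * vec4(a_position, 1.0);
    v_uv = a_uv;
}
```

```glsl
// Fragment shader. Uniforms and samplers are the "global variables": set from
// the CPU, read-only, and identical for every invocation. gl_FragCoord is a built-in.
#version 330 core
uniform vec2 u_resolution;    // made-up name: viewport size in pixels
uniform sampler2D u_texture;  // a texture: a grid of texels, sampled with filtering
in vec2 v_uv;
out vec4 fragColor;
void main() {
    vec2 screen = gl_FragCoord.xy / u_resolution;  // where this pixel is on screen
    vec4 texel = texture(u_texture, v_uv);         // read a (filtered) texel
    fragColor = mix(texel, vec4(screen, 0.0, 1.0), 0.25);
}
```

That obviously doesn't cover the compute-shader side of the question, but it's the level of walkthrough I'm hoping a resource covers.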
For example, if you want to draw a square with a pen, you put your pen where the square is, draw the outline, then fill it up. With a shader, for each pixel you look at where you are, calculate where the pixel is relative to the square, and output the fill color if it is inside the square. If you want to draw another square to the right, with the pen you move your pen to the right, but with the shader you move the reference coordinates to the left. Another way to see it is that you don't manipulate objects, you manipulate the space around the objects.
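A Shadertoy-flavored sketch of that square, with made-up numbers, just to make the "move the space, not the object" idea concrete:

```glsl
// For every pixel: where am I relative to the square? Fill if inside.
void mainImage(out vec4 fragColor, in vec2 fragCoord) {
    vec2 uv = fragCoord / iResolution.xy;   // this pixel's position, in 0..1

    vec2 squareCenter = vec2(0.5, 0.5);     // to draw the square further right,
    vec2 p = uv - squareCenter;             // increase this, which effectively
                                            // shifts the coordinates to the left
    float halfSize = 0.2;
    bool inside = abs(p.x) < halfSize && abs(p.y) < halfSize;

    fragColor = inside ? vec4(1.0, 0.5, 0.0, 1.0)   // fill color
                       : vec4(0.0, 0.0, 0.0, 1.0);  // background
}
```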
Vertex shaders are more natural, as the output is the position of your triangles, like the position of your pen if you were drawing on paper.
There are a few tiny conceptual things that could maybe be smoothed out to improve the story. I’m a fan of keeping writing for newcomers simple and accessible and not getting lost in details trying to be a Wikipedia on the subject, so I don’t know how much it matters; take these as notes, or just as pedantic nerdy nitpicks that you can ignore.
Shaders predate the GPU, they run perfectly fine on the CPU, and they’re used for ray tracing, so summarizing them as GPU programs for raster doesn’t explain what they are at all. Similarly, titling this as using an “x y coordinate” misses the point of shading, which at its most basic is to figure out the color of a sample, if we’re talking fragment shaders. Vertex shaders are just unfortunately misnamed; they’re not ‘shading’ anything. In that sense, a shader is just a specific kind of callback function. In OpenGL you get a callback for each vertex and a callback for each pixel, and the callback’s (shader’s) job is to produce the final value for that element, given whatever inputs you want. In 3D scenes, fragment shaders rarely use x y coordinates. They use the material, the incoming light direction, and the outgoing camera direction to “shade” the surface, i.e., figure out the color.
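To make that concrete, a bare-bones fragment shader in the 3D-scene sense might look like the sketch below; the names and the (Lambert + Blinn-Phong) lighting model are just illustrative, not anything from the article.

```glsl
#version 330 core
// "Shading" a surface sample: the inputs are surface, material, light, and view
// data rather than an x/y pixel coordinate.
in vec3 v_normal;          // surface normal, interpolated from the vertex shader
in vec3 v_worldPos;        // surface position in world space

uniform vec3 u_lightDir;   // direction towards the light
uniform vec3 u_cameraPos;  // camera position, for the view-dependent highlight
uniform vec3 u_albedo;     // material color

out vec4 fragColor;

void main() {
    vec3 N = normalize(v_normal);
    vec3 L = normalize(u_lightDir);
    vec3 V = normalize(u_cameraPos - v_worldPos);    // outgoing direction to the camera
    vec3 H = normalize(L + V);

    float diffuse  = max(dot(N, L), 0.0);            // classic Lambert term
    float specular = pow(max(dot(N, H), 0.0), 32.0); // simple Blinn-Phong highlight

    fragColor = vec4(u_albedo * diffuse + vec3(specular), 1.0);
}
```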
The section on GPUs is kind of missing the most important point about them: the fact that neighboring threads share the same instruction. The important part is the SIMT/SIMD execution model. But shaders actually aren’t conceptually different from CPU programming; they still (usually) present a single-threaded programming model, not a SIMT programming model. They can be run in parallel, and shaders tend to do the same thing (the same sequence of instructions) for every pixel, and that’s why they’re great and fast on the GPU, but they are programmed with the same sequential techniques we use on the CPU, and they can be run sequentially on the CPU too. There are no special parallel programming techniques needed, nor a different mindset (as is suggested multiple times). The fact that you don’t need a different mindset in order to get massively parallel, super fast execution is one of the reasons why shaders are so simple and elegant and effective.
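As a toy illustration (mine, not from the article): the Shadertoy-style shader below is nothing but a plain loop, a branch, and some arithmetic, i.e. ordinary sequential code. The GPU simply runs that same function for every pixel in lockstep.

```glsl
// Escape-time fractal: written exactly like CPU code, no parallel primitives.
void mainImage(out vec4 fragColor, in vec2 fragCoord) {
    // Map this pixel into a region of the complex plane.
    vec2 c = fragCoord / iResolution.xy * 3.0 - vec2(2.0, 1.5);
    vec2 z = vec2(0.0);
    float shade = 0.0;

    for (int i = 0; i < 64; i++) {                             // plain sequential loop
        z = vec2(z.x * z.x - z.y * z.y, 2.0 * z.x * z.y) + c;  // z = z^2 + c
        if (dot(z, z) > 4.0) {                                 // plain branch
            shade = float(i) / 64.0;
            break;
        }
    }

    fragColor = vec4(vec3(shade), 1.0);  // escape time as a grayscale value
}
```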
This is already hard enough as it is, but GPU programming (at least in its current state) is an order of magnitude worse in my experience. Tons of ways to get tripped up, endless trivial/arbitrary things you need to know or do, a seemingly bottomless pit of abstraction that contains countless bugs or performance pitfalls, hardware disparity, software/platform disparity, etc. Oh right, and a near complete lack of tooling for debugging. What little tooling there is only ever works on one GPU backend, or one OS, or one software stack.
I’m by no means an expert, but I feel our GPU programming “developer experience” standards are woefully out of touch, and the community seems happy to keep it that way.
I found the "What is a color space?" chapter even more interesting though, as it contains new things (for me).
It seems that only 3 of them are ready, though. I'm not sure why it asked me to enter a 'license key' (of what?)... are they paywalled?
I skimmed it but didn't see any mention of "ray marching", which is raytracing done in a shader. GPUs are pretty fast now. You can just do that. However, you do have to encode the scene geometry analytically in the shader - if you try to raytrace a big bag of triangles, it's still too slow. There's more info on this and other techniques at https://iquilezles.org/articles/
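A minimal sketch of the idea (my own toy example; see the articles at that link for the real techniques): the scene is a signed distance function, and the shader marches a ray through it for each pixel.

```glsl
// The whole "scene" is one analytic distance function: a sphere at z = 3.
float sceneSDF(vec3 p) {
    return length(p - vec3(0.0, 0.0, 3.0)) - 1.0;
}

void mainImage(out vec4 fragColor, in vec2 fragCoord) {
    // Build a camera ray for this pixel.
    vec2 uv = (fragCoord - 0.5 * iResolution.xy) / iResolution.y;
    vec3 ro = vec3(0.0);                 // ray origin
    vec3 rd = normalize(vec3(uv, 1.0));  // ray direction

    // March: repeatedly step forward by the distance to the nearest surface.
    float t = 0.0;
    bool hit = false;
    for (int i = 0; i < 100; i++) {
        float d = sceneSDF(ro + rd * t);
        if (d < 0.001) { hit = true; break; }  // close enough: we hit the surface
        t += d;
        if (t > 20.0) break;                   // marched past everything
    }

    fragColor = hit ? vec4(1.0, 0.6, 0.2, 1.0)   // flat color where the sphere is
                    : vec4(0.0, 0.0, 0.0, 1.0);  // background
}
```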