What this statement is missing is that aaa coverage is resolved immediately, while msaa coverage is resolved later in a separate step, with extra data being buffered in between. This matters because msaa is unbiased, while aaa is biased toward too much coverage as soon as two paths partially cover the same pixel. In other words, aaa becomes incorrect once you draw overlapping or self-intersecting paths.
Think about drawing the same path over and over in the same place: aaa will become darker with every iteration, while msaa is idempotent and will not change after the first iteration.
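To make the bias concrete, here is a minimal sketch (plain C, with made-up numbers for one edge pixel that the path covers 50%): analytic AA composites that 0.5 coverage as alpha on every draw, while an MSAA resolve just re-counts the same covered subsamples.

    #include <stdio.h>

    /* Illustration only: one edge pixel, half covered by the path.
     * Analytic AA blends the coverage as alpha on every draw, so
     * repeated draws of the same geometry keep darkening the pixel.
     * MSAA re-covers the same subsamples, so its resolve is stable. */
    int main(void) {
        double coverage = 0.5; /* fraction of the pixel the path covers  */
        double aaa = 0.0;      /* accumulated alpha with analytic AA     */
        int samples_hit = 2;   /* 2 of 4 MSAA subsamples inside the path */

        for (int draw = 1; draw <= 3; draw++) {
            aaa = aaa + (1.0 - aaa) * coverage;  /* "over" blend         */
            double msaa = samples_hit / 4.0;     /* resolve: always 2/4  */
            printf("draw %d: aaa=%.3f  msaa=%.3f\n", draw, aaa, msaa);
        }
        /* draw 1: aaa=0.500  msaa=0.500
           draw 2: aaa=0.750  msaa=0.500
           draw 3: aaa=0.875  msaa=0.500 */
        return 0;
    }

The aaa edge keeps creeping toward full opacity even though the geometry never changes, which is exactly the darkening you see with overlapping or self-intersecting paths.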
Unfortunately, this is a little-known fact even in the exquisite circles of 2D vector graphics people, who often present aaa as a silver bullet, which it is not.
An open-source vector graphics engine with GPU backends (WebGPU, OpenGL) that runs on everything from microcontrollers to browsers. Now a Linux Foundation project.
https://github.com/thorvg/thorvg
(Disclosure: I'm the CTO at LottieFiles; we build and maintain ThorVG in-house, with community contributions from individuals and companies like Canva.)
It should probably mention that this is only sufficient for some use cases, but not for high-quality ones.
E.g. if you were to use this for rendering font glyphs into something like a static image (or slow-rolling titles/credits), you probably want a higher-quality filter [1].
[1] https://steamcdn-a.akamaihd.net/apps/valve/2007/SIGGRAPH2007...
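For what it is worth, here is a rough sketch of what "higher quality filter" can mean in this context (plain C; the ramp widths are arbitrary choices for illustration): mapping a signed distance from the glyph edge to coverage with the usual one-pixel linear ramp versus a wider smoothstep ramp. Making that filter width a free parameter at sampling time is one of the things the distance-field approach in [1] buys you.

    #include <stdio.h>

    /* Sketch: two ways to turn a signed distance to a glyph edge
     * (in pixels, negative = inside) into a coverage value.
     * The one-pixel linear ramp is the usual cheap filter; a wider,
     * smoother ramp trades a little sharpness for less stair-stepping,
     * which matters more for static text than for moving content. */

    static double clamp01(double x) { return x < 0 ? 0 : (x > 1 ? 1 : x); }

    /* cheap filter: linear ramp over one pixel centered on the edge */
    static double coverage_box(double d) { return clamp01(0.5 - d); }

    /* smoother filter: cubic smoothstep over a 'width'-pixel ramp */
    static double coverage_smooth(double d, double width) {
        double t = clamp01(0.5 - d / width);
        return t * t * (3.0 - 2.0 * t);
    }

    int main(void) {
        for (double d = -1.0; d <= 1.0; d += 0.25)
            printf("d=%+.2f  box=%.3f  smooth=%.3f\n",
                   d, coverage_box(d), coverage_smooth(d, 1.5));
        return 0;
    }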
In fact, you could likely use the geometry stage to create arbitrarily dense vertices based on path data passed to the shader without needing any new GPU features.
Why is this not done? Is the CPU render still faster than these options?
The best way to draw a circle on a GPU is to start with a large triangle and keep adding triangles along its edges until the new triangles would be smaller than a pixel, at which point there is nothing left to add.
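As a sketch of that stopping criterion (plain C; the half-pixel tolerance is an arbitrary choice here): the error of replacing a circular arc by its chord is the sagitta r * (1 - cos(theta/2)), so for a circle you can solve for the segment count up front instead of refining edge by edge.

    #include <math.h>
    #include <stdio.h>

    /* How many chord segments does a circle of the given radius need
     * so that no point of the true circle is farther than tol_px from
     * the polygon?  Solves r * (1 - cos(theta/2)) <= tol for theta. */
    static int circle_segments(double radius_px, double tol_px) {
        if (radius_px <= tol_px) return 3;  /* tiny circle: a triangle will do */
        double theta = 2.0 * acos(1.0 - tol_px / radius_px);
        int n = (int)ceil(2.0 * acos(-1.0) / theta);  /* 2*pi / theta */
        return n < 3 ? 3 : n;
    }

    int main(void) {
        double radii[] = { 2.0, 10.0, 100.0, 1000.0 };
        for (int i = 0; i < 4; i++)
            printf("radius %6.0f px -> %4d segments\n",
                   radii[i], circle_segments(radii[i], 0.5));
        return 0;
    }

The same idea drives the adaptive version: split an edge only while the new vertex would move the outline by more than the tolerance.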
There is some context in this 13-year-old discussion: https://news.ycombinator.com/item?id=5345905#5346541
I am curious whether the trade-off that made CPU rendering faster than doing it on the GPU has changed in the last decade.
Did Quartz 2D Extreme (later QuartzGL) ever become enabled by default on macOS?
NV_path_rendering solved this in 2011. https://developer.nvidia.com/nv-path-rendering
It never became a standard but was a compile-time option in Skia for a long time. Skia of course solved this the right way.