Me, I adjusted Atkinson a few years ago as I prefer the "blown out" effect: https://github.com/KodeMunkie/imagetozxspec/blob/master/src/...
A similar custom approach to prevent second-pass diffusion is in the code too; it's a slightly different implementation that processes the image in 8x8 pixel "attribute" blocks, where the error is not allowed to leave those bounds. The same artifacts occur there too, but are more distinct as a consequence. https://github.com/KodeMunkie/imagetozxspec/blob/3d41a99aa04...
Nb. 8x8 is not arbitrary: the ZX Spectrum computer this is used for only allowed 2 colours in every 8x8 block, so seeing the artifact on a real machine matters less, since the whole image potentially had 8x8 artifacts anyway.
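For anyone curious what the 8x8 constraint looks like in code, here's a minimal Racket sketch of the idea (not the linked Java implementation); `img`, `spread-error!`, and the neighbour list are hypothetical stand-ins:

    ;; Error is only diffused to neighbours inside the same 8x8 attribute block.
    (define (same-attribute-block? x1 y1 x2 y2)
      (and (= (quotient x1 8) (quotient x2 8))
           (= (quotient y1 8) (quotient y2 8))))

    ;; `neighbours` is a list of (dx dy weight); `spread-error!` is a
    ;; hypothetical helper that adds weighted error to a pixel of `img`.
    (define (diffuse-within-block! img x y err neighbours)
      (for ([n (in-list neighbours)])
        (define nx (+ x (first n)))
        (define ny (+ y (second n)))
        (when (same-attribute-block? x y nx ny)
          (spread-error! img nx ny (* err (third n))))))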
I made some impractical dithering algorithms a while ago, such as distributing the error to far away pixels or distributing more than 100% of the error: https://burkhardt.dev/2024/bad-dithering-algorithms/
Playing around with the distribution matrices and exploring the resulting patterns is great fun.
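To give a concrete flavour of that kind of experiment (my own guess at the spirit of it, not the matrices from the post), here are error-distribution tables as (dx dy weight) entries; Floyd-Steinberg spreads exactly 100% of the error, while an "over-diffusing" variant uses weights that sum to more than 1:

    ;; Standard Floyd-Steinberg kernel: weights sum to exactly 1.
    (define floyd-steinberg
      '((1 0 7/16) (-1 1 3/16) (0 1 5/16) (1 1 1/16)))

    ;; Hypothetical "bad" kernel: weights sum to 5/4, i.e. more than 100%
    ;; of the error, and one tap lands two pixels away.
    (define over-diffused
      '((1 0 7/16) (-1 1 3/16) (0 1 5/16) (1 1 1/16) (2 0 1/4)))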
The reason is that the described approach will estimate the error-correction term incorrectly, because the input RGB values are non-linear sRGB.
The article doesn't mention this at all, so I assume the author isn't aware of what color spaces are, or that an 8-bit/channel RGB value will most likely not represent linear color.
This is not bashing the article; most people who start doing anything with color in CG without reading up on the relevant theory first get this wrong.
And coming up with your own dither is always cool.
See e.g. [1] for an in-depth explanation why the linearization stuff matters.
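Concretely, the fix is to convert each 8-bit channel from sRGB to linear light before quantizing and computing the error, and convert back only for display. A minimal sketch of the standard sRGB transfer functions (not code from the article), operating on values in [0, 1]:

    ;; Standard sRGB <-> linear-light transfer functions.
    (define (srgb->linear c)
      (if (<= c 0.04045)
          (/ c 12.92)
          (expt (/ (+ c 0.055) 1.055) 2.4)))

    (define (linear->srgb c)
      (if (<= c 0.0031308)
          (* c 12.92)
          (- (* 1.055 (expt c (/ 1 2.4))) 0.055)))

    ;; Example: 8-bit 128 is only about 22% linear intensity, not 50%.
    (srgb->linear (/ 128 255.0)) ; => ~0.216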
Side note on Lisp formatting: the author mixes idiomatic cuddling of parentheses with some more curly-brace-like formatting, and then cuddles a trailing small term so that it doesn't line up vertically (like people sometimes do in other languages with, e.g., a numeric constant after a multi-line closure argument in a timer or event-handler registration).
One thing some Lisp people like about the syntax is that the parts of a complex expression can line up vertically, to expose the structure.
For example, here, you can clearly see that the `min` is between 255 and this big other expression:
    (define luminance
      (min (exact-round (+ (* 0.2126 (bytes-ref pixels-vec (+ pixel-pos 1)))   ; red
                           (* 0.7152 (bytes-ref pixels-vec (+ pixel-pos 2)))   ; green
                           (* 0.0722 (bytes-ref pixels-vec (+ pixel-pos 3))))) ; blue
           255))
Or, if you're running out of horizontal space, you might do this:
    (define luminance
      (min (exact-round
            (+ (* 0.2126 (bytes-ref pixels-vec (+ pixel-pos 1)))   ; red
               (* 0.7152 (bytes-ref pixels-vec (+ pixel-pos 2)))   ; green
               (* 0.0722 (bytes-ref pixels-vec (+ pixel-pos 3))))) ; blue
           255))
Or you might decide those comments should be language, and do this:
    (define luminance
      (let ((red (bytes-ref pixels-vec (+ pixel-pos 1)))
            (green (bytes-ref pixels-vec (+ pixel-pos 2)))
            (blue (bytes-ref pixels-vec (+ pixel-pos 3))))
        (min (exact-round (+ (* red 0.2126)
                             (* green 0.7152)
                             (* blue 0.0722)))
             255)))
One of my teachers would still call those constants "magic numbers", even when their purpose is obvious in this very restricted context, and insist that you bind them to names in the language. Left as an exercise to the reader.
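For what that exercise might look like (my sketch, not the article's code), with the Rec. 709 luma coefficients bound to names:

    ;; Rec. 709 luma coefficients, named instead of left as magic numbers.
    (define red-weight   0.2126)
    (define green-weight 0.7152)
    (define blue-weight  0.0722)

    (define luminance
      (min (exact-round (+ (* red-weight   (bytes-ref pixels-vec (+ pixel-pos 1)))
                           (* green-weight (bytes-ref pixels-vec (+ pixel-pos 2)))
                           (* blue-weight  (bytes-ref pixels-vec (+ pixel-pos 3)))))
           255))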
Surface-stable fractal dithering explained
There's a follow-up video to that one.
* https://github.com/racket/racket/commit/6b2e5f4014ed95c9b883...
* https://github.com/racket/racket/commit/f2a1773422feaa4ec112...
https://forums.tigsource.com/index.php?topic=40832.msg136374...
I'm still trying to improve it a little. https://git.ache.one/dither/tree/?h=%f0%9f%aa%b5
I didn't publish it because it's hard to actually put dithered images on the web: you can't resize a dithered image, so you have to dither on the fly. That's why there are some artifacts in the images in the article. I still need to learn about dithering.
Reference: https://sheep.horse/2022/12/pixel_accurate_atkinson_ditherin...
Cool links about dithering:
- https://beyondloom.com/blog/dither.html
- https://blog.maximeheckel.com/posts/the-art-of-dithering-and...
    (for* ([i height]
           [j width])
      ...)
I've always wondered about this. Sure, if you're changing the contrast then that's a subjective change.
But it's easy to write a metric to confirm the degree to which brightness and contrast are maintained correctly.
And then, is it really impossible to develop an objective metric for the level of visible detail that is maintained? Is that really psychovisual and therefore subjective? Is there really nothing we can use from information theory to calculate the level of detail that emerges out of the noise? Or something based on maximum likelihood estimation?
I'm not saying it has to be fast, or that we can prove a particular dithering algorithm is theoretically perfect. But I'm surprised we don't have an objective, quantitative measure to prove that one algorithm preserves more detail than another.
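A crude version of such a metric is easy to sketch (this is purely illustrative, not from the article or any standard): box-blur both the original and the dithered image to approximate viewing at a distance, then compare mean brightness and per-pixel RMS error. Images here are assumed to be row-major vectors of grayscale values in [0, 1]:

    ;; Box blur as a stand-in for the eye's low-pass filtering.
    (define (box-blur img w h radius)
      (for*/vector ([y (in-range h)] [x (in-range w)])
        (define-values (sum count)
          (for*/fold ([sum 0.0] [count 0])
                     ([dy (in-range (- radius) (add1 radius))]
                      [dx (in-range (- radius) (add1 radius))])
            (define nx (+ x dx))
            (define ny (+ y dy))
            (if (and (< -1 nx w) (< -1 ny h))
                (values (+ sum (vector-ref img (+ (* ny w) nx))) (add1 count))
                (values sum count))))
        (/ sum count)))

    (define (mean v)
      (/ (for/sum ([x (in-vector v)]) x) (vector-length v)))

    (define (rms-error a b)
      (sqrt (mean (for/vector ([x (in-vector a)] [y (in-vector b)])
                    (expt (- x y) 2)))))

    ;; Smaller numbers = brightness and (blurred) detail better preserved.
    (define (dither-score original dithered w h)
      (define blurred-orig (box-blur original w h 2))
      (define blurred-dith (box-blur dithered w h 2))
      (list (abs (- (mean blurred-orig) (mean blurred-dith))) ; brightness drift
            (rms-error blurred-orig blurred-dith)))           ; low-pass detail error

It ignores everything psychovisual beyond the blur, but it already gives a number you can use to rank two dithers on the same input.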
Also I think the final result has some pretty distracting structured artifacts compared to e.g. blue noise dithering.
Of course, unless you are trying to implement something completely insane like Surface-Stable Fractal Dithering https://www.youtube.com/watch?v=HPqGaIMVuLs