I'm not sure it's HN-crowd material, since it's easy enough information for most of us to dig up, assuming we didn't already know it. Yet it doesn't simplify things to the point of "technology is magic."
This is the scariest part to me:
> A pull request (https://github.com/jamespfennell/xz/pull/2) to a Go library by a 1Password employee is opened asking to upgrade the library to the vulnerable version
https://oxide-and-friends.transistor.fm/episodes/discovering...
Europe should have an equivalent scheme for programmers of important Open Source projects such as this one.
But in the video itself, they show that a normal ssh login took about 100 ms and the backdoored one took about 600 ms. That's almost six times as long. With numbers like that, you'd expect benchmark performance to drop significantly, and it should be obvious that something was wrong.
(I'm taking nothing away from Andres here. I think he's a brilliant engineer for actually finding the root cause of this himself. He is a hero. I'm just pointing out that 500 ms is not some obscure time interval.)
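For scale: even a crude wall-clock harness would make a 500 ms regression hard to miss. A hedged sketch (function names and the example command are invented; this is not how Andres actually found it):

```python
import subprocess
import time

def wall_ms(cmd: list[str]) -> float:
    """Wall-clock time of one run of cmd, in milliseconds."""
    t0 = time.perf_counter()
    subprocess.run(cmd, check=True, capture_output=True)
    return (time.perf_counter() - t0) * 1000.0

def best_of(cmd: list[str], runs: int = 5) -> float:
    """Best-of-N to damp scheduler noise, as login benchmarks usually do."""
    return min(wall_ms(cmd) for _ in range(runs))

# e.g. a login attempt against the suspect host vs. a known-good one:
#   best_of(["ssh", "-o", "BatchMode=yes", "test@suspect-host", "true"])
```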
...and yet, zero mention of systemd's recommendation for programs to link in the libsystemd kitchen sink just to call sd_notify() (which should really be its own library)
...and no mention of why systemd felt the need to preemptively load compression libraries, which it only needs for reading/writing compressed log files, even when you never touch a log file at all? Again, it's a whole independent subsystem that could be its own library.
The video showed that xz was a dependency of OpenSSH. It showed on screen, but never said aloud, that this was only because of systemd. Debian's and Red Hat's sshd [0] is started by systemd, and they patched in a call to the sd_notify() helper function (which simply sends a message to the $NOTIFY_SOCKET socket), just to inform systemd of the exact moment sshd is ready. That call pulls in the whole of libsystemd, which in turn pulls in the whole of liblzma.

Since the xz backdoor, that patched OpenSSH no longer calls the sd_notify() function; it carries its own code to connect to $NOTIFY_SOCKET. And the sd_notify manpage begrudgingly gives a code listing you can use to do the same, so if you're an independent program that merely wants to tell systemd you've started, you don't need to pull in the libsystemd kitchen sink. As it should've been in the first place.
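The protocol really is that small. A hedged Python sketch of what "connect to $NOTIFY_SOCKET yourself" amounts to (my own illustration, not the actual OpenSSH patch or the manpage listing):

```python
import os
import socket

def sd_notify(message: str) -> bool:
    """Send a status datagram to systemd's notification socket.

    One datagram to the AF_UNIX socket named in $NOTIFY_SOCKET.
    Returns False when not running under systemd (no socket
    configured) or when sending fails.
    """
    addr = os.environ.get("NOTIFY_SOCKET")
    if not addr:
        return False
    # A leading '@' names a Linux abstract-namespace socket, which is
    # encoded as a leading NUL byte at the syscall level.
    if addr.startswith("@"):
        addr = "\0" + addr[1:]
    try:
        with socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) as s:
            s.sendto(message.encode(), addr)
    except OSError:
        return False
    return True

# Typical use, at the exact moment the daemon is ready to serve:
sd_notify("READY=1")
```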
Is the real master hacker Lennart Poettering, for making sure his architectural choices didn't appear in this video?
[0]: as an aside, the systemd notification code is only in Debian, Red Hat et al. because OpenSSH is OpenBSD's fork of Tatu Ylönen's SSH, which went on to become proprietary software. systemd is Linux-only and will never support OpenBSD, so likewise OpenBSD doesn't include any lines of code in OpenSSH to support systemd. Come to think of it, "BSD" is another thing they don't mention in the script, despite mentioning the AT&T lawsuit (https://en.wikipedia.org/wiki/USL_v._BSDi)
The technical explanations are way too complex (even though they're "dumbed down" somewhat with the colour-mixing scenario): anyone who understands them will also already know how dependencies work and how Linux came to be.
It feels almost like it's made for people like my mum, but it will lose them almost immediately at the first mention of complex polynomials.
The actual weight of the situation kinda lands though, and that's important. It's really difficult to overstate how incredibly lucky we were to catch it, and how sophisticated the attack actually was.
I'm really sad that we will genuinely never know who was behind it, and anxious that such things are already in our systems.
(But also, my conspiratorially-inclined mind is quite entertained by the thought of some sort of parallel construction or tip from a TLA.)
Why are build scripts not operating in a clean directory, stripping away all test-related files?
Isn't this something we should start considering, given that it's all too easy to put arbitrary things in test files (you can just pretend stuff is "fuzzed" or "random" or "test vectors" and whatnot: there's always going to be room to hide mischief in test files)?
Like literally building, but only after having erased all test directories/files/data.
Or put it this way: how many backdoors are live right now that wouldn't be if every single build were done only after carefully deleting all the files that exist solely for tests?
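A minimal sketch of that idea, assuming conventional directory names (they're guesses; real projects differ, and a determined attacker could of course hide payloads elsewhere):

```python
import shutil
from pathlib import Path

# Hypothetical names for test directories; real projects vary.
TEST_DIRS = {"test", "tests", "testdata", "fuzz"}

def strip_tests(src: Path) -> list[Path]:
    """Delete test directories from a source tree before building,
    so opaque 'test fixtures' cannot feed the build itself."""
    doomed = [p for p in src.rglob("*")
              if p.is_dir() and p.name in TEST_DIRS]
    removed = []
    for p in doomed:
        if p.exists():  # may already be gone if a parent was removed
            shutil.rmtree(p)
            removed.append(p)
    return removed
```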