So the upshot of vulnerabilities decaying exponentially is that the focus should be on net-new code, and that vast, indiscriminate RiiR ("Rewrite it in Rust") projects are a poor use of resources, even for advancing the goal of maximal memory safety. It's notable, if not fortuitous, that the easiest strategy, and the one recommended by pragmatic Rust experts, is also the strategy the data says best minimizes memory vulnerabilities.
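To make that intuition concrete, here's a toy sketch (my numbers, not the article's; the half-life is a pure placeholder) of why exponential decay concentrates residual risk in the newest code:

```python
# Toy sketch, not the article's model: if vulnerability density decays
# exponentially with code age, the newest code holds most residual risk.
HALF_LIFE_YEARS = 2.5  # placeholder half-life, purely illustrative

def residual_density(age_years, initial_density=1.0):
    """Fraction of the original vulnerability density left at a given age."""
    return initial_density * 0.5 ** (age_years / HALF_LIFE_YEARS)

# Ten equal-sized cohorts of code, one written per year over a decade.
densities = [residual_density(age) for age in range(10)]

# Share of the remaining vulnerabilities sitting in the two newest cohorts:
newest_share = sum(densities[:2]) / sum(densities)
print(f"newest 2 of 10 cohorts: {newest_share:.0%} of residual risk")
```

Under this made-up half-life, roughly half the residual risk sits in the two newest yearly cohorts, which is the whole argument for focusing effort on net-new code rather than blanket rewrites.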
> The Android team has observed that the rollback rate of Rust changes is less than half that of C++.
Wow!
It stands to reason, then, that it would be even better for security to stop adding new features when they aren't absolutely necessary. Windows LTSC is presumably the most secure version of Windows.
There is more than one possible and reasonable explanation for this correlation:
1. New code often relates to new features, and vulnerability hunters focus on new features.
2. Older code has been through more real-world usage, which exercises the edge cases where memory vulnerabilities reside.
I’m just not comfortable saying that new code causes memory vulnerabilities and that vulnerabilities have a rapidly decaying half-life. That may be true in sheer numbers, but it doesn’t seem to be true in impact, thinking back to high-impact vulnerabilities in OSS like the Heartbleed bug and the cache-invalidation bugs in CPUs.
> The Android team has observed that the rollback rate of Rust changes is less than half that of C++.
I've been writing high-scale production code in one language or another for 20 years. But when I found Rust in 2016, I knew this was the one. I was going to double down on it. I got Klabnik and Carol's book literally the same day. Still have my dead-tree copy.
It's honestly re-invigorated my love for programming.
Amazing, I've never seen this argument used to support shift-left security guardrails, but it's great. Especially for those with larger legacy codebases who might otherwise say "why bother, we're never going to benefit from memory safety on our 100M lines of C++."
I think it also implies that any lightweight vulnerability detection has a disproportionate benefit -- even if it only looks at new code and dependencies rather than the backlog.
It's far more common to look at recent commit logs than it is to look at some library that hasn't changed for 20 years.
So if this blog post describes the 4th generation, perhaps the 5th generation looks something like Lockdown Mode for iOS. Let users who are concerned with security check a box that improves their security, in exchange for decreased performance. The ideal checkbox detects and captures any attack, perhaps through some sort of virtualization, then sends it to the security team for analysis. This creates deterrence for the attacker. They don't want to burn a scarce vulnerability if the user happens to have that security box checked. And many high-value targets will check the box.
Herd immunity, but for software vulnerabilities instead of biological pathogens.
Security-aware users will also tend to be privacy-aware. So instead of passively phoning home for all user activity, give the user an alert if an attack was detected. Show them a few KB of anomalous network activity or whatever, which should be sufficient for a security team to reconstruct the attack. Get the user to sign off before that data gets shared.
The reduction of memory safety bugs to a projected 36 in 2024 for Android is extremely impressive.
...
> In the final year of our simulation, despite the growth in memory-unsafe code, the number of memory safety vulnerabilities drops significantly, a seemingly counterintuitive result [...]
Why would this be counterintuitive? If you're only touching the memory-unsafe code to fix bugs, it seems obvious that the number of memory-safety bugs will go down.
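A toy model makes the point (all numbers invented here, not taken from the post): feature work goes into the memory-safe language, so new vulnerabilities track the shrinking stream of *new* unsafe code, not the growing total.

```python
# Toy model with invented numbers: unsafe code grows only via maintenance
# churn, and new memory-safety vulnerabilities track new unsafe code.
VULNS_PER_KLOC_NEW_UNSAFE = 1.0  # hypothetical defect rate for fresh unsafe code

unsafe_kloc = 1000.0
new_vulns_by_year = []
for year in range(5):
    new_unsafe_kloc = 50.0 * 0.7 ** year  # bug-fix churn shrinks each year
    unsafe_kloc += new_unsafe_kloc        # total unsafe codebase still grows...
    new_vulns_by_year.append(new_unsafe_kloc * VULNS_PER_KLOC_NEW_UNSAFE)

# ...while new memory-safety vulnerabilities fall year over year.
print(round(unsafe_kloc, 1), [round(v, 1) for v in new_vulns_by_year])
```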
Am I missing something?
They are eventually forced to transition to a new language, which makes the memory safety bugs moot, without addressing the fact that they're still sub-par, or why they were to begin with, why they didn't use the memory-safe functions, or why we let them ship that code in the first place.
They go on to write more sub-par code, with more avoidable security errors; they're just not memory-safety related anymore. And the attackers shift their focus to a different way in.
Meanwhile, nobody talks about the elephant in the room: that we were, and still are, completely fine with people writing code that is shitty. That we allow people to continuously use the wrong methods, which lead to completely avoidable security holes. Security holes like injection attacks, which now make up 40% of all CVEs, while memory safety makes up only 25%.
Could we have focused on a default solution for the bigger class of security holes? Yes. Did we? No. Why? Because none of this is about security. Programmers just like new toys to play with. Security is a red herring used to justify letting people keep writing shitty code and playing with new toys.
Security will continue to be bad, because we are not addressing the way we write software. Rather than this one big class of bugs, we will just have the million smaller ones to deal with. And it'll actually get harder to deal with them all, because we won't have the "memory safety" bogeyman to point at anymore.