Linux devs keep making that point, but I really don't understand why they expect the world to embrace that thinking. You don't need to care about the vast majority of software defects in Linux, save for the once-in-a-decade filesystem corruption bug. In fact, there is an incentive not to upgrade when things are working, because it takes effort to familiarize yourself with new features, decide what should be enabled and what should be disabled, etc. And while the Linux kernel takes compatibility seriously, most distros do not and introduce compatibility-breaking changes with regularity. Binary compatibility is non-existent. Source compatibility is a crapshoot.
In contrast, you absolutely need to care about security bugs that allow people to run code on your system. So of course people want to treat security bugs differently from everything else and prioritize them.
Actually, some software is running the water-heater/heat-pump system in my basement. It has a small blue backlit screen, keeps logs of consumed electricity and produced heat, and can draw small histograms. Of course there is a smart option to make it internet-connected. That's the kind of functionality I'm glad is disabled by default and not required for the thing to operate. If possible, I'll never upgrade it. "Release, then go back to the cave" definitely has its place for many actual physical products in the world.
I deal with enough WTF software security in my day job. Sparing myself the cognitive load of some appliance being turned into a brick, because the company that produced it or some script kiddy on AI steroids decided that was desirable, means more time to explore whatever else the cosmos allows.
Was software made before 2000 better? And, if so, was it because of better testing or lower complexity?
The problem is that the very same tools, I expect, are behind the supply chain attacks that seem to be particularly notorious recently. No matter where you turn, there's an edge to cut you on that one.
Hopefully these same tools will also help catch security bugs at the point they're written. Maybe one day we'll reach a point where the discovery of new, live vulnerabilities is extremely rare?
There's no way the AI is understanding million-LoC codebases a priori. We've tried that already; it failed. What it's doing now is setting up its own extremely powerful test harnesses, gathering the information it needs, and testing efficiently.
Sure, its semantic search is already strong, but the real lesson that we've learned from 2025 is that tooling is way more powerful.
That's cool! As someone who's dabbled in kernel dev for his job, I've always wanted to learn how kernel devs properly test stuff reliably, but it seemed hard. Like with real, variable hardware, not just manual testing shit.
Honestly, AI has only helped me become a better SWE because no one else has the time or patience to teach me.
What's the saying? Given enough eyeballs, all bugs are shallow? Well, here are some more eyes.
It's hard for me to imagine how this wouldn't be true. This isn't the "new normal", everyone is just running it into the ground and wringing every drop they can out of it right now.
It would be interesting to "backtest" how much higher the rate of vulnerability discovery would have been if all these new vulnerabilities were discovered in near real time as they were created, since that would be more predictive of the "new normal", in my opinion. I suspect it's not very significant: we're flushing a 20+ year backlog, and generally the rate at which vulnerabilities are created is lower today.
Seems supported by this as well: https://www.first.org/blog/20260211-vulnerability-forecast-2...
Interesting that it's been higher than forecast since 2023. Personally I'd expect that trend to continue given that LLMs both increase bugs written as well as bugs discovered.
For me, this seems like something the whole dev community should push for.
Let's bring a bit of nuance and distinguish mindless drivel (e.g. LinkedIn influencer posts, spammed issues where LLMs make mistakes) from using LLMs to find and build useful things.
Then again, I'm a known crank and aggressive cynic, but you never really see any gathered data backing these points up.
Oh my sweet summer child.
This is some seriously delusional cope from someone who drank the entire jug of kool-aid.
I'd love to be proven wrong, but the current trajectory is plain as day from current outcomes. Everything is getting worse, everyone is getting overwhelmed, we're under attack more than ever, the attacks are getting substantially more sophisticated, and the blast radius is much bigger.