This is wrong to an extent.
This plan works by letting software supply chain companies find security issues in new releases. Many security companies have automated scanners for popular and less popular libraries, with manual triggers for those libraries which are not in the top N.
Their incentive is to be the first to publish a blog post about a cool new attack that they discovered and that their solution can prevent.
If you instead decide that the Upload Queue can't be circumvented, you're now increasing the window during which a patch for a CVE is visible but not yet installable. Even if the CVE disclosure is not made public, the patch sitting in the Upload Queue makes the underlying vulnerability far more discoverable.
As best I can tell, neither of these fairly obvious issues is covered in this blog post, but they clearly need to be addressed for Upload Queues to be a good alternative.
--
Separately, at least with NPM, you can define a cooldown in your global .npmrc, so the argument that cooldowns need to be implemented per project is, for at least one (very) common package manager, patently untrue.
    # Wait 7 days before installing
    npm config set min-release-age 7
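That command just writes the corresponding line into your global config file, so you can also set it by hand; a minimal sketch, assuming the same min-release-age key the command above uses:

    # ~/.npmrc
    min-release-age=7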
We need to revitalize research into capabilities-based security on consumer OSs, which AFAIK is the only thing that solves this problem. (Web browsers - literally user “agents” - solve this problem with capabilities too: webapps get explicit access to resources, no ambient authority to files, etc.)
Solving this problem will only become more pressing as we have more agents acting on our behalf.
Dependency cooldowns are how you can improve your security on an individual level. Using them does not make you a free rider any more than using Debian instead of Ubuntu instead of Arch does. Different people/companies/machines have different levels of acceptable risk - cooldowns let you tune that to your use case. Using open source software does not come with a contract or responsibility for free, implicit pentesting.
Upload queues are how a package manager/registry can collectively improve security for its users. I cannot implement an upload queue for just me - the value comes from it being done in a centralized way.
I'm in favor of both, though hopefully with upload queues the broader practice of long dependency cooldowns would become more limited to security-focused applications.
Servants! Just do your open source magic, we're impatient! Ah, and thanks for all the code - our hungry, hungry LLMs were starving.
Dependency cooldowns, like staged update rollouts, mean less brittleness / more robustness in that not every part of society is hit at once. And the fact that cooldowns are not evenly distributed is a good thing. Early adopters and vibe coders take more chances, banks should take less.
But yeah, upload queues also make sense. We should have both!
The main reason for the cooldown is so security companies can find the issues, not that unwitting victims will find them.
One problem with a central cooldown is that it removes the option of consuming a package immediately, and some people might consider that a problem.
I'd argue for intentional dependency updates. It just so happens that an update is identified in one sprint and planned for the next, which gives the team a built-in delay.
First of all, sometimes you can reject the dependency update. Maybe there is no benefit in updating. Maybe there are no important security fixes brought by an update. Maybe it breaks the app in one way or another (and yes, even minor versions do that).
After you know why you want to update the dependency, you can start testing. In an ideal world, somebody would look at the diff before applying it to production. I know how this works in the real world, don't worry. But you have the option of catching something. If you automatically update to the newest version, you don't have that option.
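If you do want that option, npm can show the diff directly; a quick sketch, where the package name and versions are placeholders:

    # review what actually changed between the version you run and the candidate
    npm diff --diff=left-pad@1.3.0 --diff=left-pad@1.4.0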
And again, all these rituals give you time - maybe someone will identify attacks faster. If you perform these rituals, maybe that someone will be you. Of course, it is better for the business to skip this effort because it saves time and money.
It seems to me that many organizations are relying on other companies to do their auditing in any case; why not just admit that and rely on it explicitly? Choose who you trust and accept their audits. Organizations can perform, or even outsource, their own auditing and publish that.
That means there's an incentivised slot in the ecosystem for a group of package consumers who are motivated to find security problems quickly. It's not all on the wider development community.
They are already complex beasts of software, extremely important to their ecosystems, and not always well funded. Adding all this extra complexity - official bypasses (for security fixes), monitoring APIs (so a new version can be reviewed while it sits in the queue), and more - is not cheap.
And if somehow, they get the funding to do this, will they also get the funding for the maintenance in the long term?
I don't think the benefits here (which amount to explicitly modeling the cooldown) are enough to offset the downsides.
However, a randomized cooldown may be a good idea. To borrow a pandemic term, it flattens the curve.
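A minimal sketch of that idea, reusing the min-release-age knob mentioned elsewhere in the thread (I'm taking that key name on faith from the comment above):

    # pick a random cooldown between 7 and 21 days for this machine,
    # so consumers don't all pick up a new release on the same day
    days=$(( 7 + RANDOM % 15 ))
    npm config set min-release-age "$days"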
Avg tech company: "that's perfect, we love to be free riders."
Ever decided to not buy some new technology or video game or product right away and to wait and see if it’s worth it? You’re an immoral freeloader benefiting from the suffering of others who bought it right away.
If you're not doing the work yourself, it makes sense to give the people who review and test their dependencies some time to do their work.
Is the idea I’d point my security scanner at preview.registry.npmjs.org/ and npmjs.org would wait 7 days before the package would publish on the main registry?
We used to focus more on finding issues before a new release, and while it remains common to find bugs in older ones, not having enough users should not become a crutch for skipping testing.
> (dependency cooldowns) don't address the core issue: publishing and distribution are different things and it's not clear why they have to be coupled together.
Aside from some edge cases in large projects, the core issue remains code quality and maintainability practices. The rush to push several patches per day is insane to me, especially in the current AI ecosystem.
Breaking changes used to get a sufficient transition period - see Python 2 to 3 - while today they are made on a whim, even by SaaS folks who should provide better DX for their customers. Regardless, open-source/source-available projects now expect more from their users, and I wonder how much of that remains reasonable.
Which, honestly, I think it is fair to say that a lot of supply chains lull people into a false sense of security about what they do. Your supply chain for groceries puts a lot of effort into making itself safe. Your supply chain for software dependencies is run more like a playground.
Think about how much cumulative human suffering must be experienced to bring you stable and effective products like this. Why hit the reset button right when things start getting good every time?
All else being equal, I'd rather the people who desire the new features be the earlier-adopters, because they're more likely to be the ones pushing for changes and because they're more likely to be watching what happens.
The real owner will (hopefully) notice when a malicious version is published.
If you use a cooldown then it gives the real owner of the account enough time to report the hack and get the malicious version taken down.
Have a normal path: days, a week or more (a month!). Have a selection of fast paths with much shorter times - days or even hours. Exceptions require higher trust; indicators like money, reputation, or history could be useful signals, even if only as part of a paper trail. Treat exceptions as acceptable but as requiring good reasons and an explanation. This means a CVE fix from someone with high reputation could go through faster. Exceptions don't reduce the need for scrutiny, but they do create clarity about the alternative chosen, mainly because someone had to justify moving away from the normal path. That's valuable in itself.
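A minimal sketch of how such a tiered policy might decide hold times; the signals and thresholds here are made up for illustration:

    # hypothetical: decide how many hours an upload waits before distribution
    release_delay_hours() {
      local reputation="$1" security_fix="$2" justification="$3"
      if [ "$security_fix" = yes ] && [ "$reputation" = high ] && [ -n "$justification" ]; then
        echo 24               # fast path: trusted publisher with a documented reason
      else
        echo $(( 7 * 24 ))    # normal path: a week in the queue
      fi
    }

    release_delay_hours high yes "fixes a known CVE"   # prints 24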
There's no perfection here. Credit cards and credentials get stolen. Reputation drifts since people change for all kinds of reasons.
Queues buy time. Time to find out. Time to back out.
But as others have noted, people having different cooldown settings means a nice staggered rollout.
I think the key is to differentiate testing from deployment: you don't need to run bleeding edge everywhere to find bugs and contribute. Even running nightly releases on one production instance will surface real problems.
I don't know if one of the touted benefits is really real - sometimes you need to be able to jump changes to the front of the queue and get them out ASAP.
Hacked credentials will definitely use that path. It gives you another risk signal, sure, but the power sticks around.
Good thing the internet is here to lecture me about all the secret obligations I have incurred by creating and using open source software!
Upload queues are better than cooldowns
I almost didn't read it because I wasn't interested in a rant. This is a genuinely good idea though so I'm glad I did.
Alas, I did click through so perhaps the title is more effective than my sentiments.
- One idea is for projects not to update each dep just X hours after release, but on their own cycles, every N weeks or such. Someone still gets bit first, of course, but not everyone at once, and for those doing it, any upgrade-related testing or other work also ends up conveniently batched (see the sketch after this list).
- Developers legitimately vary in how much they value getting the newest and greatest vs. minimizing risk. Similar logic to some people taking beta versions of software. A brand new or hobby project might take the latest version of something; a big project might upgrade occasionally and apply a strict cooldown. For users' sake, there is value in the projects that do get bit not being the widely used ones!
- Time (independent of usage) does catch some problems. A developer realizes they were phished and reports, for example, or the issue is caught by someone looking at a repo or commit stream.
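A minimal sketch of that batching, assuming a Node project whose test suite gates the upgrade (paths and cadence are placeholders):

    # batch-upgrade.sh - hypothetical monthly upgrade batch, run from cron or CI
    set -e
    npm update    # upgrade within existing semver ranges, on our own cycle
    npm test      # the upgrade-related testing, conveniently batched
    git commit -am "chore: monthly dependency batch"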
As I lamented in the other post, it's unfortunate that merely using an upgraded package for a test run often exposes a bunch of a project's keys and so on. There are more angles to attack this from than solely when to upgrade packages.
Here is one example:
https://www.nuget.org/packages/System.CommandLine#versions-b...
2.0.6 was released less than a day ago. How long will you wait? I'd argue any wait is unwarranted.
It sounds nice to people because we are used to thinking in terms of Microsoft Windows and Microsoft SQL Server releases, where people wait for months after a new version is released to update. Except companies actually pay for these! So this kind of illogical behavior - learned helplessness, I would argue - that happens with flagship Microsoft product releases is somehow what we are now advocating as the default everywhere, which is a terrible idea.
Dependency cooldowns should NOT be the default. I don't know what a proper solution is but I know this isn't it.
But you’re not a “free-rider” if you intentionally let others leap before you. You’re just being cautious, which is rational behavior and should be baked into assumptions about how any ecosystem actually works.
"Free riding" is not the right term here. It's more a case of being the angels in the saying "fools rush in where angels fear to tread".
If the industry as a whole were mature (in the sense of responsibility, not age), upgrades would be tested in offline environments and rolled out once they pass that process.
Of course, not everyone has the resources for that, so there's always going to be some "free riding" in that sense.
That dilutes the term, though. Different organizations have different tolerance for risk, different requirements for running the latest stuff, different resources. There's always going to be asymmetry there. This isn't free riding.
No, nobody _has to_ implement it, and if only one package manager did, then users who wanted cooldowns could migrate to it.
Anyone on the IT Ops side of things knows the adage that you don't run ".0" software. You wait a while to let the kinks get worked out by those who can afford the risk of downtime, and to let vendors find and fix bugs in new software on their own.
Are conservative, uptime-oriented organizations "free-riders" for waiting to install new software on critical systems? Is that a sin, as this implies?
The answer is no. It's certainly a quandary - someone has to run it first. But a little time to let it bake in labs and low-risk environments is worth it.
But I get the point: it's a numbers game, so any and all usage can help catch issues.
Snyk and socket.dev take money for the pain and suffering...
Me choosing to NOT download something places NO burden on anyone else. There is no logic by which you'll convince me otherwise.
You find attacks via cross-organization auditing, like you do in Linux distros, and this doesn't do that.
Early participation and beta programs are outsourcing careful engineering via making everybody else guinea pigs. If we want to sling around accusations of free-riding (really?!), you're slacking on testing and free-riding on your early users.
But alas.
Users who want to take the extra precaution of waiting an additional period of time must manually configure this in their tooling.
This practice has been a thing in the sysadmin community for years and years - most sysadmins know that you never install Windows updates on the day they release.
Having a step before publication means that it's essentially opt-in pre-release software, and that comes with baggage - I have zero doubts that many entities who download packages to scan for malware explicitly exclude pre-release software, or don't discover it at all until it's released through normal channels.