This isn't strictly accurate: when we designed Trusted Publishing for PyPI, we designed it to be generic across OIDC IdPs (typically CI providers), and explicitly included an accommodation for creating new projects via Trusted Publishing (we called it "pending" publishers[1]). The latter is something that not all subsequent adopters of the Trusted Publishing technique have adopted, which is IMO both unfortunate and understandable (since it's a complication over the data model/assumptions around package existence).
I think a lot of the pains here are self-inflicted on GitHub's part: deciding to remove normal API credentials entirely strikes me as extremely aggressive, and is completely unrelated to implementing Trusted Publishing. Combining the two in the same campaign has made things unnecessarily confusing for users and integrators, it seems.
[1]: https://docs.pypi.org/trusted-publishers/creating-a-project-...
I had 25 million downloads on NPM last year. Not a huge amount compared to the big libs, but OTOH, people actually use my stuff. For this I have received exactly $0 (if they were Spotify or YouTube streams I would realistically be looking at ~$100,000).
I propose that we have two NPMs. A non-commercial NPM that is 100% use at your own risk, and a commercial NPM that has various guarantees that authors and maintainers are paid to uphold.
We were left with a tough choice of moving to Trusted Publishers or allowing a few team members to publish locally with 2FA. We decided on Trusted Publishers because we've had an automated process with review steps for years, but we understand there's still a chance of a hack, so we're just extremely cautious with any PRs right now. Turning on Trusted Publishers was a huge pain with so many packages.
The real thing we want for publishing is for us to be able to continue using our CI-based publishing setup, with Trusted Publishers, but with a human-in-the-loop 2FA step.
But that's only part of a complete solution. HITL is only guaranteed to slow down malicious code propagating. It doesn't actually protect our project against compromised dependencies, and doesn't really help prevent us from spreading them. All of that is still a manual responsibility of the humans. We need tools to lock down and analyze our dependencies better, and tools to analyze our own packages before publishing. I also want better tools for analyzing and sandboxing 3rd party PRs before running CI. Right now we have HITL there, but we have to manually investigate each PR before running tests.
The irony is that trusted publishing pushes everyone toward GitHub Actions, which centralizes risk. If GHA gets compromised, the blast radius is enormous. Meanwhile solo devs who publish from their local machine with 2FA are arguably more secure (smaller attack surface, human in the loop) but are being pushed toward automation.
What I'd like to see: a middle ground where trusted publishing works but requires a 2FA confirmation before the publish actually goes through. Keep the automation, keep the human gate. Best of both worlds.
The staged publishing approach mentioned in the article is a good step - at least you'd catch malicious code before it hits everyone's node_modules.
In this scenario, if a dependency were to add a "postinstall" script because it was compromised, it would not execute, and the user can review whether it should, greatly reducing the attack surface.
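A rough approximation of this is already possible with npm's own ignore-scripts setting, which skips lifecycle scripts (including postinstall) during installs. This is my own sketch of how you might configure it today, not something the parent comment describes:

```
# .npmrc (project or user level) — skip preinstall/install/postinstall
# scripts for every dependency; review and run them manually if needed
ignore-scripts=true
```

The same behavior is available per-invocation with `npm install --ignore-scripts`. The trade-off is that some legitimate packages rely on install scripts (e.g. to build native addons), so those would need explicit handling.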
The only tricky bit would be to disallow approving your own pull request when using trusted publishing. That should fall back to requiring 2FA.
The software is too fine-grained. Too many (way too many) packages from small projects or obscure single authors doing way too many things that are being picked up for one trivial feature. That's just never going to work. If you don't know who's writing your software the answer will always end up being "Your Enemies" at some point.
And the solution is to stop the madness. Conglomerate the development. No more tiny things. Use big packages[1] from projects with recognized governance. Audit their releases and inclusion in the repository from a separate project with its own validation and testing. No more letting the bad guys push a button to publish.
Which is to say: this needs to be Debian. Or some other Linux distro. But really the best thing is for the JS community (PyPI and Cargo are dancing on the edge of madness too) to abandon its mistake and move everything into a bunch of Debian packages. Won't happen, but it's the solution nonetheless.
[1] cf. the stuff done under the Apache banner, or C++ Boost, etc.
Simple enough: things like npm and pip reinvent a naming authority with no cost attached (so it's weak to Sybil attacks), and for not much gain. What do you get in exchange? You create equality by letting everyone contribute their wonderful packages, even those who don't have $15/yr? I'm sorry, was the previous leading internet naming mechanism not good and decentralized enough for you?
Java's package naming system is great in design. The biggest dependency vuln I can think of in Java was not a supply-chain-specific vuln, but rather a general weakness of a library (log4j). But maybe someone with more Java experience can point to some disadvantage of the Java system that explains why we are not all copying it.
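For readers unfamiliar with it, the Java convention ties a package's group identifier to a reverse-DNS name the publisher controls, and Maven Central requires publishers to prove ownership of that namespace. A typical dependency declaration (the version shown here is just an illustrative released one, not a recommendation):

```
<!-- Maven coordinates: the groupId is a reverse-DNS name that the
     publisher must demonstrate control of before publishing -->
<dependency>
  <groupId>org.apache.logging.log4j</groupId>
  <artifactId>log4j-core</artifactId>
  <version>2.17.1</version>
</dependency>
```

Because the namespace is anchored to DNS, an attacker can't register a lookalike group under someone else's domain the way they can squat a flat npm or PyPI package name.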
I understand things need to be safe, but this is a recklessly fast transition.
Go did a lot wrong. It was just awful before they added Go modules. But it's puzzling to me why, as a community and ecosystem, its 3rd party dependencies seem so much less bloated. Part of it, I think, is that the standard library is pretty expansive. Part of it is things like golang.org/x. But there are also a lot of corporate maintainers, and I feel like part of that is because packages are namespaced to the repository, which itself is namespaced to ownership. Technically that isn't even a requirement, but the community adopted it pretty evenly, and it makes me wonder why others haven't.