[1] "...Eran Hammer resigned from his role of lead author for the OAuth 2.0 project, withdrew from the IETF working group, and removed his name from the specification in July 2012. Hammer cited a conflict between web and enterprise cultures as his reason for leaving, noting that IETF is a community that is "all about enterprise use cases" and "not capable of simple". "What is now offered is a blueprint for an authorization protocol", he noted, "that is the enterprise way", providing a "whole new frontier to sell consulting services and integration solutions". In comparing OAuth 2.0 with OAuth 1.0, Hammer points out that it has become "more complex, less interoperable, less useful, more incomplete, and most importantly, less secure". He explains how architectural changes for 2.0 unbound tokens from clients, removed all signatures and cryptography at a protocol level and added expiring tokens (because tokens could not be revoked) while complicating the processing of authorization. Numerous items were left unspecified or unlimited in the specification because "as has been the nature of this working group, no issue is too small to get stuck on or leave open for each implementation to decide."
David Recordon later also removed his name from the specifications for unspecified reasons. Dick Hardt took over the editor role, and the framework was published in October 2012.
David Harris, author of the email client Pegasus Mail, has criticised OAuth 2.0 as "an absolute dog's breakfast", requiring developers to write custom modules specific to each service (Gmail, Microsoft Mail services, etc.), and to register specifically with them."
Almost everyone thinks of OAuth as the "three-legged redirect" flow: one site sends you to another, you say "Yes, I authorize", and it sends you back, and now that site can act as you on the other site.
But that's just the tip of the iceberg! That flow is called the "authorization code" grant because, implementation-wise, the authorizing site hands a special one-time authorization code to the requesting site, which can then exchange it server-to-server for the actual sensitive access credential.
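For the curious, that back-channel exchange is just a form POST. A minimal sketch in Python with the requests library; every URL and credential below is a made-up placeholder:

    import requests

    # Exchange the one-time authorization code for tokens, server-to-server.
    resp = requests.post(
        "https://auth.example.com/oauth/token",
        data={
            "grant_type": "authorization_code",
            "code": "SplxlOBeZQQYbYS6WxSbIA",  # the one-time code from the redirect
            "redirect_uri": "https://app.example.com/callback",
            "client_id": "my-client-id",
            "client_secret": "my-client-secret",  # stays on the server, never in a browser
        },
    )
    tokens = resp.json()  # typically access_token, token_type, expires_in, ...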
What if the human isn't in the loop at that moment? Then you have the "client credentials" grant. Or what if it's a limited-input device like a TV? Then you have the "device" grant.
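The client credentials grant is even simpler, since no user is present; a rough sketch, again with a hypothetical endpoint and credentials:

    import requests

    # Machine-to-machine: the client authenticates as itself, no user involved.
    resp = requests.post(
        "https://auth.example.com/oauth/token",
        data={"grant_type": "client_credentials", "scope": "reports:read"},
        auth=("my-client-id", "my-client-secret"),  # HTTP Basic client auth
    )
    access_token = resp.json()["access_token"]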
And what if the client can't securely store a client secret, because it's a single-page web app or a mobile application? Then it has to be a public client, and you can use PKCE or PAR for flow integrity.
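PKCE itself is just a hash commitment: the client invents a secret per request, sends its hash up front, and reveals the secret at exchange time. Something like this, per RFC 7636:

    import base64
    import hashlib
    import secrets

    # Per-request secret (code_verifier) and its S256 hash (code_challenge).
    code_verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(code_verifier.encode()).digest()
    code_challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()

    # The challenge rides along on the authorization request; the verifier is
    # revealed only at the code exchange, proving both came from the same client.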
What if you can't establish the clients up front? There's DCR (Dynamic Client Registration), which is all-important now with MCP. But then you either let unauthenticated requests in to create resources in your data stores, or you need to bootstrap it with some other form of authentication.
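For flavor, a DCR call per RFC 7591 looks roughly like the following; note that unless you gate it behind an initial access token, this is exactly the unauthenticated resource-creating request described above. Endpoint and metadata are hypothetical:

    import requests

    # Register a brand-new client at runtime.
    resp = requests.post(
        "https://auth.example.com/oauth/register",
        json={
            "client_name": "My MCP Client",
            "redirect_uris": ["https://app.example.com/callback"],
            "grant_types": ["authorization_code"],
            "token_endpoint_auth_method": "none",  # a public client
        },
    )
    registration = resp.json()  # typically includes the freshly issued client_id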
It's all just a sprawling behemoth of a framework, because it tries to do everything.
"At its core, OAuth for delegation is a standard way to do the following:
The first half exists to send, with consent, a multi-use secret to a known delegate.
The other half of OAuth details how the delegate can use that secret to make subsequent requests on behalf of the person that gave the consent"
This paragraph has a bunch of words that need defining (the word 'delegate' does not appear on the page until then) and a confusing use of 'first half' and 'the other half'... First half of what?
Surely it can be explained better than that?
Thanks, it did not.
OAuth and OpenID Connect are a denial of service attack on the brains of the humans who have to work with them.
I'm partial to this piece[1], which I helped write. It covers the various common modalities of OAuth/OIDC. (It's really hard to separate them, to be honest; they're often conflated.) It was discussed previously on HN[2].
1: https://fusionauth.io/articles/oauth/modern-guide-to-oauth
2: https://news.ycombinator.com/item?id=47100073
Author managed to simultaneously praise the question and avoid answering it at all.
I am seeing many fintech vendors move in this direction. The mutual clients want more granular control over access. Resource tokens are only valid for a few minutes in these new schemes. In most cases we're coming from a world where the same username and password was used to access things like bank cores for over a decade.
A classic explainer from almost a decade ago. This explains it from the point of view of the original problem it was designed to solve.
Spoiler alert: it is.
What I need is to understand why it is designed this way, and to see concrete examples of the use cases that motivate the design.
It's not "just another" explanation for how OAuth does, which was my immediate guess when reading the title.
However, I'm glad I opted to give it a chance; it's likely especially illuminating for the younger crowd who didn't get to experience the joys of the early web 2.0 days.
OAuth is just a process framework for doing authentication and authorization, so that a system doesn't need to build those mechanisms itself but can ask a server for them. More recently, that takes the form of a JWT with the permissions encoded within it.
It all boils down to how long your login token (some hash), or in OAuth's case your refresh token, can be used to request an access token (time-boxed access to the system). "Tokens" here are just cryptographic signatures or HMAC hashes of something, preferably with a nonce or client_id as a salt.
Traditional login with a username and password gives you a cookie (or session, or both) that identifies that login; the same goes for refresh tokens. The difference is that a refresh token is only authentication. For access, you request an access token from a resource server or API; that time-boxed access token is what lets you send "Authorization: Bearer <token>" to your resource.
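Concretely, the refresh-to-access hop plus the Bearer header could look like this; endpoints and credentials are placeholders:

    import requests

    # Step 1: trade the long-lived refresh token for a short-lived access token.
    resp = requests.post(
        "https://auth.example.com/oauth/token",
        data={"grant_type": "refresh_token", "refresh_token": "stored-refresh-token"},
        auth=("my-client-id", "my-client-secret"),
    )
    access_token = resp.json()["access_token"]

    # Step 2: present the access token to the resource server.
    api = requests.get(
        "https://api.example.com/v1/accounts",
        headers={"Authorization": f"Bearer {access_token}"},
    )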
From a server perspective, your resource server could have just proxied the username-and-password auth to get the refresh token, or (what I like to do) checked the signature of the refresh token to verify that it came from me. If it did, I trust it, as long as the user isn't in the ban table. If the signature is good and they aren't banned, issue them a time-boxed access token for 24 hours.
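Sketched with the PyJWT library, with the ban check as a hypothetical helper, that flow is roughly:

    import time
    import jwt  # PyJWT

    def issue_access_token(refresh_token, public_key, signing_key):
        # Verify the refresh token's signature and expiry (PyJWT checks exp).
        claims = jwt.decode(refresh_token, public_key, algorithms=["RS256"])
        if user_is_banned(claims["sub"]):  # hypothetical ban-table lookup
            raise PermissionError("banned user")
        # Mint a time-boxed access token, good for 24 hours.
        payload = {"sub": claims["sub"], "exp": int(time.time()) + 24 * 3600}
        return jwt.encode(payload, signing_key, algorithm="RS256")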
If you fail to grasp the JWT aspect, I suggest you learn your way around the RSA/PKI/SHA and HMAC crypto libraries in your programming language of choice. Knowing how to bcrypt is one thing; knowing how to encrypt/sign/verify/decrypt is The Way.
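The HMAC half of that advice fits in a few lines of stdlib Python:

    import hashlib
    import hmac

    SECRET = b"server-side-secret"  # hypothetical; keep out of source control

    def sign(value: str) -> str:
        return hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()

    def verify(value: str, signature: str) -> bool:
        # Constant-time comparison to dodge timing side channels.
        return hmac.compare_digest(sign(value), signature)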
(Sorry to the greybeards in the back who know all this already and probably wrote the RFC.)
I've seen so many integrations use OAuth where it wasn't a good fit or where the spec wasn't followed. It always results in an abominable, insecure mess.
Maybe it's a know-the-rules-before-you-break-them thing, but I've found that designing custom auth integrations from a UX-first perspective results in amazing features. It's rare that both parties are willing to put in the effort, though. Usually people try to shoehorn the use case into an existing OAuth platform.
The main selling point of OAuth is scaling auth and authz to thousands of clients and use cases.