- The API returns JSON
- CRUD actions are mapped to POST/GET/PUT/DELETE
- The team constantly bikesheds over correct status codes and at least a few are used contrary to the HTTP spec
- There's a decent chance listing endpoints were changed to POST to support complex filters
Like Agile, CI, or DevOps, you can insist on the original definition or submit to semantic diffusion and use the term as it is commonly understood.
The vision of an API that is self-discoverable and works with a generic client is not practical in most cases. I imagine the AWS dashboard, with its multitude of services, might have some generic UI code that handles those services without service-specific logic, but I doubt even that.
Fielding's paper doesn't provide a complete recipe for building self-discoverable APIs. It describes an architecture, but the details of how clients should actually discover endpoints and determine what those endpoints do are left out. To make a truly discoverable API you need to specify a protocol for endpoint discovery, operation descriptions, help messages, etc. Then you need clients that understand your specification, so it is not really a generic client anymore. If your service is the only one that implements this specification, you have made a lot of extra effort to end up with the same solution that non-REST services use: the service provides an API plus JS code (or a command-line client) to work with that API, but there is no client code reuse at all.
I also think that good UX is not compatible with REST goals. From a user perspective, app-specific code can provide better UX than generic code that can discover endpoints and provide UI for any app. Of course, UI elements can be standardized and described in some languages (remember XUL?), so UI can adapt to app requirements. But the most flexible way for such standardization is to provide a language like JavaScript that is responsible for building UI.
Most web APIs are not designed with this use-case in mind. They're designed to facilitate web apps that are much more specific in what they're trying to present to the user. This is both deliberate and valuable; app creators need to be able to control the presentation to achieve their apps' goals.
REST API design is for use-cases where the users should have control over how they interact with the resources provided by the API. Some examples that should be using REST API design:
- Government portals for publicly accessible information, like legal codes, weather reports, or property records
- Government portals for filing forms and other interactions
- Open data initiatives like Wikipedia and OpenStreetMap
Considering these examples, it makes sense that policing of what "REST" means comes from the more academically minded, while the detractors of the definition are typically app developers trying to create a very specific user experience. The solution is easy: just don't call it REST unless it actually is.

One additional point I would add is that making use of the RESTful/HATEOAS pattern (in the original sense) requires a conforming client to make the juice worth the squeeze:
It "was perceived as" a barrier because it is a barrier. It "felt easier" because it is easier. The by-the-book REST principles aren't a good cost-benefit tradeoff for common cases.
It is like saying that your microwave should just have one button that you press to display a menu of "set timer", "cook", "defrost", etc., and then one other button you use to select from the menu, and then when you choose one it shows another menu of what power level and then another for what time, etc. It's more cumbersome than just having some built-in buttons and learning what they do.
I actually own a device that works in that one-button way. It's an OBD engine code reader. It only has two buttons, basically "next" and "select" and everything is menus. Even for a use case that basically only has two operations ("read the codes" and "clear a code"), it is noticeably cumbersome.
Also, the fact that people still suggest it's indispensable to read Fielding's dissertation is the kind of thing that should give everyone pause. If the ideas are good there should be many alternative statements for general audiences or different perspectives. No one says that you don't truly understand physics unless you read Newton's Principia.
REST and HATEOAS are beneficial when the consumer is meant to be a third party that doesn't directly own the back end. The usual example is a plain old HTML page; the end user of that API is the person using a browser. MCP is a more recent example: that protocol is only needed because they want agents talking to APIs they don't own, and need a solution for discoverability and interpretability in a sea of JSON-RPC APIs.
When the API consumer is a frontend app written specifically for that backend, the benefits of REST often just don't outweigh the costs. It takes effort to design a more generic, better-documented and specified API. While I don't like using tools like tRPC in production, it's hugely useful for me when prototyping, for much the same reason: I'm building both ends of the app, and it's faster to ignore separation of concerns.
A client application that doesn't have any knowledge about what actions are going to be possible with a resource, instead rendering them dynamically based on the API responses, is going to make them all look the same.
So RESTful APIs as described in the article aren't useful for the most common use case of Web APIs, implementing frontend UIs.
Is anyone using it? Anywhere?
What kind of magical client can make use of an auto-discoverable API? And why does this client have no prior knowledge of the server they are talking to?
The world of programming, just like the real world, has a lot of misguided doctrines that looked really good on paper, but not on application.
For example:
"_links": {
....
"cancel": { "href": "/orders/123/cancel", "method": "POST" }
}
Why "POST"? And what POST do you send? A bare POST with no data, or one with parameters in its body?
What if you also want to GET the status of cancellation? Change the type of `method` to an array so you can `"method": ["POST", "GET"]`?
What if you want to cancel the cancellation? Do you do `POST /orders/123/cancel/cancel HTTP/...`, or `DELETE /orders/123/cancel HTTP/...`?
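The hypermedia answer to these questions is that the client shouldn't be deciding any of this: it should look up the action from the response at runtime. A minimal sketch of that lookup, assuming a hypothetical `_links` payload shape like the one above (this shape is illustrative, not a standard):

```python
# Sketch of a client that discovers actions from a hypothetical
# "_links" payload at runtime instead of hardcoding URL templates.

def find_action(resource: dict, rel: str):
    """Return (href, method) for a named link relation, or None."""
    link = resource.get("_links", {}).get(rel)
    if link is None:
        return None  # server no longer advertises this action
    return link["href"], link.get("method", "GET")

order = {
    "id": 123,
    "status": "pending",
    "_links": {
        "self":   {"href": "/orders/123"},
        "cancel": {"href": "/orders/123/cancel", "method": "POST"},
    },
}

find_action(order, "cancel")  # -> ("/orders/123/cancel", "POST")
find_action(order, "refund")  # -> None: action not currently offered
```

Note that this sidesteps only the "where do I send it" question; what body the POST expects is still out-of-band information, which is exactly the complaint above.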
So, people adapt, turning an originally very pure and "based" standard into something they can actually use. After all, all of these things are meant to be productive, rather than ideological.
I actually think this is where the problem lies in the real world. One of the most useful features of a JSON schema is the "additionalProperties" keyword. If applied to the "_links" subschema we're back to the original problem of "out of band" information defining the API.
I just don't see what the big deal is if we have more robust ways of serving the docs somewhere else outside of the JSON response. Would it be equivalent if the only URL in "_links" that I ever populate is a link to the JSONified Swagger docs for the "self" path for the client to consume? What's the point in even having "_links" then? How insanely bloated would that client have to be to consume something that complicated? The templates in Swagger are way more information dense and dynamic than just telling you what path and method to use. There's often a lot more for the client to handle than just CRUD links and there exists no JSON schema that could be consistent across all parts of the API.
Not sure I agree with this. All it does is move the coupling problem around. A client that doesn't understand where to find a URL in a document (or even which URLs are available for what purpose within that document) is just as bad as a client that assumes the wrong URL structure.
At some point, the client of an API needs to understand the semantics of what that API provides and how/where it provides them. Moving that from a URL hierarchy to a document structure doesn't add a huge amount of value. (Particularly in a world where essentially all server APIs are defined in terms of URL patterns routing to handlers. This is explicit, hardcoded encouragement to think in a style opposed to the HATEOAS philosophy.)
I also tend to think that the widespread migration of data formats from XML to JSON has worked against "Pure" REST/HATEOAS. XML had/has the benefit of a far richer type structure when compared to JSON. While JSON is easier to parse on a superficial level, doing things like identifying times, hyperlinks, etc. is more difficult due to the general lack of standardization of these things. JSON doesn't provide enough native and widespread representations of basic concepts needed for hypertext.
(This is one of those times I'd love some counterexamples. Aside from the original "present hypertext documents to humans via a browser" use case, I'd love to read more about examples of successful programmatic APIs written in a purely HATEOAS style.)
The sad truth is that it's the less widely used concept that has to shift terminology, if it comes into wide use for something else or a "diluted" subset of the original idea(s). Maybe the true-OO-people have a term for Kay-like OO these days?
I think the idea of saving "REST" to mean the true Fielding style including HATEOAS and everything is probably as futile as trying to reserve OO to not include C++ or Java.
REST = Hell No
GQL = Hell No.
RPC with status codes = Grin and point.
I like to get stuff done.
Imagine you were forced to organize your code files like REST: folders are nouns, functions are verbs, one per folder, etc. It would drive you nuts.
Why do this for API unless the API really really fits that style (rare).
GQL is expensive to parse and hides information from proxies (200 for everything)
> The widespread adoption of a simpler, RPC-like style over HTTP can probably attributed to practical trade-offs in tooling and developer experience
> Therefore, simply be pragmatic. I personally like to avoid the term “RESTful” for the reasons given in the article and instead say “HTTP” based APIs.
If anyone wants to learn more about all of this, https://htmx.org/essays and their free https://hypermedia.systems book are wonderful.
You could also check out https://data-star.dev for an even better approach to this.
It's all HTTP API unless you're actually doing ReST in which case you're probably doing it wrong.
ReST and HATEOAS are great ideas until you actually stop and think about it, then you'll find that they only work as ideas in some idealized world that real HTTP clients do not exist in.
I think LLMs are going to be the biggest shift toward actually driving more truly ReSTful APIs. Though LLMs are probably equally happy to take ReST-ish responses, they are able to effectively deal with arbitrary self-describing payloads.
MCP at its core seems to be designed around the fact that you've got an initial request to get the schema and then the payload, which works great for a lot of our not-quite-ReST APIs. But you could see it, over time, doing away with the extra ceremony and doing it all in one request, effectively moving back in the direction of true ReST.
Furthermore, it doesn’t explain how Roy Fielding’s conception would make sense for non-interactive clients. The fact that it doesn’t make sense is a large part of why virtually nobody is following it.
I think I finally understand what Fielding is getting at. His REST principles boil down to allowing dynamic discovery of verbs for entities that are typed only by their media types. There's a level of indirection to allow for dynamic discovery. And there's a level of abstraction in saying entities are generic media objects. These two conceptual leaps allow the REST API to be used in a more dynamic, generic way, with benefits at the API level that the other levels of the web stack have ("client decoupling, evolvability, dynamic interaction").
Strict HATEOAS is bad for an API as it leads to massively bloated payloads. We _should_ encode information in the API documentation or a meta endpoint so that we don't have to send tons of extra information with every request.
Also, who determined these rules are the definition of RESTful?
As such, JSON-driven APIs can't be REST, since there is no common format for representing hypermedia controls, which means there's no way to implement a hypermedia client that can present those controls to the user and facilitate interactions. Is there such an implementation? Yes: HTML is the hypermedia, <input>s and <button>s are the controls, and browsers are the clients. REST and HATEOAS are designed for humans, and trying to somehow combine them with machine-to-machine interaction results in awkward implementations, blurry definitions and overcomplication.
The Richardson maturity model is a clear indication of those problems; I see it as an admission of "well, there isn't much practicality in doing proper REST for machine-to-machine comms, but that's fine, you can do only some parts of it and it still counts". I'm not saying we shouldn't use its ideas: resource-based URLs are nice, and using features of HTTP is reasonable. But under the name REST it leads to constant arguments between the "dissertation" crowd and "the industry has moved on" crowd. The worst/best part is that both crowds are totally right, and this argument will continue for as long as we use HTTP.
It is a fundamentally flawed concept that does not work in the real world. Full stop.
My conclusion is exactly the opposite. In-house developers can be expected (read: cajoled) to do things the "right" way, like follow links at runtime. You can run tests against your client and server. Internally, flexible REST makes independent evolution of the front end and back end easy.
Externally, you must cater to somebody who hard-coded a URL into their curl command that runs on cron and whose code can't tolerate the slightest deviation from exactly what existed when the script was written. In that case, an RPC-like call is great and easy to document. Increment from `/v1/` to `/v2/`, write a BC layer between them, and move on.
Some examples:
It should be far more common for HTTP clients to have well-supported and heavily used cookie jar implementations.
We should lean on Accept headers much more, especially with multiple MIME types and/or wildcards.
HTTP clients should have caching plugins that automatically respect caching headers.
There are many more examples. I've seen so much of HTTP reimplemented on top of itself over the years, often with poor results. Let's stop doing that. And when all our clients are doing those parts right, I suspect our APIs will get cleaner too.
/draw_point?x=7&y=20&r=255&g=0&b=0
/get_point?x=7&y=20
/delete_point?x=7&y=20
Because that is the easiest to implement, the easiest to write, the easiest to manually test and tinker with (by typing it directly into the URL bar), and the easiest to automate (curl .../draw_point?x=7&y=20). It also makes it possible to put it into a link and into a bookmark. This is also how HN does it:
/vote?id=44507373&how=up&auth=...
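Part of why this style is so easy to automate is that the whole request is just string assembly, which any language's standard library can do. A sketch, using the hypothetical drawing endpoints from above:

```python
# Building query-string RPC URLs with only the standard library.
# The endpoint names (draw_point, etc.) are the comment's hypothetical API.
from urllib.parse import urlencode

def rpc_url(base: str, action: str, **params) -> str:
    """Assemble a '/action?k=v&...' style URL."""
    return f"{base}/{action}?{urlencode(params)}"

rpc_url("https://example.com", "draw_point", x=7, y=20, r=255, g=0, b=0)
# -> "https://example.com/draw_point?x=7&y=20&r=255&g=0&b=0"
```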
query ($name: String!) {
greeting(where: {name: $name}) {
response
}
}
or

mutation ($input: CreatePostInput!) {
createPost(input: $input) {
id
createTime
title
content
tags {
id
slug
name
}
}
}
and so on, instead of having to manually glue together responses and relations. It's literally SQL over the wire without needing to write SQL.
The payload is JSON, the response is JSON. EZ.
API quality is often not relevant to the business after it passes the “mostly works” bar.
I’ll just use plain http or RPC when it’s not important and spend more time on things that make a difference.
Links update when you log in or out, indicating the state of your session. Vote up/down links appear or disappear based on one's profile. This is HATEOAS.
Link relations can be used to alter how the client (browser) interprets the link—a rel="stylesheet" causes very different behavior from rel="canonical".
JavaScript even provides "code on-demand", as it's called in Fielding's paper.
From that perspective, REST is incredible. REST is extremely flexible, scalable, evolvable, etc. It is the pattern that powers the web.
Now, it's an entirely different story when it comes to what many people call REST APIs, which are often nothing like HN. They cannot be consumed by a generic client. They are not interlinked. They don't ship code on-demand.
Is "REST" to blame? No. Few people have time or reason to build a client as powerful as the browser to consume their SaaS product's API.
But even building a truly generic client isn't the hardest thing about building RESTful APIs—the hardest thing is that the web depends entirely on having a human-in-the-loop and your standard API integration's purpose is to eliminate having a human in the loop.
For example, a human reads the link text saying "Log in" or "Reset password" and interprets that text to understand the state of the system (they do not have an authenticated session). And a human can reinterpret a redesigned webpage with links in a new location, but trivial clients can't reinterpret a refactored JSON object (or XML for that matter).
The folly is in thinking that there's some design pattern out there that's better than REST without understanding that the actual problem to be solved by that elusive, perfect paradigm is how you'll be able to refactor your API when your API's clients will likely be bodged-together JS programs whose authors dug through JSON for the URL they needed and then hard-coded it in a curl command instead of conscientiously and meticulously reading documentation and semantically looking up the URL at runtime, follows redirects, and handles failures gracefully.
I did not find it interesting. I found it excessively theoretical and prescriptive. It led to a lot of people arguing pedantically over things that just weren't important.
I just want to exchange JSON-structured messages over HTTP, using the least amount of HTTP required to implement request and response. I'm also OK with protocol buffers over grpc, or really any decent serialization technology over any well-implemented transport. Sometimes it's CRUD, sometimes it's inference, sometimes it's direct actions on a server.
Hmm. I should write a thesis. JSMOHTTP (pronounced "jizmo-huttup")
If the only consumer is your own UI, you should use a much more integrated RPC style that helps you be fast. Forget about OpenAPI etc: Use a tool or library that makes it dead simple to provide data the UI needs.
If you have a consumer outside your organization: a RESTish API it is.
If your consumer is supposed to be generic and can "discover" your API, RESTful is the way to go.
But no one writes generic ones anymore. We already have the ultimate one: the browser.
The idea of having client/server decoupled via a REST api that is itself discoverable, and that allows independent deployment, seems like a great advantage.
However, the article lacks even the simplest example of an api done the “wrong” vs the “right” way. Say I have a TODO api, how do I make it so that it uses HATEOAS (also who’s coming up with these acronyms…smh)?
Overall the article comes across more as academic pontification on “what not to do” instead of actionable advice.
On the other hand, agents could just as well understand an OpenAPI document, as the description of each path/schema can be much more verbose than HATEOAS. There is a reason why OpenAPI-style APIs are favored: less verbosity in the payload. If the cost of agents is based on their consumption/production of tokens, verbosity matters.
[1] ok it's not an internet adage. I invented it and joke with friends about it
Likewise, if the founders of the web took one look at a full-on React-based site, they would shriek in horror at what's now the de facto standard.
If the backend provides a _links map which contains “orders” for example in the list - doesn’t the front end need to still understand what that key represents? Is there another piece I am missing that would actually decouple the front end from the backend?
I used to get caught up in what is REST and what is not, and that misses the point. It's similar to how Christopher Alexander's ideas on pattern languages get used in a way now that misses the point. Alexander was cited in the introductory chapter of Fielding's dissertation. These are all very big ideas with broad applicability and great depth.
When combined with Promise Theory, this gives a dynamic view of systems.
To handle authentication "properly" you have to use cookies or sessions, which inherently make apps not RESTful.
The symptom is usually if a seemingly simple API change on the backend leads to a lot of unexpected client side complexity to consume the API. That's because the API change breaks with some frontend expectation/assumption that frontend developers then need to work around. A simple example: including a userId with a response. To a frontend developer, the userId is not useful. They'll need a user name, a profile photo, etc. Now you get into all sorts of possible "why don't you just .." type solutions. I've done them all. They all have issues and it leads to a lot of complexity on either the server or the client.
You can bloat your API and calculate all this server side. Now all your API calls that include a userId gain some extra fields. Which means extra lookups and joins. So they get a bit slower as well. But the frontend can pretend that the server always tells it everything it needs. The other solution is to look things up from the frontend. This adds overhead. But if the frontend is clever about it, a lot of that information is very cachable. And of course graphql emerged to give frontend developers the ability to just ask for what they need from some microservices.
All these approaches have pros and cons. Most of the complexity is about what comes back, not about how it comes back or how it is parsed. But it helps if the backend developers are at least aware of what is needed on the frontend. A good way is to just do some front end development for a while. It will make you a better backend developer. Or do both. And by that I don't mean do javascript everywhere and style yourself as a full stack developer because you whack all nails with the same hammer. I mean doing things properly and experiencing the mismatches and friction for yourself. And then learn to do it properly.
The above example with the userIds is real. I've had to deal with that on multiple projects. And I've tried all of the approaches. My most recent insight here is that user information changes infrequently and should be looked up separately from other information asynchronously and then cached client side. This keeps APIs simple and forces frontend developers to not treat the server as a magical oracle and instead do sane things client side to minimize API calls and deal with application state. Good state management is key. If you don't have that, dealing with stateless network protocols (like REST) is painful. But state has to live somewhere and having it client side makes you less dependent on how the server side state management works. Which means it's easier to fix things when that needs to change.
The term has caused so much bikeshedding and unnecessary confusion.
The irony is, when people try to implement "pure REST" (as in Level 3 of the Richardson Maturity Model with HATEOAS), they often end up reinventing a worse version of a web browser. So it's no surprise that most developers stop at Level 2—using proper HTTP verbs and resource-based URIs. Full REST just isn't worth the complexity in most real-world applications.
I mean .. ok, you have the bookmark uri, aka the entrypoint
From there, you get links to stuff. The client still needs to "know" their identifiers, but anyway.
But the params of the routes (and I am not only speaking of their type, but also of their meaning): how would that work?
I think it cannot, so the client code must "know" them, again via out-of-band mechanisms.
And at that point, the whole thing is useless and we just use OpenAPI.
It’s interesting that Stripe still even uses form-post on requests.
What the heck does this mean? Does it mean that my API isn’t REST if it can’t interpret “http://example.com/path/to/resource” in the same way it interprets “COM<example>::path.to.resource”? Is it saying my API should support HTTP, FTP, SMB, and ODBC all the same? What am I missing?
REST includes code-on-demand as part of the style, HTTP allows for that with the "Link" header and HTML via <script>.
Has any other system done this, where you send the whole application for each state along with each state? Project Xanadu?
I do find it funny how Fielding basically said "hey, look at the web, isn't that a weird way to structure a program, let's talk about it", and everyone sort of suffered a collective mental brain fart and replied "oh, you mean HTTP, got it".
But no, a service account in GCP has no less than ~4 identifiers. And the API endpoint I wanted to call needed to know which resource, so the question then is "which of the 4 identifiers do I feed it?" The right answer? None of them.
The "right" answer is that you need to manually build a string, concatenating a bunch of static pieces with the project ID and the object's ID to form an even more ID-er ID. So now we need the project ID… and projects have two of those. So the right answer is that exactly 1 of the 8 different permutations works (if we don't count the constant string literals involved in the string building).
Just give me a URI, and then let me pass that URI, FFS.
LMAO at all the companies asking for extensive REST API design/implementation experience in their job requirements, along with the latest hot frontend frameworks.
I should probably fire back by asking if they know what they're asking for, because I'm pretty sure they don't.
I still like REST and try to use it as much as I can when developing interfaces, but I am not beholden to it. There are many cases which are not resources or are not stateless. Sure, you can find some obtuse way to make them be resources, but at times that leads to bad abstractions that don't convey the vocabulary of the underlying system, and over time creates a rift in context between the interface and the underlying logic; or else we expose underlying implementation details because they happen to be easier to model as resources.
I tend to use REST-like methods to select mode (POST, GET, DELETE, PATCH, etc.), but the data is usually a simple set of URL arguments (or associated data). I don't really get too bent out of shape about ensuring the data is an XML/JSON/Whatever match for the model structure. I'll often use it coming out, but not going in.
Eh, "a small change in a server’s URI structure" breaks links, so already you're in trouble.
But sure, embedding [local-parts of] URIs in the contents (or headers) exchanged is indeed very useful.
While I agree it's an interesting idea in theory, it's unnecessary in the real world and has a lot of downsides.
Django Rest Framework seems to do this by default. There seems very little reason not to include links over hardcoding URLs in clients. Imagine just being able to restructure your backend and clients just follow along. No complicated migrations etc. I suspect many people just live with crappy backends because it's too difficult to coordinate the rollout of a v2 API.
However, this doesn't cover everything. There's still a ton of "out of band" information shared between client and server. Maybe there's a way to embed Swagger-style docs directly into an API and truly decouple server and client, but it would seem to take a lot more than just using links over IDs.
Still I think there's nothing to lose by using links over IDs. Just do it on your next API (or use something like DRF that does it for you).
1. Browsers and "API Browsers" (think something like Swagger)
2. Human and Artificial Intelligence (basically LLMs)
3. Clients downloaded from the server
You'd think that they'd point out these massive caveats. After all, the evolvable client that can handle any API, which is the thing that Roy Fielding has been dreaming about, has finally been invented.
REST and HATEOAS were intentionally developed against the common use case of a static, non-evolving client, such as an Android app that isn't a browser.
Instead you get this snarky blog post telling people that they are doing REST wrong, rather than pointing out that REST is something almost nobody needs (self discoverable APIs intended for evolvable clients).
If you wanted to build e.g. the matrix chat protocol on top of REST, then Roy Fielding would tell you to get lost.
If what I'm saying doesn't make sense to you, then your understanding of REST is insufficient, but let me tell you that understanding REST is a meaningless endeavor, because all you'll gain from that understanding is that you don't need it.
In REST clients are not allowed to have any out of band information about the structure or schema of the API.
You are not allowed to send GET, POST, PUT, DELETE requests to client constructed URLs.
Now that might sound reasonable. After all HATEOAS gives you all the URLs so you don't need to construct them.
Except here is the kicker. This isn't some URL specific thing. It also applies to the attributes and links in the response. You're not allowed to assume that the name "John Doe" is stored under the attribute "name" or that the activate link is stored in "activate". Your client needs to handle any theoretical API that could come from the server. "name" could be "fullName" or "firstNameAndLastName" or "firstAndLastName" or "displayName".
Now you might argue, hey but I'm allowed to parse JSON into a hierarchical object layout [0] and JPEGs into a two dimensional pixel array to be displayed onto a screen, surely it's just a matter of setting a content type or media type? Then I'll be allowed to write code specific to my resource! Except, REST doesn't define or propose any mechanism for application specific media types. You must register your media type globally for all humanity at IANA or go bust.
This might come across as a rant, but it is meant to be informative so I'll tell you what REST and HATEOAS are good for: Building micro browsers relying on human intelligence to act as the magical evolvable client. The way you're supposed to use REST and HATEOAS is by using e.g. the HAL-FORMS media type to give a logical representation of your form. Your evolvable client then translates the HAL-FORM into a html form or an android form or a form inside your MMO which happens to have a registration form built into the game itself, rather than say the launcher.
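The "micro browser" idea above can be sketched concretely: a client that renders whatever form the server describes, with no hardcoded knowledge of the fields. The payload below loosely follows the HAL-FORMS shape (`_templates` with `properties`); treat the exact structure, field names, and rendering as illustrative.

```python
# Sketch of an evolvable "micro browser" rendering a HAL-FORMS-like
# template into abstract UI field specs, without field-specific code.

def render_form(doc: dict) -> list[str]:
    """Translate a HAL-FORMS-like template into a list of UI field specs."""
    template = doc["_templates"]["default"]
    fields = []
    for prop in template.get("properties", []):
        kind = "required input" if prop.get("required") else "input"
        fields.append(f"{kind}: {prop.get('prompt', prop['name'])}")
    # the submit target comes from the document, not from client code
    fields.append(f"submit -> {template['method']} {doc['_links']['self']['href']}")
    return fields

signup = {
    "_links": {"self": {"href": "/users"}},
    "_templates": {"default": {
        "method": "POST",
        "properties": [
            {"name": "email", "prompt": "Email address", "required": True},
            {"name": "displayName", "prompt": "Display name"},
        ],
    }},
}
```

The same `render_form` could target HTML, an Android layout, or an in-game UI; that translation layer is the "evolvable client", and a human is still the one interpreting the prompts.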
Needless to say, this is completely useless for machine to machine communication, which is where the phrase "REST API" is most commonly (ab)used.
Now for one final comment on this article in particular:
>Why aren’t most APIs truly RESTful?
>The widespread adoption of a simpler, RPC-like style over HTTP can probably attributed to practical trade-offs in tooling and developer experience: The ecosystem around specifications like OpenAPI grew rapidly, offering immediate, benefits that proved irresistible to development teams.
This is actually completely irrelevant and ignores the fact that REST as designed was never meant to be used in the vast majority of situations where RPC over HTTP is used. The use cases for "RPC over HTTP" and REST have incredibly low overlap.
>These tools provided powerful features like automatic client/server code generation, interactive documentation, and request validation out-of-the-box. For a team under pressure to deliver, the clear, static contract provided by an OpenAPI definition was and still is probably often seen as “good enough,”
This feels like a complete reversal and shows that the author of this blog post himself doesn't understand the practical implications of his own blog post. The entire point of HATEOAS is that you cannot have automatic client code generation unless it happens during the runtime of the application. It's literally not allowed to generate code in REST, because it prevents your client from evolving at runtime.
>making the long-term architectural benefits of HATEOAS, like evolvability, seem abstract and less urgent.
Except as I said, unless you have a requirement to have something like a mini browser embedded in a smartphone app, desktop application or video game, what's the point of that evolvability?
>Furthermore, the initial cognitive overhead of building a truly hypermedia-driven client was perceived as a significant barrier.
Significant barrier is probably the understatement of the century. Building the "truly hypermedia-driven client" is equivalent to solving AGI in the machine to machine communication use case. The browser use-case only works because humans already possess general intelligence.
>It felt easier for a developer to read documentation and hardcode a URI template like /users/{id}/orders than to write a client that could dynamically parse a _links section and discover the “orders” URI at runtime.
Now the author is using snark to appeal to emotions, equating the simplest and most irrelevant problem with the hardest problem in a hand-waving manner. "Those silly code monkeys, how dare they not build AGI! It's as simple as parsing _links and discovering the "orders" URI at runtime." Except, as I said, you're not allowed to assume that there is an "orders" link, since that is out-of-band information. Your client must be intelligent enough to handle any API, not only one where the "/user/{id}/orders" link is stored under _links. The server is allowed to give the "/user/{id}/orders" link a randomly generated name that changes with every request. It's also allowed to change the URL path to any randomly generated structure, as long as the server is able to keep track of it. The HATEOAS server is allowed to return a human-language description of each field and link, but the client is not allowed to assume that the orders are stored under any specific attribute. Hence you'd need an LLM to know which field is the "orders" field.
>In many common scenarios, such as a front-end single-page application being developed by the same team as the back-end, the client and server are already tightly coupled. In this context, the primary problem that HATEOAS solves—decoupling the client from the server’s URI structure—doesn’t present as an immediate pain point, making the simpler, documentation-driven approach the path of least resistance.
Bangs head at desk over and over and over. A webapp that is using HTML and JS downloaded from the server is following the spirit of HATEOAS. The client evolves with the server. That's the entire point of REST and HATEOAS.
[0] Whose contents may only be processed in a structure oblivious way
Adding actions to it!
POST api/registration or api/signup? All of this sucks. POSTing or PUTing to api/user? Also doesn't feel right.
POST to api/user:signup
Boom! Full REST for entities + actions with custom requests and responses for actions!
How do I make a restful filter call? GET request params are not enough…
You POST to api/user:search, boom!
(I prefer the description "RESTful API" over "REST API": everyone fails to implement pure REST anyway, and it's unnecessarily limiting.)
for a lot of places, POST with JSON body is REST