I think your checklist of characteristics frames things well. It reminds me of Remix's introduction to the library:
https://remix.run/docs/en/main/discussion/introduction

> Building a plain HTML form and server-side handler in a back-end heavy web framework is just as easy to do as it is in Remix. But as soon as you want to cross over into an experience with animated validation messages, focus management, and pending UI, it requires a fundamental change in the code. Typically, people build an API route and then bring in a splash of client-side JavaScript to connect the two. With Remix, you simply add some code around the existing "server side view" without changing how it works fundamentally.
It was this argument (and a lot of playing around with challengers like htmx and JSX-like syntax for Python / Go) that brought me round to the idea that RSCs, or something similar, might well be the way to go.
Bit of a shame seeing how poor some of the engagement has been on here and Reddit though. I thought the structure and length of the article were justified and helpful. Concerning how many people's responses are quite clearly covered in TFA they didn't read...
Vercel fixes this for a fee: https://vercel.com/docs/skew-protection
I do wonder how many people will use the new React features and then have short outages during deploys like the FOUC of the past. Even their Pro plan has only 12 hours of protection so if you leave a tab open for 24 hours and then click a button it might hit a server where the server components and functions are incompatible.
Ultimately this really just smooshes the interface around without solving the problem it sets out to solve: it moves the formatting of the markup to the server, but you can't move all of it unless your content is entirely static (and if you're getting it from the server, SOMETHING has to be interactive).
JSX is a descendant of a PHP extension called XHP [1]
[1] https://legacy.reactjs.org/blog/2016/09/28/our-first-50000-s...
One way to decide if this architecture is for you is to consider where your app lands on the curve of "how much rendering code should you ship to the client vs. how much unhydrated data should you ship". On that curve you can find everything from fully server-rendered HTML to REST APIs and everything in between, plus some less common examples too.
Fully server-rendered HTML is among the fastest to usefulness, relying only on the browser to render HTML. By contrast, in traditional React, server rendering is only half of the story: after the layout is sent, a great many API calls have to happen to produce a fully hydrated page.
Your sweet spot on that curve is different for every app and depends on a few factors - chiefly, your app’s blend of rate-of-change (maintenance burden over time) and its interactivity.
If the app will not be interactive, take advantage of fully-backend rendering of HTML since the browser’s rendering code is already installed and wicked fast.
If it’ll be highly interactive with changes that ripple across the app, you could go all the way past plain React to a Redux/Flux-like central client-side data store.
And if it’ll be extremely interactive client-side (eg. Google Docs), you may wish to ship all the code to the client and have it update its local store then sync to the server in the background.
But this React Server Components paradigm is surprisingly suited to a great many CRUD apps. Definitely will consider it for future projects - thanks for such a great writeup!
In the previous article, I was annoyed a bit by some of the fluffiness and redefinition of concepts that I was already familiar with. This one, however, felt much more concrete, and grounded in the history of the space, showing the tradeoffs and improvements in certain areas between them.
The section that amounted to "I'm doing all of this other stuff just to turn it into HTML. With nice, functional, reusable JSX components, but still." really hit close to how I've felt.
My question is: When did you first realize the usefulness of something like RSC? If React had cooked a little longer before gaining traction as the client-side thing, would it have been for "two computers"?
I'm imagining a past where there was some "fuller stack" version that came out first, then there would've been something that could've been run on its own. "Here's our page-stitcher made to run client-side-only".
So, let's assume the alternative universe, where we did not mess up and get REST wrong.
There's no constraint saying a resource (in the hypermedia sense) has to have the same shape as your business data, or anything else really. A resource should have whatever representation is most useful to the client. If your language is "components" because you're making an interactive app – sure, go ahead and represent this as a resource. And we did that for a while, with xmlhttprequest + HTML fragments, and PHP includes on the server side.
What we were missing all along was a way to decouple the browser from a single resource (the whole document), so we could have nested resources, and keep client state intact on refresh?
It has also sparked a strong desire to see RSCs compared and contrasted with Phoenix LiveView.
The distinction between RSCs sending "JSX" over the wire and LiveViews sending "minimal HTML diffs"[0] over the wire is fascinating to me, and I'm really curious how the two methodologies compare and contrast in practice.
It'd be especially interesting to see how client-driven mutations are handled under each paradigm. For example, let's say an "onClick" is added to the `<button>` element in the `LikeButton` client component -- it immediately brings up a laundry list of questions for me:
1. Do you update the client state optimistically?
2. If you do, what do you do if the server request fails?
3. If you don't, what do you do instead? Intermediate loading state?
4. What happens if some of your friends submit likes at the same time you do?
5. What if a user accidentally "liked", and tries to immediately "unlike" by double-clicking?
6. What if a friend submitted a like right after you did, but theirs was persisted before yours?
(I'll refrain from adding questions about how all this would work in a globally distributed system (like BlueSky) with multiple servers and DB replicas ;))
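For questions 1–3, the usual client-side answer in either paradigm is an optimistic update with rollback on failure. A minimal framework-agnostic sketch of that pattern (all names hypothetical, not from either framework):

```javascript
// Optimistic "like" state: apply the change immediately, and keep a
// snapshot so the change can be rolled back if the server rejects it.
function createLikeStore(initial) {
  let state = { count: initial, likedByMe: false };
  return {
    get: () => ({ ...state }),
    // Returns a commit/rollback pair for one optimistic toggle.
    toggle() {
      const previous = { ...state };
      state = {
        count: state.count + (state.likedByMe ? -1 : 1),
        likedByMe: !state.likedByMe,
      };
      return {
        // On success, adopt whatever the server says is canonical
        // (which is also one answer to Q4/Q6: server order wins).
        commit: (serverState) => { state = { ...serverState }; },
        // On failure, restore the pre-toggle snapshot.
        rollback: () => { state = previous; },
      };
    },
  };
}
```

The double-click case (Q5) then becomes two toggles whose server requests must be serialized or deduplicated; snapshot/rollback alone doesn't solve ordering, which is exactly where the two frameworks' answers would diverge.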
Essentially, I'm curious whether RSCs offer potential solutions to the same sorts of problems Jose Valim identified here[1] when looking at Remix Submission & Revalidation.
Overall, LiveView & RSCs are easily my top two most exciting "full stack" application frameworks, and I love seeing how radically different their approaches are to solving the same set of problems.
[0]: <https://www.phoenixframework.org/blog/phoenix-liveview-1.0-r...> [1]: <https://dashbit.co/blog/remix-concurrent-submissions-flawed>
I don’t see a point in making that a server-side render. You are now coupling backend to frontend, and forcing the backend to do something that is not its job (assuming you don’t do SSR already).
One can argue that it's useful if you would use the endpoint for ESI/SSI (I loved it in my Varnish days), but that's only a sane option if you are doing server-side renders for everything. Mixing CSR and SSR is OK, but that's a huge amount of extra complexity that you could avoid by just picking one, and adding SSR is mostly for SEO purposes, from which session-dependent content is excluded anyway.
My brain much prefers the separation of concerns. Just give me a JSON API, and let the frontend take care of representation.
https://overreacted.io/react-for-two-computers/ https://news.ycombinator.com/item?id=43631004 (66 points, 6 days ago, 54 comments)
(a bit sad to see all the commenters that clearly haven't read the article though)
The old way was to return HTML fragments and add them to the DOM. There was still a separation of concerns, as the presentation layer on the server didn't care about the interface presented on the client. It was just data, generally composed by a template library. The advent of SPAs made it so that we could reunite the presentation layer (with the template library) on the frontend and just send down the data to be composed with the request's response.
The issue with this approach is that it splits the frontend again, and now you have two template libraries to take care of (in this case one, but on two sides). The main advantage of having a boundary is that you can have the best representation of data for each side's logic, converting only when needed. And the conversion layer needs to be simple enough not to introduce complexity of its own. JSON is fine because it's easy to audit a parser, and HTML is fine because it's mostly used as-is on the other layer. We also have binary representations, but they have strong arguments for their use too.
With JSX on the server side, it's an abstraction where none is needed. And in the wrong place, to boot.
Compared to GraphQL, Server Components are a big step back: you have to do manually on the server what GraphQL gave you by default.
Anyway, it's hard to deny that React dev nowadays is an ugly mess. Have you given any thought to what a next-gen framework might look like (I'm sure you have)?
When you have a post with a like button and the user presses the like button, how do the like button props update? I assume that it would be a REST request to update the like model. You could make the like button refetch the like view model when the button is clicked, but then how do you tie that back to all the other UI elements that need to update as a result? E.g. what if the UI designer wants to put a highlight around posts which have been liked?
On the server, you've already lost the state of the client after that first render, so doing some sort of reverse dependency trail seems fragile. So the only option would be to have the client do it, but then you're back to the waterfall (unless you somehow know the entire state of the client on the server for the server to be able to fully re-render the sub-tree, and what if multiple separate subtrees are involved in this?). I suppose that it is do-able if there exists NO client side state, but it still seems difficult. Am I missing something?
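One common answer outside of RSC proper (e.g. in cache-tag systems) is to avoid the reverse dependency trail entirely: each rendered fragment declares which models it read, and a mutation invalidates by tag. A toy sketch of such a registry (all names hypothetical, not an RSC API):

```javascript
// Map model tags (e.g. "like:42") to the refetch callbacks of every
// rendered fragment that depends on them.
function createInvalidationRegistry() {
  const subscribers = new Map(); // tag -> Set of refetch callbacks
  return {
    // A fragment declares which tags it read while rendering.
    subscribe(tags, refetch) {
      for (const tag of tags) {
        if (!subscribers.has(tag)) subscribers.set(tag, new Set());
        subscribers.get(tag).add(refetch);
      }
    },
    // A mutation reports the tag it touched; every dependent fragment
    // refetches, with no reverse dependency analysis on the server.
    invalidate(tag) {
      let notified = 0;
      for (const refetch of subscribers.get(tag) ?? new Set()) {
        refetch();
        notified++;
      }
      return notified;
    },
  };
}
```

So the "highlighted liked posts" case becomes: the post card subscribes to `like:<id>` at render time, and the like mutation invalidates that tag, refetching every subtree that read it.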
Feels like HTMX, feels like we've come full circle.
If you're fetching 10s of raw models (corresponding to a table) and extract (or even join!) the data needed to display in the view, it's clearly not the best engineering decision. But fetching 2 or 3 well shaped views in your component and doing the last bit of correlation to the view in the component is acceptable.
Same for deciding a render strategy: Traditional SSR (maybe with HTMX) vs. isomorphic (Next and friends) vs. SPA. Same for Redux vs MobX. Or what I think is often neglected by the frontend folks: Running Node on the backend vs. Java vs. Go vs. C# vs. Rust.
If you're already in the spot where React Server Components are a good fit, the ideas in the article are compelling. But IMO not enough to be convincing to switch to or choose React / Next when you're better off with traditional SSR or SPA, which IME are the best fits for the vast majority of apps.
One thing I would like to see more focus on in React is returning components from server functions. Right now, using server functions for data fetching is discouraged, but I think it has some compelling use cases. It is especially useful when you have components that need to fetch data dynamically, but you don't want the fetch / data tied to the URL, as it would be with a typical server component. For example, when fetching suggestions for a typeahead text input.
(Self-promotion) I prototyped an API for consuming such components in an idiomatic way: https://github.com/jonathanhefner/next-remote-components. You can see a demo: https://next-remote-components.vercel.app/.
To prove the idea is viable beyond Next.js, I also ported it to the Waku framework (https://github.com/jonathanhefner/twofold-remote-components) and the Twofold framework (https://github.com/jonathanhefner/twofold-remote-components).
I would love to see something like it integrated into React proper.
ts-liveview is a TypeScript framework I built (grab it as a starter project on GitHub[1]) for real-time, server-rendered apps. It uses JSX/TSX to render HTML server-side and, in WebSocket mode, updates the DOM by targeting specific CSS selectors (document.querySelector) over WebSockets or HTTP/2 streaming. This keeps client-side JavaScript light, delivering fast, SEO-friendly pages and reactive UIs, much like Dan’s “JSX over the wire” vision.
What’s your take on this server-driven approach? Could it shake up how we build apps compared to heavy client-side frameworks? Curious if you’ve tried ts-liveview yet—it’s been a fun project to dig into these ideas!
I am wondering: What are the gains of RSC over a Fat Resource (with expand, sort, select and filter) where responses for (expand,sort,select) are cached? Most applications are READ-heavy, so even a fat response is easily returned to the client and might not need a refetch that often.
The article briefly mentions that you need $expand and $select then, but why/when is that not a valid approach?
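For readers unfamiliar with the pattern: a "fat resource" in the OData style lets the client shape one cacheable response per (expand, select, filter) combination instead of needing one endpoint per view. A small illustrative query builder (endpoint and field names are made up):

```javascript
// Build an OData-style query URL: the client asks the fat resource
// for exactly the related entities and fields it needs.
function fatResourceUrl(base, { expand = [], select = [], filter } = {}) {
  const parts = [];
  if (expand.length) parts.push("$expand=" + expand.join(","));
  if (select.length) parts.push("$select=" + select.join(","));
  if (filter) parts.push("$filter=" + encodeURIComponent(filter));
  return parts.length ? base + "?" + parts.join("&") : base;
}
```

E.g. `fatResourceUrl("/posts/42", { expand: ["author", "comments"], select: ["title"] })` yields a single URL the CDN can cache, which is the READ-heavy argument above.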
The other point I have is that I really do not like to have JS on my server. If my business logic runs on a better runtime, we have 3 (actually 4) layers to pass:
Storage layer (DB)
-> business logic in C# (server)
-> ViewModel layer in TS/JS (server)
-> React in TS/JS (client).
Managing changes gets really complex, with each layer needing type safety.

I like the abstraction of server components, but some of my co-workers seem to prefer HTMX (sending HTML rather than JSON) and can't really see any performance benefit from server components.
Maybe OP could clear up:

- Whether HTML could be sent instead (depending on platform). There is a brief point about not losing state, but if your component does not have input elements, or can have its state thrown away, then maybe raw HTML could work?
- Prop size vs. markup/component size. If you send down a component with a 1:9 dynamic-to-static content ratio, wouldn't it be better to have the 90% static preloaded in the client, and only transmit the 10% of the data? Any good heuristics here?
- "It's easy to make HTML out of JSON, but not the inverse". What is intrinsic about HTML/XML here?
--
Also, is Dan the only maintainer on the React team who does these kinds of posts? Do other members write long form? It would be interesting to have a second angle.
I don't see the issue with adding an endpoint per viewmodel. Treating viewmodels as resources seems perfectly fine. Then again, I'm already on the HATEOAS and HTMX bandwagon, so maybe that just seems obvious, as it's no worse than returning HTML or JSX that could be constantly changing. If you actually need stable API endpoints for others to consume for other purposes, that's a separate consideration. This seems to be the direction the rest of the article goes.
What if we just talked about it only in terms of simple data structures and function composition?
[1] https://overreacted.io/jsx-over-the-wire/#dans-async-ui-fram...
What’s being done here isn’t entirely new. Turbo/Hotwire [1], Phoenix LiveView, even Facebook’s old Async XHP explored similar patterns. The twist is using JSX to define the component tree server-side and send it as JSON, so the view model logic and UI live in the same place. Feels new, but super familiar, even going back to CGI days.
And yet, I see nothing but confusion around this topic. For two years now. I see Next.js shipping foot guns, I see docs on these rendering modes almost as long as those covering all of Django, and I see lengthy blog posts like this.
When the majority of problems can be solved with Django, why tie yourself in to knots like this? At what point is it worth it?
I still can't get over how the "API" in "REST API" apparently originally meant "a website".
It’s a very long post so maybe I missed it, but does Dan ever address morphdom and its descendants? I feel like that’s a very relevant point in the design space explored in the article.
In most cases that means rendering HTML on the server, where most of the data lives, and using a handful of small components in the frontend for state that never goes to the backend.
1: APIs should return JSON because endpoints do often get reused throughout an application.
2: it really is super easy to get the JSON into client side HTML with JSX
3: APIs should not return everything needed for a component; APIs should return one thing only. It makes the back and front end simpler and more flexible, and honestly, who cares about the extra network requests?
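On point 2: the JSX version would just interpolate the fields (`<h3>{post.title}</h3>` and so on); this plain-string sketch shows the same JSON-to-markup step without a build setup (the `post` shape and `escapeHtml` helper are illustrative, not from the article):

```javascript
// Escape the characters that would break out of HTML text content.
function escapeHtml(s) {
  return String(s).replace(/[&<>"]/g, (c) =>
    ({ "&": "&amp;", "<": "&lt;", ">": "&gt;", '"': "&quot;" }[c]));
}

// Turn one JSON post object from the API into an HTML fragment.
function renderPost(post) {
  return `<article><h3>${escapeHtml(post.title)}</h3>` +
         `<p>${escapeHtml(post.excerpt)}</p></article>`;
}
```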
It's exciting to see server side rendering come back around.
The biggest draw that pulled me to Astro early on was the fact that it uses JSX for what is, in my opinion, a better server-side templating system.
const post = await getPost(postId);
But... we should basically never be doing this. It is totally inefficient. Suppose this is making a network call to your Postgres database to get the post data: it will make the network call N times. You are right back at the N+1 query problem.

Of course, if you're using SQLite on a local disk then you're good. If you have some data-loader middleware that batches and combines all these requests then you're good. But if you're just naively making these requests directly... then you're setting up your app for massive performance problems in the near future.
The known solution to the N+1 query problem is to bulk load all the data you need. So you need to render a list of posts, you bulk load all their data with a single query. Now you can just pass the data in directly to the rendering components. They don't load their own data. And the need for RSC is gone.
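To make the contrast concrete, here is a toy synchronous sketch (an in-memory map standing in for Postgres, round trips counted explicitly; all names are illustrative):

```javascript
// Toy "database": one lookup = one network round trip in the real system.
const db = new Map([
  [1, { id: 1, title: "First" }],
  [2, { id: 2, title: "Second" }],
]);
let roundTrips = 0;

// N+1 pattern: each component loads its own post.
function getPost(id) {
  roundTrips++;
  return db.get(id);
}

// Bulk pattern: one `WHERE id IN (...)` query for the whole list,
// then the data is passed down into the rendering components.
function getPosts(ids) {
  roundTrips++;
  return ids.map((id) => db.get(id));
}
```

Rendering a 2-post list with `getPost` per component costs 2 round trips (N for N posts); `getPosts` costs 1 regardless of N.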
I'm sure RSC is good for some narrow set of cases where the data loading efficiency problems are already taken care of, but that's definitely not most cases.
A BFF is indeed a possible solution and yeah if you have a BFF made in JS for your React app the natural conclusion is that you might as well start returning JSX.
But. BUT. "if you have a BFF made in JS" and "if your BFF is for your React app" are huge, huge ifs. Running another layer on the server just to solve this specific problem for your React app might work but it's a huge tradeoff and a non starter (or at least a very hard sale) for many teams; and this tradeoff is not stated, acknowledged, explored in any way in this writing (or in most writing pushing RSCs, in my experience).
And a minor point but worth mentioning nonetheless, writing stuff like "Directly calling REST APIs from the client layer ignores the realities of how user interfaces evolve" sounds like the author thinks people using REST APIs are naive simpletons who are so unskilled they are missing a fundamental point of software development. People directly calling REST APIs are not cavemen, they know about the reality of evolving UI, they just chose a different approach to the problem.
SPA developers missed the point entirely by reinventing broken abstractions in their frameworks. The missing point is code over convention: stop enforcing your own broken conventions and let developers use their own abstractions. Things are interpreted at runtime, not compile time. A bundler is for bundling; do not cross its boundary.
Misunderstanding REST only to reinvent it in a more complex way. If your API speaks JSON, it's not REST unless/until you jump through all of these hoops to build a hypermedia client on top of it to translate the bespoke JSON into something meaningful.
Everyone ignores the "hypermedia constraint" part of REST and then has to work crazy magic to make up for it.
Instead, have your backend respond with HTML and you get everything else out of the box for free with a real REST interface.
Whee!
#!/usr/bin/perl
$ENV{'REQUEST_METHOD'} =~ tr/a-z/A-Z/;
if ($ENV{'REQUEST_METHOD'} eq "GET") {
$buffer = $ENV{'QUERY_STRING'};
}
print "Content-type: text/html\n\n";
$post_id = $buffer;
$post_id =~ s/&.*//;               # Keep only the first parameter (before any &)
$post_id =~ s/^[^=]*=//;           # Drop the "post_id=" key, keep the value
$post_id =~ s/[^a-zA-Z0-9\._-]//g; # Sanitize: allow only safe filename chars
$truncate = ($buffer =~ /truncateContent=true/) ? 1 : 0;
$title = `mysql -u admin -p'password' -D blog --skip-column-names -e "SELECT title FROM posts WHERE url='$post_id'"`;
chomp($title);
$content = `mysql -u admin -p'password' -D blog --skip-column-names -e "SELECT content FROM posts WHERE url='$post_id'"`;
chomp($content);
if ($truncate) {
# Extract first paragraph (everything before the first blank line)
$first_paragraph = $content;
$first_paragraph =~ s/\n\n.*//s;
print "<h3><a href=\"/$post_id.html\">$title</a></h3>\n";
print "<p>$first_paragraph [...]</p>\n";
} else {
print "<h1>$title</h1>\n";
print "<p>\n";
print "$content\n";
print "</p>\n";
}