This helped me paint a better mental picture, although it still makes more sense to me to stay aware of the separation between server and client
Exposing API routes in frameworks like SvelteKit or Next feels closely-knit enough, plus nowadays there are a lot of ways to generate types for APIs
I know this post is a month old but ... how does versioning work with 'use server'?
Suppose my client imports and calls an exported server function named `likePost`. I deploy. Then I rename `likePost` to `likeSkeet` and redeploy. Do old versions of the clients just break?
similar to REST endpoints — different possible strategies!
some options (a deployment concern):
- yolo, whatever happens
- server can fail early when asked for a non-existent version
- keep old deployments of the server alive for a few days (aside from security fixes etc), direct to correct deploy
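The "fail early" option above could be as simple as the server comparing a build id the client sends along with each call. A minimal sketch (the build ids, header shape, and names here are all hypothetical, not a React or framework API):

```typescript
// Sketch: server rejects action calls coming from unknown/stale client builds.
// KNOWN_BUILDS would be populated by the deployment pipeline in a real setup.
const KNOWN_BUILDS = new Set(["build-41", "build-42"]);

type ActionResult =
  | { ok: true; data: unknown }
  | { ok: false; code: "STALE_BUILD" };

function handleAction(buildId: string, run: () => unknown): ActionResult {
  if (!KNOWN_BUILDS.has(buildId)) {
    // The client can respond to this by prompting a full reload.
    return { ok: false, code: "STALE_BUILD" };
  }
  return { ok: true, data: run() };
}
```

A client hitting this with an old build id gets a structured error instead of a confusing "function not found" failure.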
Hmm, I guess my question is less about versioning per se and more about URLs, i.e., is the URL for each endpoint tied to a constant such as module name and export name? Or is each deployed client bundle tightly coupled to a specific server version and the callback URLs are unique for each version?
The scenario I'm thinking about is something like "we need to push the latest version of the server code ASAP because security, but we don't want to unnecessarily force clients to reload because the vuln is purely server-side, and if clients just keep hitting the same URL, that should be fine"
i think the actual endpoint url scheme may be up to the framework (or an app's low-level integration), i'm not actually sure. i don’t think react defines it, it gets filled in in a layer above
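As a toy illustration of one scheme a framework *could* use (purely hypothetical, not what any framework actually ships): derive the endpoint id from the module path plus the export name, so the URL stays stable across redeploys as long as the names do.

```typescript
// Hypothetical endpoint-id scheme for "use server" exports.
// Stable across deploys as long as module path + export name are unchanged;
// renaming the export (likePost -> likeSkeet) would change the URL.
function actionId(modulePath: string, exportName: string): string {
  return `/_actions/${encodeURIComponent(modulePath)}/${exportName}`;
}
```

A framework could just as well hash a content digest into the id, which would couple each client bundle to one exact server build instead.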
I like the idea of RPCs and scripts being handled by the module system. I think it would feel less magical if the syntax was tied to import/export instead of directives. Something like:
```
import { LikeButton } from './frontend' with { type: 'script' };
export remote async function likePost(postId)
```
yeah that’s a good first step! in practice though you’ll find that you really want the module itself to “decide”. because usually you make “the cut” based on local information (eg needs state => use client). so you want to push that decision deeper into the graph as an implementation detail.
Agree that the decision to expose a function remotely is an export-time decision. But isn't the client component decision made at import-time? If you import from a client component, you want a function. If you import from a server component, you want a string (or w/e serialized form).
ah, so this is already taken care of because the import semantics depend on which world you’re in. if you’re importing from the client world, "use client" is already a noop.
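A toy model of that world-dependent import semantics (all names hypothetical; real bundlers do this at build time, not at runtime): from the server world a "use client" module resolves to opaque references, while from the client world the directive is a no-op and you get the real exports.

```typescript
// Toy model of how import semantics can depend on which world is importing.
type World = "server" | "client";

interface ClientModule {
  directive?: "use client";
  exports: Record<string, unknown>;
}

function resolveImport(world: World, mod: ClientModule): Record<string, unknown> {
  if (world === "server" && mod.directive === "use client") {
    // Server world sees serializable references, not implementations.
    return Object.fromEntries(
      Object.keys(mod.exports).map((name) => [name, { $$ref: name }])
    );
  }
  return mod.exports; // client world: the directive is a no-op
}
```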
Yep zod is great but I'm more asking from the perspective of integrating the tooling (a la the meat of your post). Just like the compiler can tell you if you are passing in a non-serializable value to a client component, I wonder if it could tell you if the argument is untrusted.
with RSC, execution conceptually starts flowing *from* the server, so you get all the server stuff in one roundtrip. with TanStack's server functions you're gonna have as many roundtrips as there are server functions
Loved the article but I'm still not convinced this is the way to move forward. IME you want to keep the two worlds separate and the boundary between them clear.
The two environments have wildly different contexts: the server is a trusted source (within the server I can make assumptions about a function being called only after another), while the client is an untrusted environment (an API might be called via that program or outside of it, in unpredictable ways).
The connection between them should also not be transparent as oftentimes you don't just want to call a function from the client to the server. You also want to cache it, throttle it, etc. because calling a "backend function" is a costly operation.
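To be fair, nothing stops you from wrapping the generated stub in ordinary code. A minimal caching wrapper, assuming `fn` stands in for any RPC stub (names and TTL policy are illustrative only):

```typescript
// Sketch: caching a costly "backend function" at the call site.
// Works the same whether `fn` is a "use server" stub or a plain fetch wrapper.
function cached<A extends string | number, R>(
  fn: (arg: A) => Promise<R>,
  ttlMs: number
): (arg: A) => Promise<R> {
  const store = new Map<A, { at: number; value: Promise<R> }>();
  return (arg) => {
    const hit = store.get(arg);
    if (hit && Date.now() - hit.at < ttlMs) return hit.value;
    const value = fn(arg);
    store.set(arg, { at: Date.now(), value });
    return value;
  };
}
```

Throttling or deduping could be layered on the same way, since the stub is just an async function.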
And I'm not even touching on the topic of "what happens when the client actually has a cached version that is different from the server" which, tbf, most frameworks and RPC solutions are also ignoring.
The two worlds inherently do not share much between them: one deals with databases, data, authentication, S3 buckets, background jobs, etc.; the other deals with presentation.
So the shareability aspect is lost.
I understand the advantages of this RPC method and its type safety, and it's alluring.
But IMHO the way forward is something like Tanstack Start server functions. They give you all those benefits without requiring all this magic (especially coupled with isomorphic loaders).
I think the biggest gap is the lack of server-side middleware for actions. I’m currently rolling my own, but it requires a lot of documentation and is a steeper learning curve for other devs on the team than if React had something built in that would allow injecting context via middleware.
I wouldn’t write an API without a framework that helps with handling session/db/etc. context management and I don’t usually call fetch without a client that handles retry logic and standard errors.
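The hand-rolled version usually ends up as a higher-order wrapper around each action. A sketch of that pattern (every name here is hypothetical; React has no built-in equivalent today):

```typescript
// Sketch of DIY "middleware" for server actions: a higher-order function
// that builds context (session, db handle, ...) before the action runs.
interface Ctx {
  userId: string | null;
}

// Stub for the sketch; a real app would read cookies/headers here.
async function getSessionUserId(): Promise<string | null> {
  return "user-1";
}

function withContext<A extends unknown[], R>(
  action: (ctx: Ctx, ...args: A) => Promise<R>
): (...args: A) => Promise<R> {
  return async (...args) => {
    const ctx: Ctx = { userId: await getSessionUserId() };
    if (!ctx.userId) throw new Error("Unauthorized");
    return action(ctx, ...args);
  };
}

// An action written against the injected context:
const likePost = withContext(async (ctx, postId: string) => {
  return { postId, likedBy: ctx.userId };
});
```

Every action on the team then has to remember to wrap itself, which is exactly the documentation burden mentioned above.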
Yeah that’s one of the downsides of not having HTTP verbs with standardized idempotency. I’m currently handling this by sending results over the wire and having shared functions which handle appropriate error UX and could do things like retries.
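Such a shared helper might look like this (a sketch, with the opt-in flag standing in for the idempotency information HTTP verbs would otherwise carry):

```typescript
// Sketch: opt-in retries for actions the caller knows are idempotent.
// Non-idempotent calls get exactly one attempt, since there is no verb
// semantics to tell us a retry is safe.
async function callAction<R>(
  fn: () => Promise<R>,
  opts: { idempotent: boolean; attempts?: number }
): Promise<R> {
  const max = opts.idempotent ? opts.attempts ?? 3 : 1;
  let lastError: unknown;
  for (let i = 0; i < max; i++) {
    try {
      return await fn();
    } catch (e) {
      lastError = e;
    }
  }
  throw lastError;
}
```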
I think the directives are a great evolution from tRPC and use client feels very intuitive. The only thing I struggle with is the lack of middleware on both the client and server side of the action.
It's interesting to consider other types of "doors" between environments too. For example, new Worker(), navigator.serviceWorker.register(), CSS.paintWorklet.addModule(), etc. In fact, Parcel models all of these the same way as "use client" and "use server" internally!
Thanks for the post, I do understand now how "use client" and "use server" annotations make sense within a single project.
But how does this scale for 3rd party modules? Is it feasible to require the whole React ecosystem to mark their entire sources with the new directive?
configuration doesn’t compose. if you don’t want to do it in that package, what you do is instead mark *your* modules that import that package. you have full freedom to decide where to “make the cut”. eg if some library is only ever imported from the client world, it doesn’t need directives inside.
so for example if your client components import jQuery, it doesn’t matter. jQuery doesn’t need “use client” because it is already only imported from the client world. you don’t need a door when you’re already behind that door.
Something new (to me) jumped out when reading this one. I was thinking about how @unison-lang.org makes every function able to run somewhere else, in their system there is less of a client/server split and more of a graph of nodes.
"use client/server" has a directionality by design (to ensure no waterfalls) but what would this look like if any node could run code on any other node?
Is there something about server/client that makes sense for UI? What happens if we think of those two nodes as equal and the communication entirely reversible?
If something like that could work I think it would resemble the "frontend + sync engine" paradigm, but I'm not sure exactly how it'd look
Thanks for this great write-up. Before reading this, these directives seemed to me like bundler magic. Now it's still a bundler feature, but one with a clear concept. I immediately understood what "use dom" is doing in the RN / Expo context and how Expo devs came up with the idea.
```
if (this.classList.contains('liked')) {
  const { likes } = await likePost(postId); // This should call `unlikePost` lol
} else {
  const { likes } = await unlikePost(postId); // and this `likePost`
}
```
If I understand correctly, typing args – e.g.

```
"use server"
export async function likePost(postId: string) { ... }
```

is awesome for ensuring correct usage in the codebase, but it doesn't stop someone from sending anything over the wire, so you end up writing

```
export async function likePost(unsafePostId: string) {
  if (typeof unsafePostId !== 'string') throw new Error('Invalid postId');
  const postId = unsafePostId;
  // ...
}
```

so I get type safety for the caller, but the server still has to validate at runtime.
TanStack Start handles this in a similar fashion: https://tanstack.com/start/latest/docs/framework/react/server-functions#using-a-validation-library
I noticed that I think of "the doors" from the opposite direction. Not as "use client" exporting client functionality to the server (Client -> Server).
Instead, when I'm on the server and need client features, then I "use client".
(Server <- Client)
I wonder what you think the benefit is of using directives/imports instead of a function call like TanStack Start's createServerFn
I guess time will tell.
`clientModules: ['my-ui-lib', 'some-date-picker']`