aaronboodman.com
CEO rocicorp.dev. Building replicache.dev and zerosync.dev, raising two great kids, trying to be a better person. Also found at http://aaronboodman.com.
278 posts
1,764 followers
451 following
Regular Contributor
Active Commenter
comment in response to
post
And yes, the computation is incremental. Simple edits only do O(1) work.
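Roughly, the difference looks like this (a sketch of the idea, not Zero's actual internals):

```typescript
// Illustrative only – not Zero's actual internals.
// Recomputing an aggregate scans every row on every change: O(n) per edit.
function recomputeOpenCount(issues: { open: boolean }[]): number {
  return issues.filter((i) => i.open).length;
}

// Maintaining it incrementally applies only the delta: O(1) per edit.
class OpenCount {
  private count = 0;
  apply(change: { before?: { open: boolean }; after?: { open: boolean } }) {
    if (change.before?.open) this.count -= 1;
    if (change.after?.open) this.count += 1;
  }
  get value(): number {
    return this.count;
  }
}
```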
comment in response to
post
It keeps some intermediate on-disk state, yes. You can read more about the general approach here: www.vldb.org/pvldb/vol16/.... We did not actually use DBSP, but were inspired by it and use some similar techniques.
comment in response to
post
Zero is currently in alpha but it's maturing fast. We already have a few customers in production :).
There's some really cool stuff coming up feature-wise and from there we plan to go to beta over the summer.
Curious? Adventurous? Learn more at zerosync.dev.
comment in response to
post
Try it out yourself:
bugs.rocicorp.dev
We built our own Linear-style bug tracker with Zero to dogfood it, and have used it as our actual bug tracker for months.
comment in response to
post
And, because the data is local, you can *mutate* it. Zero provides a synchronous local write API. Write directly to the client-side data and the UI updates *instantly*. Changes are synced to the server in the background. Conflicts are resolved with www.gabrielgambetta.com/client-side-....
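A tiny sketch of what that feels like (the mutation API shape here is hypothetical, not necessarily Zero's exact surface):

```typescript
// Illustrative sketch – the mutation API shape here is hypothetical.
declare const z: {
  mutate: { issue: { update(args: { id: string; status: string }): void } };
};

// Synchronous local write: the client-side store changes immediately and the
// UI re-renders right away; the change is pushed to the server in the background.
z.mutate.issue.update({ id: 'issue-42', status: 'closed' });
```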
comment in response to
post
The end result is a really fun way to build web apps.
You do a query direct from the client. Zero answers the query and keeps it up to date efficiently.
But better, if you do another query that overlaps with the first, Zero reuses the already synced data to answer the new query ✨instantly✨.
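Something like this (query-builder names are illustrative):

```typescript
// Illustrative sketch – the query-builder names here are hypothetical.
declare const z: any;
declare const currentUserId: string;

// First query: sync the 50 most recently modified open issues.
const openIssues = z.query.issue
  .where('status', 'open')
  .orderBy('modified', 'desc')
  .limit(50);

// Overlapping second query: open issues assigned to me. Zero can answer it
// from rows already synced for the first query, so the result shows up
// instantly while it is confirmed against the server in the background.
const myIssues = z.query.issue
  .where('status', 'open')
  .where('assignee', currentUserId);
```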
comment in response to
post
IVM has been an area of active development in backend databases, most recently by projects like www.feldera.com and github.com/mit-pdos/noria.
We took these same ideas and applied them to sync.
comment in response to
post
But doing this in a classic database would be insanely expensive. It would basically mean re-running multi-MB complex joins over and over, anytime ~any row changes.
Worse, since every user has their own permissions, when a single row changes, ALL users' queries must be recalculated.
comment in response to
post
For a sync engine, we really need IVM because of permissions. We want to sync megabytes of data to the client, but only what the user has access to. These permissions naturally take the form of a set of complex queries.
What we really want is to sync the result of these big queries continually to the client.
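For example, a permission rule often amounts to a multi-way join, something like this schematic query (not Zero's actual permission syntax):

```typescript
// Schematic example – not Zero's actual permission syntax.
// "A user may see an issue if they are a member of the issue's project."
const visibleIssuesForUser = `
  SELECT issue.*
  FROM issue
  JOIN project        ON project.id = issue.project_id
  JOIN project_member ON project_member.project_id = project.id
  WHERE project_member.user_id = $currentUserId
`;
// Syncing "everything this user is allowed to see" means keeping the result
// of queries like this continuously up to date, separately for every user.
```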
comment in response to
post
Zero is a ✨generalized✨ sync engine that should work for a wide variety of apps, and is enabled by IVM.
In traditional dbs, you do a single query and get a single result. If data changes, you re-run the query. Even in "realtime" databases, what's usually happening is re-running queries.
comment in response to
post
This sounds completely bonkers to me. Array is going to be way better. Unless you need to splice into it often, in which case Array is still probably better due to its native implementation and data locality effects.
comment in response to
post
It uses server reconciliation. This is a CRDT-like technique that has some nice advantages when you can rely on a central server. It is explained in this blog post from our previous project Reflect: rocicorp.dev/blog/ready-p...
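The core loop, sketched very roughly (not the actual Reflect/Zero code):

```typescript
// Rough sketch of server reconciliation – not the actual implementation.
// The client applies mutations speculatively, keeps them in a pending queue,
// and rebases them on top of each authoritative state it gets from the server.
type AppState = Record<string, unknown>;
type Mutation = { id: number; apply(state: AppState): AppState };

let serverState: AppState = {};
let pending: Mutation[] = [];

function replayPending(base: AppState): AppState {
  return pending.reduce((state, m) => m.apply(state), base);
}

function localMutate(m: Mutation, render: (s: AppState) => void) {
  pending.push(m);
  render(replayPending(serverState)); // optimistic: UI updates immediately
}

function onServerUpdate(
  newState: AppState,
  lastAppliedMutationId: number,
  render: (s: AppState) => void,
) {
  serverState = newState;
  // Drop mutations the server has already applied, replay the rest on top.
  pending = pending.filter((m) => m.id > lastAppliedMutationId);
  render(replayPending(serverState));
}
```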
comment in response to
post
The canary script builds Zero from trunk in a clean directory and publishes our npm package and Docker image to these "canary" tags. This lets us get fixes out to users fast without a full release, and also gets us early testing of new features before release.
comment in response to
post
Yep, npm has a feature where you can tag builds. The "latest" tag is the default and is what people get when they install without a version specifier. But you can create other tags. We have a "canary" tag we push to between releases, which people can install like "@rocicorp/zero@canary".
comment in response to
post
Well, that and a Greg :). Both required to really make this work well.
comment in response to
post
Featuring: Hakan Shehu, Colanode, Michiel de Jong, Unhost, @aaronboodman.com, Zero Sync, Carl Assmann, @anselm.io, @jazz.tools, JSnation, @alexgarcia.xyz, @restatedev.bsky.social, @tom-delalande.bsky.social, Audrey Sitnik, @fosdem.bsky.social, TriplitDB, playbit, @jacobbolda.com...
comment in response to
post
Long version here: www.youtube.com/watch?v=rqOU...
Short version is existing sync engines really want to download all or most data to the client ahead of time. That's not realistic for the majority of apps. There are attempts at partial sync, but they are too hard to use.
comment in response to
post
We made it. Work on Zero continues.
comment in response to
post
I think these effects would have happened eventually in Java and C++ too, but the rate of change in those ecosystems is so much lower that it just happened first in JS land.
comment in response to
post
The open source ecosystem is far more developed and active for JS, creating an environment where evolution and competition happen very quickly on many dimensions.
Type design quickly became a way libraries could differentiate.
comment in response to
post
I think it's because the scale of the JavaScript market is many times that of C++ or Java, making this essentially the first time these kinds of type systems are mass market.
Also…
comment in response to
post
Don’t be ridiculous! Molokai 2026?🙃
comment in response to
post
Microsoft of course didn't invent these kinds of type systems, but TS is the first mass-market deployment of them, so this is the first time these design aspects have really developed.
comment in response to
post
Types are like UI for programmers. They are how developers interact with your API. And just like UI design, they have a huge impact on how devs perceive your product, independent of its reality. And just like UI design, a whole specialty of TypeScript programming has sprung up to optimize this experience.
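A toy TypeScript example of what I mean (purely illustrative): the same runtime behavior, very different experience for the caller depending on the types.

```typescript
// Illustrative only. Both functions do the same thing at runtime,
// but the second is much nicer to *use*.

// Weak types: callers get `any` back and no editor help.
function getFieldLoose(row: any, field: string): any {
  return row[field];
}

// Designed types: field names autocomplete and the result type is inferred.
function getField<T, K extends keyof T>(row: T, field: K): T[K] {
  return row[field];
}

const issue = { id: 'i1', open: true, title: 'Crash on save' };
const title = getField(issue, 'title'); // inferred as string
// getField(issue, 'titel');            // compile error – typo caught
```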
comment in response to
post
When paired with React, sync engines bring back a lot of the intense rush of productivity from its first days.
In my talk we'll explore a number of emerging sync engines, find the common threads, and compare the differences.
Hope to see you there!
comment in response to
post
Sync engines solve this. They are a dramatically different way to build web apps, that abstract away network communication and all the complexity that goes with it – caching, invalidation, asynchrony, error states, optimistic results, rollbacks, etc.
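Schematically (not any particular library's API):

```typescript
// Schematic contrast – not any particular library's API.

// Fetch-based: the app owns loading, errors, caching, and invalidation.
async function loadIssues(setState: (s: unknown) => void) {
  setState({ loading: true });
  try {
    const res = await fetch('/api/issues?status=open');
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    setState({ loading: false, data: await res.json() });
  } catch (err) {
    setState({ loading: false, error: err });
  }
  // ...plus cache invalidation, refetch-on-focus, optimistic updates, rollbacks...
}

// Sync-engine style: the query is the interface. The engine keeps the result
// current, and the data is already local, so there's no loading state to manage.
declare const z: any;
const openIssues = z.query.issue.where('status', 'open');
```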
comment in response to
post
Over time, React and the ecosystem around it have accumulated complexity. It was for good reason – in pursuit of performance.
But this complexity ate away at the core value prop: delivering more fun per second than alternatives.
comment in response to
post
Right now it's *only* configurable. The working set is exactly the queries you display in the UI. You can also preload() queries – those go to IDB, not memory.
In the future we imagine being smarter and keeping recently used queries around for a bit in case needed later. But we don’t do that now.
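Roughly (names are approximate):

```typescript
// Illustrative sketch – names are approximate.
declare const z: any;

// Queries bound to the UI define the in-memory working set.
const visible = z.query.issue.where('status', 'open').limit(50);

// preload() syncs a larger result set down to IndexedDB without holding it
// in memory, so later queries against that data can resolve locally.
z.query.issue.orderBy('modified', 'desc').limit(5000).preload();
```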
comment in response to
post
It doesn't. Zero implements its own storage layer that runs mostly in memory and treats IDB as dumb block storage.
comment in response to
post
Thanks for the feedback!
comment in response to
post
Top-level await has been fixed in 0.10. It's possible we forgot to close the bug :(.
A setup guide for adding Zero to an existing JS project is a top priority for the new year.