martin-laxenaire.fr
Freelance front-end developer. Javascript, WebGL, WebGPU and stuff. http://martin-laxenaire.fr
Author of http://curtainsjs.com and https://martinlaxenaire.github.io/gpu-curtains/ - also an @okaydev.co
89 posts
599 followers
129 following
comment in response to
post
Damn, that sucks.
comment in response to
post
Also, it's working out of the box for more complex scenarios like PBR materials or shadows, and we can even use render bundles all the way! 💥
Here's a test scene for you to play with: martinlaxenaire.github.io/gpu-curtains...
comment in response to
post
By render passes I mean 'beginRenderPass' calls :)
Anyway I got you now, thanks for the explanation (and thanks to @gnikoloff.bsky.social as well).
Using render bundles for shadow maps has been on my to-do list for a while. I definitely need to try that.
comment in response to
post
Hmm, ok, but since multiview isn't supported yet, that means you're creating 16 render passes (one for each depth layer) and executing your bundle 16 times, right?
comment in response to
post
But I don't get how you can do it in a single pass. I still need 6 render passes to render point shadows to the cube depth textures for example. 🤔
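For reference, the six passes come from the six cube face orientations. Here's a minimal sketch (plain JS, no library — the helper name and the look-at representation are hypothetical, not from gpu-curtains) of the view setups you'd build, one per `beginRenderPass` call targeting a cube depth face:

```javascript
// Hypothetical helper: the six look directions and up vectors used to
// render a point-light shadow map, one render pass per cube face.
// Face order follows the usual +X, -X, +Y, -Y, +Z, -Z convention.
function cubeFaceViews(lightPos) {
  const faces = [
    { target: [1, 0, 0], up: [0, -1, 0] },  // +X
    { target: [-1, 0, 0], up: [0, -1, 0] }, // -X
    { target: [0, 1, 0], up: [0, 0, 1] },   // +Y
    { target: [0, -1, 0], up: [0, 0, -1] }, // -Y
    { target: [0, 0, 1], up: [0, -1, 0] },  // +Z
    { target: [0, 0, -1], up: [0, -1, 0] }, // -Z
  ];
  return faces.map(({ target, up }) => ({
    eye: lightPos,
    // Look-at target: light position offset by the face direction.
    center: lightPos.map((v, i) => v + target[i]),
    up,
  }));
}

const views = cubeFaceViews([0, 2, 0]);
console.log(views.length); // 6
```

Each of these six view matrices would feed its own depth-only render pass (or, with multiview support, a single pass over a layered depth texture).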
comment in response to
post
So you're rendering one big shadow map, split into a depth texture array with 16 layers?
Then the idea for CSM would be to split it into, say, 4 depth texture arrays of decreasing width and height, based on the distance to the camera?
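For what it's worth, a common way to pick the cascade boundaries for CSM is to blend uniform and logarithmic splits between the near and far planes (the practical split scheme). A minimal sketch — the function and its parameters are illustrative, not from any library discussed here:

```javascript
// Hypothetical sketch of cascade split distances for CSM: a blend of
// logarithmic and uniform splits between the camera near and far planes.
// lambda = 1 is fully logarithmic, 0 is fully uniform.
function cascadeSplits(near, far, cascades, lambda = 0.5) {
  const splits = [];
  for (let i = 1; i <= cascades; i++) {
    const p = i / cascades;
    const log = near * Math.pow(far / near, p);
    const uniform = near + (far - near) * p;
    splits.push(lambda * log + (1 - lambda) * uniform);
  }
  return splits;
}

// e.g. cascadeSplits(0.1, 100, 4) gives 4 increasing far distances,
// one per cascade, the last one landing on the camera far plane.
```

Each cascade then renders the slice of the frustum up to its split distance into its own (progressively lower-resolution) depth layer.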
comment in response to
post
New in Fragment: sketch asynchronous initialisation 👯
init() now supports async functions! While load() can still be used as before, this simplifies sharing resources between the two functions.
Also getting ready for p5.js 2.0 and WebGPU requestAdapter() 👀
comment in response to
post
Yes!! 🔥
comment in response to
post
Wow. I might as well quit webgpu and start learning CSS again.
(Jokes aside, it's impressive to see how fast CSS is evolving nowadays.)
comment in response to
post
The examples deliberately focus on showcasing the different features rather than, like, "being beautiful". So I don't really know. 😅
Whatever you choose, I'm eager to see how it turns out. 🙏
comment in response to
post
It would depend on what you'd like to highlight in the first place? I guess it's about mixing DOM and 3D/WebGPU?
The homepage martinlaxenaire.github.io/gpu-curtains/ or martinlaxenaire.github.io/gpu-curtains... or martinlaxenaire.github.io/gpu-curtains... maybe?
comment in response to
post
Oh, my bad! 😅
But this is super cool too! 💥
gpu-curtains DOMObject3D/DOMMesh don't take CSS transforms into account though, but it's a pretty nice idea.
comment in response to
post
Oh, a web component based shadertoy, neat! 🚀
As for gpu-curtains, most of the DOM syncing happens here (see 'updateModelMatrix', 'documentToWorldSpace' and 'computeWorldSizes'): github.com/martinlaxena...
It's basically DOM to world space computations.
comment in response to
post
Hey Adam, sure, of course!
Let me know if you need anything.
comment in response to
post
Well, there it is: github.com/martinlaxena...
comment in response to
post
Wondering whether I should create a GitHub repo for the color palette generator. Would any of you be interested? 🤔
comment in response to
post
That's it!
I hope you liked this little technical breakdown. Let me know if you have any questions.
Also, I'm available for freelance work so hit me up if you've got any #webgl or front-end needs!
museedelaplaisance.com
9/9
comment in response to
post
Based on the current quality, we can enable or disable features, adjust the DPR and render target resolutions, the camera far value, and so on.
8/9
comment in response to
post
4. Performance
To get the best performance across all devices, I once again had to come up with creative ideas.
I wrote a helper that constantly watches the FPS over a given period of time and uses it to set the quality of the scene, from 1 (poor) to 10 (excellent).
7/9
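A minimal sketch of what such a helper could look like — the class name, the sampling window, and the FPS-to-quality mapping are all assumptions for illustration, not the site's actual code:

```javascript
// Hypothetical sketch of an FPS watcher that maps the average frame rate
// over a sampling window to a quality level from 1 (poor) to 10 (excellent).
class QualityWatcher {
  constructor({ sampleCount = 60, targetFPS = 60 } = {}) {
    this.sampleCount = sampleCount;
    this.targetFPS = targetFPS;
    this.samples = [];
    this.quality = 10;
  }

  // Call once per frame with the elapsed frame time in milliseconds.
  addFrame(deltaMs) {
    this.samples.push(deltaMs);
    if (this.samples.length >= this.sampleCount) {
      const avgMs = this.samples.reduce((a, b) => a + b, 0) / this.samples.length;
      const fps = 1000 / avgMs;
      // Scale the measured FPS against the target into a 1..10 quality.
      this.quality = Math.max(1, Math.min(10, Math.round((fps / this.targetFPS) * 10)));
      this.samples = [];
    }
    return this.quality;
  }
}

const watcher = new QualityWatcher({ sampleCount: 3 });
watcher.addFrame(16.67);
watcher.addFrame(16.67);
console.log(watcher.addFrame(16.67)); // steady ~60 FPS → quality 10
```

The returned quality level is then what drives the feature toggles, DPR and render target resolutions mentioned above.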
comment in response to
post
The idea is to render the whole scene except the particles into a render target, then draw the whole scene again. In the particles' fragment shader we then use the render target's depth and opacity to adjust the particles' alpha, as seen in this codepen: codepen.io/martinlaxena...
6/9
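The depth-based part of that alpha adjustment boils down to a "soft particles" fade. A plain JS stand-in for the shader logic (the function name and fade parameter are hypothetical, just to show the math):

```javascript
// Hypothetical sketch (plain JS stand-in for the fragment shader logic):
// fade a particle out as it gets close to the scene geometry stored in the
// render target's depth buffer, producing "soft" intersections.
function softParticleAlpha(sceneDepth, particleDepth, fadeDistance) {
  // How far the particle sits in front of the scene geometry.
  const diff = sceneDepth - particleDepth;
  // 0 when touching (or behind) the geometry, 1 once fully fadeDistance away.
  return Math.min(1, Math.max(0, diff / fadeDistance));
}

console.log(softParticleAlpha(10, 9.5, 1)); // 0.5: halfway through the fade
```

In the actual shader this factor would be multiplied into the particle's output alpha, which is what removes the hard depth-intersection edges.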
comment in response to
post
3. Transparency
Almost all the meshes drawn are transparent. This can easily lead to depth-related issues, with objects being drawn on top of each other when they shouldn't be, especially with particles. At that time I didn't know about OIT, so I came up with my own solution.
5/9
comment in response to
post
2. Instancing
The particles and waves representing the sea are both instanced using a tile system. As the camera moves, the tiles move accordingly, allowing us to draw only what's currently visible in the camera frustum.
4/9
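The core trick of such a tile system is wrapping each tile's position around the camera so a fixed grid always surrounds it. A minimal sketch in one dimension — the function and its parameters are illustrative, not the site's actual implementation:

```javascript
// Hypothetical sketch of a tile system for instanced sea tiles: each tile
// is wrapped around the camera so only a fixed grid around it is ever drawn.
// One axis shown; the same wrap applies independently per axis.
function wrapTile(tileCoord, cameraCoord, gridSize, tileSize) {
  const extent = gridSize * tileSize;
  // Offset of the tile relative to the camera...
  let offset = tileCoord * tileSize - cameraCoord;
  // ...wrapped into [-extent / 2, extent / 2), handling negative remainders.
  offset = ((offset % extent) + extent * 1.5) % extent - extent / 2;
  return cameraCoord + offset;
}

// A tile authored at x = 0 gets repositioned near a camera at x = 100,
// inside the 4-tile window [80, 120) that surrounds it.
console.log(wrapTile(0, 100, 4, 10)); // 80
```

Each instance's per-tile offset would be recomputed this way every frame (or in the vertex shader), so the grid silently follows the camera instead of covering the whole sea.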
comment in response to
post
For a really soothing ambience, we've added a day/night cycle that subtly affects the color scheme and post processing effects like lens flare.
3/9
comment in response to
post
1. Concept
The idea is that the user navigates a quiet sea and sails through the history of recreational boating. Hovering over markers along the path invites them to learn more about said history. The navigation path is randomly generated each time for a unique experience.
2/9
comment in response to
post
This means the library has evolved a lot since I wrote the @okaydev.co tutorials, including multiple breaking changes.
I've written some up-to-date @codepen.io demos showing you how to implement the examples with the latest lib version.
Part 3: codepen.io/martinlaxena...
Part 4: codepen.io/martinlaxena...
comment in response to
post
Thanks Ben! 🤗
comment in response to
post
🙌
comment in response to
post
Look at how easy it'd be to create those nice soap bubbles: