I know some people have coded up raytracers with GLSL texture shaders, but raytracers are massively parallel algorithms that need no shared state (and the entire scene is tiny in those examples). Would this new API enable shared state, to some extent? For example, someone may want to numerically solve the differential equations of liquid flow. This usually involves big lattices, where the interiors of the lattice blocks can be updated in parallel but the edges require some data sharing. Same question for multiplying big matrices, since many numerical methods reduce the task to multiplying huge matrices.
We are exposing writable storage buffers and textures in all shader stages (there is a caveat discussion [1] on the topic though), which means you can write data out directly. Sharing data between shader threads can be done efficiently in compute shaders (via workgroup shared memory), which are included in the core of WebGPU.
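To make the workgroup-shared-memory point concrete, here is a rough sketch of the classic tiled matrix multiply as a GLSL compute shader, the kind of thing wgpu accepts after compilation to SPIR-V. The binding layout, buffer names, and the assumption that `n` is a square-matrix dimension divisible by 16 are all illustrative, not from any particular project:

```glsl
#version 450
layout(local_size_x = 16, local_size_y = 16) in;

layout(set = 0, binding = 0) readonly buffer BufA { float a[]; };
layout(set = 0, binding = 1) readonly buffer BufB { float b[]; };
layout(set = 0, binding = 2) buffer BufC { float c[]; };
layout(push_constant) uniform Params { uint n; }; // n x n matrices, n % 16 == 0

// Workgroup shared memory: each 16x16 workgroup cooperatively
// stages tiles of A and B here instead of re-reading global memory.
shared float tileA[16][16];
shared float tileB[16][16];

void main() {
    uint row = gl_GlobalInvocationID.y;
    uint col = gl_GlobalInvocationID.x;
    float acc = 0.0;
    for (uint t = 0; t < n; t += 16) {
        // Each thread loads one element of each tile.
        tileA[gl_LocalInvocationID.y][gl_LocalInvocationID.x] =
            a[row * n + (t + gl_LocalInvocationID.x)];
        tileB[gl_LocalInvocationID.y][gl_LocalInvocationID.x] =
            b[(t + gl_LocalInvocationID.y) * n + col];
        barrier(); // wait until the whole workgroup has filled both tiles
        for (uint k = 0; k < 16; ++k) {
            acc += tileA[gl_LocalInvocationID.y][k] * tileB[k][gl_LocalInvocationID.x];
        }
        barrier(); // don't overwrite the tiles while neighbors still read them
    }
    c[row * n + col] = acc;
}
```

The same pattern covers the lattice case from the question: each workgroup stages its block plus a halo of edge cells into shared memory, synchronizes with `barrier()`, and updates the interior. Note that sharing only works *within* a workgroup; across workgroups you still have to round-trip through storage buffers between dispatches.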
P.S. vange-rs [2] is written on wgpu-rs and uses ray-tracing in fragment shaders for the terrain. I hope to run it in the browser one day!
Yes, we are fully aware of that. A line must be drawn somewhere between behavior we can make portable and behavior that is platform-dependent. Enforcing any ordering on storage writes is simply infeasible.
Shameless plug of my own WebGL-based ray/path tracer: https://github.com/apbodnar/FSPT Compute shaders will allow some performance wins over my current fragment-shader version: a ray binning/sorting pass can shoot rays coherently, which gives better cache behavior and lets threads use shared shader state.