Pros: The samples play well, look fun, and it looks like a great solution to small physics demos in JS. Great job!
Cons: Playing with the Stress demos (especially #2) really shows the slowdowns and downsides of using a JS-based engine. At 450 boxes (+ 4 bounding walls), it runs at 20fps on my MacBook, and that's with a lot of jitter and overlap. Not as fun.
Edit: Significantly better with the WebGL renderer (~32fps), and with sleeping + the WebGL renderer (~34fps).
I wish I could see a physics engine using WebGL and GPU acceleration for the web. This one is a great start if the author were to take it further and implement some of the logic with shaders.
> Cons: Playing with the Stress demos (especially #2) really shows the slowdowns and downsides of using a JS-based engine. At 450 boxes (+ 4 bounding walls), it runs at 20fps on my MacBook, and that's with a lot of jitter and overlap. Not as fun.
Have you tried enabling sleeping? Even most AAA games don't have anywhere near that number of bodies active at once. Sure, the engines can handle it, but a stack of bodies like the stress demos is the absolute worst case: you have to iteratively resolve a chain of contacts, in this case a few hundred of them. Most physics engines also benefit hugely from multithreading, something JS doesn't provide.
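The sleeping idea above can be sketched as a simple heuristic: a body that stays nearly motionless for enough consecutive frames is marked asleep and skipped by the solver until something moves it. The threshold values and body fields here are illustrative, not any particular engine's API:

```javascript
// Minimal sleeping heuristic: bodies at rest long enough go to sleep
// and are skipped by integration (and, in a real engine, by the solver).
const SLEEP_SPEED_SQ = 0.0001; // squared speed below which a body "rests"
const SLEEP_FRAMES = 60;       // consecutive rest frames before sleeping

function updateSleep(body) {
  const speedSq = body.vx * body.vx + body.vy * body.vy;
  if (speedSq < SLEEP_SPEED_SQ) {
    body.restFrames += 1;
    if (body.restFrames >= SLEEP_FRAMES) body.sleeping = true;
  } else {
    body.restFrames = 0;
    body.sleeping = false; // any significant motion wakes the body
  }
}

function step(bodies, dt) {
  for (const body of bodies) {
    updateSleep(body);
    if (body.sleeping) continue; // sleeping bodies cost ~nothing per frame
    body.x += body.vx * dt;
    body.y += body.vy * dt;
  }
}
```

In a settled stack like the stress demo, most bodies qualify after a second or so, which is why turning sleeping on helps the frame rate.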
> I wish I could see a physics engine using WebGL and GPU acceleration for the web. This one is a great start if the author were to take it further and implement some of the logic with shaders.
That was tried on PCs and consoles, and most game developers are going back to using the CPU for physics calculations and the GPU for rendering. Using WebGL with this particular demo shouldn't increase performance if the bottleneck is the physics simulation.
GPU acceleration isn't going to be practical until beyond WebGL 2, when we get access to compute shaders.
You can technically do compute now by expressing your data as textures and performing rendering passes over it, but trying to do anything non-trivial that way will quickly lead to madness.
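The texture trick described above boils down to ping-ponging between two buffers: each "render pass" reads the previous state texture and writes the next one, then the two swap roles. Here's a CPU-side sketch of that data flow, with plain `Float32Array`s standing in for RGBA float textures (no actual WebGL, just the same structure a fragment-shader pass would have):

```javascript
// Each "texel" holds one particle's state: [x, y, vx, vy] = one RGBA pixel.
const FLOATS_PER_TEXEL = 4;

// One "pass": read every texel from src, write the updated texel to dst,
// exactly as a fragment shader would in a render-to-texture pass.
function integratePass(src, dst, count, dt, gravity) {
  for (let i = 0; i < count; i++) {
    const o = i * FLOATS_PER_TEXEL;
    const vy = src[o + 3] + gravity * dt;      // integrate velocity
    dst[o + 0] = src[o + 0] + src[o + 2] * dt; // x += vx * dt
    dst[o + 1] = src[o + 1] + vy * dt;         // y += vy * dt
    dst[o + 2] = src[o + 2];
    dst[o + 3] = vy;
  }
}

// Ping-pong: swap the two buffers each frame instead of copying them.
function simulate(state, count, frames, dt, gravity) {
  let src = state;
  let dst = new Float32Array(state.length);
  for (let f = 0; f < frames; f++) {
    integratePass(src, dst, count, dt, gravity);
    [src, dst] = [dst, src];
  }
  return src; // the most recently written buffer
}
```

Simple integration like this maps cleanly onto a shader; the madness starts when passes need to communicate (contacts, constraints), because a fragment can only write its own pixel.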
I mean, this is what I'm looking for. Sure, it would be madness, but if you have 450 bodies represented by, say, 10 floats each (a made-up number), that comes to about 4500 pixels of values, something a GPU is fine at.
In 3D you normally represent a body's transform with a 4x4 matrix, which can easily be reduced to 6 floats (x, y, z translation and x, y, z rotation). The problem with doing that little work is that you'll very quickly be bottlenecked by copying memory to/from the GPU. If you want to do any raycasts, for instance, or query for occlusion/bounds testing, you need to upload data to the GPU, run a (potentially very slow, serial) query, then copy the result back. The results normally aren't worth it.
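To put a number on the copy side of that trade-off: packing each body's pose as the six floats just described, 450 bodies fit in about 10 KB, so the per-frame payload is trivial and the cost is dominated by the fixed round-trip latency of each transfer, not bandwidth. A sketch of the packing (the layout and field names are illustrative):

```javascript
// Pack rigid-body poses as 6 floats each: x, y, z translation plus
// x, y, z Euler rotation. This flat buffer is what you'd upload per frame.
const FLOATS_PER_BODY = 6;
const BYTES_PER_FLOAT = 4;

function packPoses(bodies) {
  const buf = new Float32Array(bodies.length * FLOATS_PER_BODY);
  bodies.forEach((b, i) => {
    buf.set([b.x, b.y, b.z, b.rx, b.ry, b.rz], i * FLOATS_PER_BODY);
  });
  return buf;
}

// Size of one upload (or one readback) for a given body count.
function transferBytes(bodyCount) {
  return bodyCount * FLOATS_PER_BODY * BYTES_PER_FLOAT;
}
```

`transferBytes(450)` is 10,800 bytes per direction per frame: the data is tiny, which is exactly why the stall of the round-trip itself, not the byte count, decides whether the GPU detour pays off.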
The question then becomes how slow is copying to/from the GPU versus doing the computation on the CPU when it comes to 100s/1000s of active bodies.
I agree that normally (small examples) it wouldn't be worth it, but for any large simulation, or, say, the video games of the future, I have yet to see someone attempt this. Perhaps it truly isn't beneficial enough... but who knows.
> The question then becomes how slow is copying to/from the GPU versus doing the computation on the CPU when it comes to 100s/1000s of active bodies.
The amount of time spent inside a GPGPU kernel updating 1000 rigid bodies wouldn't even match the time it takes to copy the data back and forth. You've also got to consider the acceleration structure used for collision detection: if you have a hierarchical tree-like structure (BVH, BSP tree), how do you update it in parallel? You need to spin off thousands of tasks for it to be worth running on the GPU. And if you have that many dynamic bodies, you're probably going to be draw-call limited trying to render them, unless they're exceptionally simple objects (particles, for example, which are already GPU-accelerated in modern game engines).
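The tree-update problem mentioned above is concrete: refitting a BVH after bodies move processes children before parents, so every level depends on the one below it, which is what makes a naive parallel update hard. A minimal sequential refit over 2D axis-aligned bounding boxes (the node layout here is made up for illustration):

```javascript
// Node: { min: [x, y], max: [x, y], left, right }; leaves carry a body index.
// Bodies are squares of half-extent `half` centered at (x, y).
function refit(node, bodies) {
  if (node.body !== undefined) {
    // Leaf: tight box around the (moved) body.
    const b = bodies[node.body];
    node.min = [b.x - b.half, b.y - b.half];
    node.max = [b.x + b.half, b.y + b.half];
  } else {
    // Internal node: must wait for BOTH children to finish -- the data
    // dependency that forces bottom-up, level-by-level processing on a GPU.
    refit(node.left, bodies);
    refit(node.right, bodies);
    node.min = [Math.min(node.left.min[0], node.right.min[0]),
                Math.min(node.left.min[1], node.right.min[1])];
    node.max = [Math.max(node.left.max[0], node.right.max[0]),
                Math.max(node.left.max[1], node.right.max[1])];
  }
  return node;
}
```

On a CPU this is one cheap recursive walk; on a GPU each tree level is a separate dispatch with a sync in between, so a shallow scene with only hundreds of bodies can't feed enough parallel work to hide that overhead.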
> I agree that normally (small examples) it wouldn't be worth it, but for any large simulations, and say, video games of the future, I have yet to see someone attempt to do this. Perhaps it truly isn't beneficial enough... but who knows.
In modern AAA video games, the active body count in a scene is no greater than the hundreds. Think of a scene from Assassin's Creed or Battlefield, and think about how many truly dynamic things there are in the level. Chances are there's you (the player), a handful of other players, a handful of explosive barrels, maybe 5-10 vehicles, and a few extras for bodies. For something like Assassin's Creed, the computation is most likely in the animation, where hundreds of physics raycasts from the player's hands to the various "climbable" points are performed, along with the updating of a very detailed skeletal mesh. Modern games already utilise almost 100% of the GPU time on rendering: lighting, AA, shadows, ambient occlusion, transparency, reflections. Adding more stress to the GPU is unnecessary, really, considering that the computations for a few hundred bodies are so short that the bottleneck will be copying the data over and back.
I wrote my master's thesis on GPGPU-accelerated bounding volume hierarchies (a common structure used in raytracing and in collision detection). The overhead of a small copy stalling the GPU is quite severe. It's worth it if you can do all your work on the GPU without having to copy back, but that's currently not feasible for interactive simulations.
Anything that uses NVIDIA PhysX will have GPU physics in some sense, but very few (if any) of those actually use GPU acceleration for their rigid body simulations.
Pixi is the rendering engine used by Phaser. More relevant are the different physics engines Phaser supports (Box2D, P2, and its own lightweight arcade physics engine). Also, Phaser is TypeScript but distributed as JS. You can link it directly with TypeScript too, which is why I like it a lot.
Phaser.js was at some point in the past written in TypeScript; the current version is implemented in JS, and you can use it from TS if you wish. (Also, they will probably ditch the Pixi renderer in the next Phaser version and use their own.)