The 4000 series certainly did: "shader execution reordering" gave a meaningful uplift to tasks that "underutilized warp units due to scattered useful pixels".
It's a very honorable mention in my eyes, because it's more deserving of the title of "first independent graphics unit" than the GeForce 2. (it did more than just blast already-projected triangles at the screen)
Not that it was an awesome product, but it certainly was flexible.
A good (albeit tiny) demo of that: vQuake has the same wobbling water distortion as software-rendered Quake, but rendered entirely through the GPU. With some interpretation this could be called the "caveman discovered fire" moment of the pixel-shading era.
Intel support has been mild to non-existent in the VR space, unfortunately. Given the very finicky latency and engine-support requirements, I wouldn't bet on a great experience, but I hope for the best for more competition in this market. (even AMD has a lot of caveats compared to NVIDIA)
Footnotes:
* Critical "as low as it can be" low-latency support on Intel Xe is still not as mature as NVIDIA's; AMD was lagging behind until recently.
* Not sure about "multiprojection" rendering support on Intel; lacking it can kill VR performance or break compatibility. (optimized VR games often rely on it)
It looks like when Intel jumped into this space, they tried to do everything at once. It didn't work well; they were playing catch-up against some very mature systems. They are now being much more selective and restrained. The downside is that things like VR support get put on the back burner for years.
Good for most people, but if you need that functionality and they don't have it, go somewhere else.
I had brainstormed a similar problem a bit (non-world-aligned voxels, i.e. "dynamic debris" in a destructible environment). One of the ideas that came up was to use a physics solver like the PhysX FleX SDK:
https://developer.nvidia.com/flex
* 12 years old, but it still runs on modern GPUs and is quite interesting in itself as a demo.
* If you run it, consider turning on the "debug view"; it will show the collision primitives instead of the shapes.
General-purpose physics engine solvers aren't that GPU-friendly, but if the only primitive shape being simulated is the sphere (cubes are made of a few small spheres; everything is a bunch of spheres), the efficiency of the simulation improves quite a bit. (no need for conditional handling of collision pairs like sphere+cube, cube+cylinder, cylinder+sphere and so on)
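A minimal sketch of the "everything is spheres" idea: with a single primitive, every pair goes through the identical distance test, so there is no per-shape-pair dispatch. This is illustrative Python, not anything from FleX:

```python
import math

def resolve_sphere_pair(p1, p2, r):
    """Push two equal-radius spheres apart if they overlap.
    The same uniform test handles every pair in the simulation."""
    dx = [b - a for a, b in zip(p1, p2)]
    dist = math.sqrt(sum(d * d for d in dx))
    overlap = 2 * r - dist
    if overlap > 0 and dist > 0:
        # Move each sphere half the overlap along the contact normal.
        n = [d / dist for d in dx]
        p1[:] = [a - 0.5 * overlap * ni for a, ni in zip(p1, n)]
        p2[:] = [b + 0.5 * overlap * ni for b, ni in zip(p2, n)]

a = [0.0, 0.0, 0.0]
b = [1.0, 0.0, 0.0]
resolve_sphere_pair(a, b, 1.0)  # radius-1 spheres overlapping by 1 unit
```

On a GPU the same uniformity pays off again: every thread runs the same code path, so warps stay coherent.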
I wondered if it could be solved by having a single sphere per voxel, considering only the voxels at the surface of the physically simulated object.
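The surface-only part can be sketched like this: call a voxel "surface" if any of its six face neighbors is empty, and seed collision spheres only there. Purely illustrative, with a hypothetical set-of-coordinates representation:

```python
def surface_voxels(solid):
    """solid: set of (x, y, z) integer voxel coordinates.
    Returns only the voxels with at least one empty face neighbor."""
    neighbors = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                 (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    return {
        v for v in solid
        if any((v[0] + dx, v[1] + dy, v[2] + dz) not in solid
               for dx, dy, dz in neighbors)
    }

# A 3x3x3 solid cube: only the single interior voxel gets skipped.
cube = {(x, y, z) for x in range(3) for y in range(3) for z in range(3)}
shell = surface_voxels(cube)
```

For a large solid chunk this drops the sphere count from O(n^3) to O(n^2), which is most of the win for debris-sized objects.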
From what I've seen with "low end" SSDs like the 120 GB SATA SanDisk ones, under Windows in heavy, near-constant paging loads, they exceed their manufacturer's rated lifetime TBW by quite a lot before actually producing filesystem errors.
I can see this being a weaker spot in the durability of this device, but it could certainly still take a few years of abuse before anything breaks.
An outdated study (2015), but in line with the "low end" SSDs I mentioned.
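As a back-of-envelope on why "a few years" is plausible even at the rated limit: the numbers below are assumptions for illustration (80 TBW is a typical rating for a small consumer SATA drive, 50 GB/day stands in for a heavy paging workload), not from any datasheet.

```python
# Hypothetical figures, chosen only to show the shape of the math.
rated_tbw_tb = 80        # assumed endurance rating for a 120 GB drive, in TB written
daily_writes_gb = 50     # assumed heavy swap/paging workload

days_to_rated_limit = rated_tbw_tb * 1000 / daily_writes_gb
years = days_to_rated_limit / 365  # roughly 4.4 years just to *reach* the rating
```

And per the observation above, drives often keep going well past that rated figure before real errors appear.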
It seems to have helped path tracing by a lot.