I pondered the same question recently while implementing the same algorithm (the Mandelbrot set) on the CPU (scalar vs. SIMD) and on GPU compute, using both fixed-point and floating-point arithmetic for comparison (if interested: https://tayfunkayhan.wordpress.com/2020/06/03/mandelbrot-in-...).
It bothers me how little progress has been made on the shading-language front compared to the overall advances in many-core computation models and capabilities over the years. And that is despite the fact that shaders are very often where the most time is spent in modern workloads.
Compute with Vulkan is another story. It offers some nice abstractions, but it shows that it's mostly intended for async compute / work offloading alongside rendering, IMO. Too much friction.