> [edit: I was being overly poetic here, as several people have missed the intention. As a systems optimization person, I care deeply about efficiency. When you work hard at optimization for most of your life, seeing something that is grossly inefficient hurts your soul. I was likening observing our organization's performance to seeing a tragically low number on a profiling tool.]
as someone who spends a lot of time in a profiler, this resonates with me. the irony that i work in JS/TS is not lost on me, but most React apps make me sad, most node_modules make me sad. V8/JSC/SpiderMonkey are amazing JITs, and seeing them get bogged down by inefficient JS is painful. i see many devs jumping to Web Workers or even WASM when in fact their existing JS and algos can be orders of magnitude faster with just a tiny bit of forethought.
> it is the more personal pain of seeing a 5% GPU utilization number in production
I think the 5% was a metaphor for how the org is barely able to utilize its resources.
Also, wouldn't React in JS/TS be a case where you are fully utilizing resources? Not from a raw machine-performance standpoint, but from a developer-efficiency standpoint. Using nested for loops instead of a chain of JS array functions is way more efficient, but it's not even close to being worth it. Machines are so powerful these days that sacrificing readability/maintainability for performance doesn't really make sense to me.
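For example, the chained pipeline below reads more clearly than the equivalent hand-rolled loop, even though the loop does one pass and skips the intermediate array (a minimal illustrative sketch, not from any real codebase):

```javascript
const nums = Array.from({ length: 10 }, (_, i) => i);

// readable: each step allocates an intermediate array
const chained = nums.filter(n => n % 2 === 0).map(n => n * n);

// faster: one pass, no intermediate array, but more to read
const looped = [];
for (let i = 0; i < nums.length; i++) {
  if (nums[i] % 2 === 0) looped.push(nums[i] * nums[i]);
}
```

Both produce the same result; for small arrays the difference is unmeasurable.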
> Using nested for loops instead of a series of JS array functions is way more efficient but it is not even close to worth it.
i work with canvas rendering and 2M-datapoint arrays, so for me it's almost always worth it. but yes, for < 1K elements it isn't. i wrote a 4x faster version of _.groupBy() recently to process our datasets. is a hand-rolled function worth it for 100 elements? not really, but that's not our use case. so, as with everything in life, it depends!
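a minimal sketch of what a hand-rolled groupBy can look like: a single pass over a plain object avoids the generic-iteratee overhead of the lodash version (illustrative only, not the exact 4x-faster implementation mentioned above):

```javascript
// single-pass groupBy with a prototype-less object as the index.
// keyFn is assumed to return a string-coercible key.
function groupBy(arr, keyFn) {
  const out = Object.create(null); // no prototype lookups on key access
  for (let i = 0; i < arr.length; i++) {
    const k = keyFn(arr[i]);
    (out[k] || (out[k] = [])).push(arr[i]);
  }
  return out;
}
```

the monomorphic loop and single allocation per group are what the JIT rewards here.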
generally i think advice that will always be applicable:
- learn and use a profiler before there are performance issues, not after (don't treat performance as an afterthought).
- internalize which patterns are faster and which are slower, and when it matters.
- for any runtime with a GC, reduce repetitive memory allocation and GC pressure. prefer shallow structures and mutation over immutability (immutable updates mean fresh allocations).
- cache/memoize whenever possible.
- don't use algorithms that scale poorly with data size.
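to illustrate the allocation/GC point: reusing a preallocated buffer instead of allocating a fresh array on every call keeps GC pressure low in hot paths (names and sizes here are illustrative assumptions, not from any library):

```javascript
// allocated once, reused on every call; sized for the worst case
const scratch = new Float64Array(1024);

// mutates the reused buffer rather than doing values.map(...),
// which would allocate a new array per call
function scale(values, factor) {
  for (let i = 0; i < values.length; i++) {
    scratch[i] = values[i] * factor;
  }
  return scratch.subarray(0, values.length); // view, not a copy
}
```

the tradeoff: the returned view is invalidated by the next call, so callers must consume it immediately. that's the kind of readability cost that's only worth paying in hot loops.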
finally, beware of following any performance advice older than 6 months; JITs advance constantly, so make sure to re-bench/measure continuously to avoid doing unnecessary refactors, and test with real code; there are lies, damned lies, and micro-benchmarks.
at work we use React, though i tend to work on lower-level JS code and don't have to touch it very often.
my OSS code (almost 100% libs) is vanilla JS with zero deps.
for my own projects i've been moving to fine-grained reactivity libs, like Solid or Voby: https://krausest.github.io/js-framework-benchmark/current.ht.... the numbers there are a bit misleading: the benchmark is dominated by DOM layout/rendering, so even a 10% difference is quite significant, since that delta is typically pure JS / GC overhead.
React has many benefits for large, diverse teams (e.g. ecosystem, hiring, docs/google-able answers), but performance is not one of them; it has many performance footguns and landmines, especially with hooks.
> it is the more personal pain of seeing a 5% GPU utilization number in production
yikes, is it that low?