You're making the same mistake the post did. It depends on the reader already having sympathy for the idea that bloat is bad in order to make its case. I can read nerd site comments all day that lament bloat. For an article to stand on its own on this point it has to make the case to people who don't already believe this.
Dan's articles have usually been very good at that. The keyboard latency one for example makes few assumptions and mostly relies on data to tell its story. My point is that this article is different. It's an elevated rant. It relies on an audience that already agrees to land its point, hence my criticism that it's too couched in internet fights.
State your case that bloat is good. I currently have a client who will do literally anything except delete a single JavaScript library, so I'd like to understand them better.
Due to the prevalence of native apps in the macOS world, the differences are often stark. I use Things and Bear, and they're fast; then I try to load Gmail (a dump account, so it's not in Mail) and it's so slow. YouTube too. Fastmail, in comparison, loads like it's on localhost.
Block JavaScript and the number of sites that break is ridiculous, including some you would not expect (ordinary websites, not full-blown interactive apps).
My point is to reply to "State your case that bloat is good" with a famous blog stating a case that bloat is good. Bloat makes the company more money by allowing it to develop and ship faster; bloat makes the company more money by letting it offer more features to more customers (including the advertisers and marketers and so on); and - well, read the article.
I, too, dislike slow websites and web apps, but I don't think they are some mystery: natural selection isn't selecting for idiot developers; market selection is selecting for tickbox features, and with first-mover advantage it selects against "fast, but not available for another year, with fewer features, and more expensive to develop".
Core frequencies aren't going up at 2001 rates anymore. (And although Moore's law has continued, it is only just. Core freqs have all but topped out, it feels like.) Memory prices seem to have stalled, and even non-volatile storage feels like it's stalled.
Comparing my 1998 computer to its predecessor, storage was growing at ~43% YoY. It was an amazing time to be alive; the 128 MiB thumbdrive I bought the next decade is laughable now, but it was an upgrade from a 1.44 "MB" diskette. Today, I'm not sure I'd put more storage in a new machine than what I put in a 2011 build. E.g., 1 TiB seems to be ~$50, or cheaper. Using the late-90s growth rates, it should be 17 TiB… so even though it's about half the price, we can see we've fallen off the curve.
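Working that extrapolation through as a quick sketch (the 1 TiB baseline and the roughly eight-year span are my assumptions; the ~43% YoY rate is the figure above):

```python
# Sketch of the storage-growth extrapolation. Assumptions (mine, not
# the commenter's explicit numbers): a 1 TiB baseline and ~8 years
# of compounding at the late-90s ~43% year-over-year growth rate.
baseline_tib = 1.0
yoy_growth = 1.43
years = 8

projected_tib = baseline_tib * yoy_growth ** years
print(f"projected: {projected_tib:.1f} TiB")  # ~17.5 TiB, in line with the "17 TiB" above
```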
> "And although Moore's law has continued, it is only just."
https://en.wikipedia.org/wiki/Transistor_count has a table of transistor count over time. 2001 was Intel Pentium III with 45 million transistors and nVidia NV2A GPU with 60 million. 2023 has Apple M2 Ultra with 134 billion transistors and AMD Instinct CPU with 146 billion, and AMD Aqua Vanjaram CDNA3 GPU with 153 billion. That's some ~3,000x more, about a doubling every two years.
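As a sanity check, the implied doubling time can be computed from the counts quoted above (the 45 million and 146 billion figures and the 2001–2023 span are taken directly from the comment):

```python
import math

# Implied doubling time from the transistor counts quoted above.
count_2001 = 45e6    # Intel Pentium III, 2001
count_2023 = 146e9   # 2023 figure from the comment
years = 2023 - 2001

ratio = count_2023 / count_2001   # growth factor over the span
doublings = math.log2(ratio)      # how many doublings that factor implies
print(f"~{ratio:,.0f}x in {years} years; doubling every {years / doublings:.1f} years")
```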
Core frequencies aren't going up, but amount of work per clock cycle is - SIMD instructions are up, memory access and peripheral access bandwidth is up, cache sizes are up, branch predictors are better, multi-core is better.
> "E.g., 1 TiB seems to be ~$50"
You can get a 12TB HDD from NewEgg for $99.99. Joel's blog said $0.0071 per megabyte and this is $0.0000083 per megabyte, roughly 850 times cheaper in 23 years. Even after switching to more expensive SSDs, 1TB for $50 is $0.00005 per megabyte, over a hundred times cheaper than Joel mentioned; and that switch to SSDs likely reduced the investment in HDD tech. And as you say, "I'm not sure I'd put more storage in a new machine than what I put in a 2011 build": few people need more storage unless they are video or gaming enthusiasts, or companies.
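Working the per-megabyte division through with the figures as quoted (using decimal terabytes, as drive vendors do):

```python
# Price-per-megabyte comparison using the figures quoted above.
joel_2001 = 0.0071                  # $/MB, from Joel's blog
hdd_now = 99.99 / (12 * 1_000_000)  # 12 TB HDD at $99.99, decimal TB
ssd_now = 50 / 1_000_000            # 1 TB SSD at $50

print(f"HDD: ${hdd_now:.7f}/MB, {joel_2001 / hdd_now:,.0f}x cheaper")
print(f"SSD: ${ssd_now:.5f}/MB, {joel_2001 / ssd_now:,.0f}x cheaper")
```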
My comment explicitly notes this, and that I am not debating that transistor counts have continued to follow Moore's Law. They have. That's not the point.
> Core frequencies aren't going up, but amount of work per clock cycle is
[Citation needed]; this absolutely doesn't match my experience at all.
> You can get a 12TB HDD from NewEgg for $99.99
I looked at NewEgg specifically before I made that comment. (Though I quoted pricing for 1 TiB, as that was the comparable size.) 12 TiB runs $250–400, with the absolute lowest-priced¹ 12 TiB (internal, desktop form factor) HDD being $201. So no, you cannot.
¹and the "features" of this "12 TB" HDD include "14TB per drive for 40% more petabytes per rack" (wat) "Highest 14TB hard drive performance" (wat)
The reason we have bloat is that it's easier to satisfy stakeholders if you don't give a damn. There's really no reason to discuss this at all once you realize that.
But of course, ranting and reading rants is satisfying in its own right. What's the problem?
I think the article makes a pretty good case for bloat being bad for low-end users, actually. His analysis demonstrates how many websites become genuinely unusable on cheaper devices, not just by techie standards, but for anyone trying to actually interact with the page at all.