I'm fascinated by how many examples of this kind of issue there seem to be. Seems like the kind of thing that should be fairly obvious in code review, or should be caught by unit tests.
No contributor consistently writes truly thorough unit tests, and no reviewer consistently performs thorough line-by-line analysis.
It’s extremely valuable to have multiple layers of verification, and fast, cheap static analysis tools like linters have a tremendously high ROI as one of those layers, especially in languages with many subtle syntax surprises.
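The thread doesn't name the specific rule under discussion, but as a hypothetical illustration of the kind of "subtle syntax surprise" a linter flags instantly while a reviewer can easily skim past it, here's a classic Python example (the `check_positive` function is invented for the sketch):

```python
# BUG: parenthesizing an assert's condition and message creates a
# two-element tuple, which is always truthy -- the assert never fires.
# Linters such as pylint/ruff flag this pattern ("assert on tuple").
def check_positive(value):
    assert (value > 0, "value must be positive")  # always passes!
    return value

# The correct form is: assert value > 0, "value must be positive"
result = check_positive(-5)  # no AssertionError, despite the negative input
```

A human reading the PR sees a plausible-looking assert; the tool sees a tuple literal. That gap is exactly why a cheap mechanical layer pays for itself.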
Hey, author of the rule/post here. I'd encourage you to click through to the actual examples linked from the post. Seeing the issues in context, as opposed to the minimal example, can help show how quickly these issues can get lost. It might also be interesting to click "blame" on the line and look at it in the context of the PR that added it.
Overall, my point with the examples was to highlight that these are mistakes that make their way even into high-visibility projects built by highly competent engineering teams.
That said, looking at the issues, few were in truly critical paths of these projects. Often they cropped up in auxiliary areas like test harnesses or more off-the-beaten-path features. One can assume the same bugs existed at some point in the development cycle in other areas of the codebase, but were caught by more rigorous testing/review of those areas, or by bug reports. Still, it's surely a time saver to identify them _as the developer saves the file_ rather than later in the process: the sooner you catch the bug, the more engineering energy you save.
I love that framing. I’m the author of the rule/post, and I see writing rules like this as an opportunity to mentor at scale. It's incredibly rewarding to think that, in a sense, I can be in so many engineers’ editors, helpfully pointing out (and, via documentation, explaining) issues right at the moment the developer needs it.