So, yes, if you have a million objects and need to release all of them except 1, then Cheney's algorithm does work proportional to that single survivor, compared to the 999,999 release operations reference counting does. Ergo, GC is faster than reference counting. It can even be faster than stack allocation [0]!
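To make that concrete, here's a toy C++ comparison (my own sketch, not Cheney's algorithm itself): shared_ptr stands in for reference counting, and a std::pmr arena stands in for the collector's "only touch the survivors" property. The only point is the shape of the cost, O(dead) vs O(live):

```cpp
// Micro-benchmark sketch of the cost asymmetry being argued about:
// refcounting pays per dead object on teardown, a bulk-release scheme
// pays only for what survives.
#include <chrono>
#include <cstddef>
#include <cstdio>
#include <memory>
#include <memory_resource>
#include <vector>

using Clock = std::chrono::steady_clock;

static long long us_since(Clock::time_point t0) {
    return std::chrono::duration_cast<std::chrono::microseconds>(Clock::now() - t0).count();
}

int main() {
    constexpr std::size_t N = 1'000'000;

    // Reference counting: dropping the container decrements and destroys
    // all N control blocks individually -- work proportional to the garbage.
    int kept_rc = 0;
    {
        std::vector<std::shared_ptr<int>> objs;
        objs.reserve(N);
        for (std::size_t i = 0; i < N; ++i)
            objs.push_back(std::make_shared<int>(static_cast<int>(i)));
        kept_rc = *objs[42];               // the single value we actually keep
        auto t0 = Clock::now();
        objs.clear();                      // N refcount drops + N frees
        std::printf("refcount teardown: %lld us\n", us_since(t0));
    }

    // Arena, standing in for "evacuate survivors, drop the rest": copy out
    // the one live value, then release the whole region in a handful of
    // frees, independent of how many dead objects it contained.
    int kept_arena = 0;
    {
        std::pmr::monotonic_buffer_resource arena;
        auto t0 = Clock::now();
        {
            std::pmr::vector<int> objs{&arena};
            objs.reserve(N);
            for (std::size_t i = 0; i < N; ++i)
                objs.push_back(static_cast<int>(i));
            kept_arena = objs[42];         // "copy the survivor out"
            t0 = Clock::now();
        }                                  // element destruction is trivial,
                                           // per-object deallocation is a no-op
        arena.release();                   // bulk free of the backing blocks
        std::printf("arena teardown:    %lld us\n", us_since(t0));
    }

    return kept_rc == kept_arena ? 0 : 1;  // keep the survivors observable
}
```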
It's so frustrating and time-consuming replying to non-rebuttals like these. I'm going to have to cut it off after this one.
> if you have a million objects and need to release all of them except 1
That happens at most once over each allocation's lifetime. But the GC can still waste its time traversing that object a billion times before then, despite nobody adding or removing references to it, whereas RC only touches an allocation when you're actually doing something with the object.
It's like arguing for wasting several decades of your life, just to make sure your funeral is quick...
I am still not sold that, e.g., using an arena as scratch space for real-time, frame-by-frame rendering of a scene is worse than refcounting all of the temporary data, especially if the workload fits comfortably into the available memory.
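To be concrete about what I mean by a per-frame scratch arena, here's a hedged C++ sketch using std::pmr; render_frame and the data it builds are made-up placeholders, not anyone's real renderer:

```cpp
// Per-frame scratch-arena pattern: all temporaries for one frame come out
// of a bump arena that is reset wholesale between frames, so no temporary
// is individually freed or refcounted in the hot loop.
#include <cstddef>
#include <cstdio>
#include <memory_resource>
#include <vector>

// Pretend per-frame work: builds some temporary data in the arena.
// (Hypothetical stand-in for real scene/render code.)
static float render_frame(std::pmr::memory_resource* scratch, int frame) {
    std::pmr::vector<float> visible_depths{scratch};   // temporary, per frame
    std::pmr::vector<int>   draw_order{scratch};       // temporary, per frame
    for (int i = 0; i < 10'000; ++i) {
        visible_depths.push_back(static_cast<float>((i * 37 + frame) % 1000));
        draw_order.push_back(i);
    }
    return visible_depths[frame % 10'000];              // "use" the scratch data
}

int main() {
    // One fixed block, sized for the worst-case frame; the arena falls back
    // to the heap only if a frame overflows it.
    std::vector<std::byte> backing(16 * 1024 * 1024);
    std::pmr::monotonic_buffer_resource arena{backing.data(), backing.size()};

    float checksum = 0.0f;
    for (int frame = 0; frame < 600; ++frame) {          // ~10 s at 60 fps
        checksum += render_frame(&arena, frame);
        arena.release();   // drop all of this frame's temporaries at once;
                           // the arena rewinds to the start of `backing`
    }
    std::printf("checksum: %f\n", checksum);
}
```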
Also, generations are a thing, so there's no need to "waste several decades of your life". And why is that metaphor applicable anyhow? Why not something like "stop constantly opening and closing the warehouse door while you work, and re-inventory it once a year, instead of making a note every time you take something out or put something back" instead?
No, it's not. M could be 1 and N could be a million; the entire point of an O(M) collection (M being the live objects) is that it's independent of N (the total number of objects allocated).