Performance characteristics depend on where on that continuum your workload falls. For example, Erlang/BEAM uses a generational GC for most common heap objects, but refcounts large binary blobs. This is pretty much a perfect case for refcounting: new references are created infrequently, copying or moving is expensive, destruction is deterministic and happens immediately after the last reference disappears, and there are no pointers within the blob that would require tracing or cycle detection.
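That profile can be sketched in Rust with `Arc` standing in for BEAM's refcounted binaries (illustrative only, not how BEAM is actually implemented):

```rust
use std::sync::Arc;

fn main() {
    // A large binary blob, allocated once and shared by reference.
    // Cloning the Arc only bumps an atomic counter; the megabyte of
    // data is never copied or moved.
    let blob: Arc<Vec<u8>> = Arc::new(vec![0u8; 1_000_000]);

    let reader = Arc::clone(&blob); // cheap: refcount goes 1 -> 2
    assert_eq!(Arc::strong_count(&blob), 2);

    drop(reader); // refcount goes 2 -> 1
    assert_eq!(Arc::strong_count(&blob), 1);

    // When the last Arc goes out of scope, the buffer is freed
    // immediately and deterministically -- no tracing pass needed,
    // and a flat byte buffer contains no pointers that could form cycles.
}
```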
Similarly, UI components within a GUI are another good case for refcounting (and presumably why Apple continues to use this for Objective-C and Swift in Cocoa). New references happen only in non-performance-critical code, most data remains live across collections, and copying/moving existing data would (a) be slow and (b) invalidate any C pointers into contained data.
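A hypothetical view hierarchy shows the ownership convention this relies on, sketched with Rust's `Rc`/`Weak` to mirror Cocoa's rule that parents own children strongly while children point back weakly, so no retain cycle forms (the `View` type and field names here are made up for illustration):

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

// Parents hold strong references to children; children hold
// a weak back-pointer, mirroring Cocoa's retain/weak convention.
struct View {
    name: String,
    parent: RefCell<Weak<View>>,
    children: RefCell<Vec<Rc<View>>>,
}

fn main() {
    let window = Rc::new(View {
        name: "window".into(),
        parent: RefCell::new(Weak::new()),
        children: RefCell::new(vec![]),
    });
    let button = Rc::new(View {
        name: "button".into(),
        parent: RefCell::new(Rc::downgrade(&window)),
        children: RefCell::new(vec![]),
    });
    window.children.borrow_mut().push(Rc::clone(&button));

    // The weak back-pointer does not keep the parent alive...
    assert_eq!(Rc::strong_count(&window), 1);
    assert_eq!(Rc::weak_count(&window), 1);
    // ...while the parent's strong reference keeps the child alive.
    assert_eq!(Rc::strong_count(&button), 2);
    assert_eq!(button.name, "button");
}
```

Because the whole tree stays live across its lifetime and nodes are created in response to user actions rather than in hot loops, the per-reference counting cost is negligible, and deterministic destruction means `deinit`/cleanup runs at a predictable point.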
It sounds like the particular problem domain described in this article is one where heap allocations are frequent, which makes generational GCs more appropriate. That's probably the case for the vast majority of computational algorithms, but there are definitely problem domains where refcounting continues to beat GC.
https://www.researchgate.net/publication/221321424_A_unified...