
It might be, depending on exactly what you'd like to implement. There are two things at play here:

1. The memory that holds the resource.
2. The resource itself.

C++ RAII lets you control the lifetime of both. In a GC'd language, the GC takes control of (1) but not of (2). Instead it gives you the ability to handle (2) yourself (much like C++ does with custom destructors) via finalizers. Because of the way a GC works, though, it is hard to say when (or on which thread) your finalizer will run. The GC does what it does so it can guarantee that you can never hold a pointer to something that some other part of your application has deemed dead. While it is impossible to take over (1), you could imagine a world where (2), in a GC'd environment, is still up to the programmer and is exposed as, say, a language-level destructor. With such a feature you run the risk of errors like closing a socket twice, but that isn't much different from code calling a poorly implemented Close (IDisposable) on an object twice.
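The split between the two lifetimes can be sketched in any GC'd language. Here is a minimal Python illustration; the `FakeSocket` class and its naive `close` are hypothetical, made up purely to show the "double close" error described above:

```python
class FakeSocket:
    """Wraps a resource (2) inside GC-managed memory (1)."""

    def __init__(self):
        self.open = True  # state of the resource itself (2)

    def close(self):
        # A naive close: calling it twice is exactly the kind of
        # error the comment above warns about.
        if not self.open:
            raise RuntimeError("double close")
        self.open = False


s = FakeSocket()
s.close()  # the programmer eagerly releases the resource (2)
# The GC will later reclaim the memory (1); it never calls close().
```

Note that nothing here is enforced: if `close()` is never called, the GC still happily frees the memory while the underlying resource leaks.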



You never rely on finalizers to clean up resources. At best, you check that the resource was properly released and signal an error otherwise, because you should have released it earlier (you have a bug). That may happen when you manage resources manually, but most of the time you only need to use a resource inside a delimited block (defer, finally, unwind-protect, with X). If I understand correctly, this is also how RAII works when you allocate objects on the stack.
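Both roles described above can be sketched together: the delimited block does the actual cleanup, while the finalizer only reports the bug of a forgotten release. A hedged Python sketch (the `Resource` class is hypothetical, and `__del__` timing is deliberately left to the GC):

```python
import warnings


class Resource:
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True

    # Finalizer used only as a bug detector, as described above:
    # it does not clean up, it reports that cleanup never happened.
    def __del__(self):
        if not self.closed:
            warnings.warn("Resource leaked: close() was never called")

    # Support the delimited-block style ("with X"):
    def __enter__(self):
        return self

    def __exit__(self, *exc):
        self.close()


with Resource() as r:
    pass  # use the resource here
# On exit, close() has run deterministically, even if the body raised.
```

The `with` block plays the same role as a stack-allocated object's destructor in C++: cleanup is tied to scope exit, not to when the GC eventually runs.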


You're missing the point. The original question (and hence my answer) isn't about whether it is a good idea to use finalizers. I'm merely pointing out that in both RAII and GC, only the lifetime of the allocated memory is being managed, not the resource contained in it. They use different mechanisms to let the developer deal with the resource: with RAII the time at which cleanup code runs is deterministic (destructors), whereas with a GC it is not (finalizers). That mechanism can be implemented, should a language choose to, regardless of whether the underlying memory allocation scheme is automatic (GC). (IDisposable is mostly a made-up thing that C# has syntactic sugar for, which lets developers eagerly release resources other than memory when they're done with them. My point is that neither the language nor the runtime makes any effort to enforce its usage, as it does in the case of memory allocation.)


You are doing it wrong if you want to use a finalizer / destructor. The right thing to do is to have the container manage the lifecycle of disposable resources. If they are transient, then it's up to you to remember "using" or "Dispose". I can't see how you can dispose of something twice by mistake if you implemented the disposable pattern correctly.
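The "correctly implemented" part usually means a guard flag that makes disposal idempotent, so a second call is a harmless no-op rather than an error. A minimal sketch in Python (the `Disposable` class and its `close_count` instrumentation are hypothetical, mirroring the guard flag of the C# Dispose pattern):

```python
class Disposable:
    """Idempotent close, mirroring the Dispose pattern's guard flag."""

    def __init__(self):
        self._disposed = False
        self.close_count = 0  # instrumentation for the example only

    def close(self):
        if self._disposed:
            return  # second call is a harmless no-op
        self._disposed = True
        self.close_count += 1  # the real release happens exactly once


d = Disposable()
d.close()
d.close()  # safe: guarded, the release logic only ran once
```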




