It's not like destructors actually remove the complexity. The extra function call you need to add to replace the destructor is already present in your program; it's merely hidden from view. If you are trying to verify the code you have written (either informally or formally) and want to consider all paths through the program, then you need to include the invisible control-flow created by the compiler for destructors. I don't see how it gets any simpler by not being written in your program.
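To make that hidden control flow concrete, here is a minimal sketch (Connection and the function names are just placeholders) of the call the compiler writes for you versus the same call written by hand:

    struct Connection; // placeholder type with a destructor

    impl Drop for Connection {
        fn drop(&mut self) {
            println!("closing connection");
        }
    }

    fn with_destructors() {
        let conn = Connection;
        println!("doing work");
        // the compiler inserts the call to conn's destructor here,
        // at the end of the scope, in reverse declaration order
    }

    fn written_out_by_hand() {
        let conn = Connection;
        println!("doing work");
        drop(conn); // the "extra function call", made visible
    }

    fn main() {
        with_destructors();
        written_out_by_hand();
    }

Both functions do exactly the same thing; the only difference is whether the cleanup call appears in the source you read.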
If you take the view that code is read more times than it is written (which is mildly amusing given the topic of this thread), then shouldn't you optimize for the ergonomics of reading and understanding code that is already written, rather than for writing it? I don't see how Rust's choice is defensible from that viewpoint. If I quizzed actual Rust users (rather than the language developers who are posting here) about the invisible control-flow created by the compiler, I doubt very many of them would get it right.
Code only gets read if it gets written in the first place, which it won't if your fresh new language makes it too hard to bootstrap an ecosystem and a community. One of the foundational premises of this year's ergonomics initiative is that Rust code is still too hard to write; a Rust-like language with linear types instead of destructors would be yet harder to write, and would only improve the ability to reason about code in a small minority of cases. I've been writing Rust code since before it was cool to claim you were doing things "before it was cool", and while I appreciate the cases where people have longed for linear types, I have never needed anything more than what destructors provide (which subverts the very premise of your comment, because, for my uses, enforcing linear types would be harder to both read and write). Nobody on the Rust team is pretending that there aren't tradeoffs.
Usually programmers don't care about the exact order in which things are freed. That's the whole reason why GCs have been so successful in programming languages (not to mention things like ARC, which are still deterministic but obscure the order).
I get that occasionally it's important to know the exact order, and this is where tighter rules and tooling can help. Rust wasn't designed to be an "everything is as explicit as possible" language. (Neither is C, for that matter, ever since compilers stopped paying attention to the "register" keyword…)
GC lets you write functions that return heap-allocated values without worrying about who's responsible for freeing the result. That in turn leads to functional code over procedural code, which in turn leads to simpler, more easily understood code.
That is, it's less about order and more about bookkeeping. The ergonomics directly affect code quality.
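To make the bookkeeping point concrete, here is a minimal Rust sketch (make_greeting is just an illustrative name); ownership plus destructors gives you much of the same "nobody writes the free" property described above for GC:

    // In C, make_greeting's contract has to say who calls free() on the result.
    // With a GC (or with ownership + destructors), the question disappears:
    fn make_greeting(name: &str) -> String {
        format!("hello, {name}") // heap-allocated value handed to the caller
    }

    fn main() {
        let g = make_greeting("world");
        println!("{g}");
        // no free/drop written here; the buffer is reclaimed automatically
    }
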
Perhaps this is obvious to everyone else, but it isn't to me. Are you advocating explicitly dropping every variable I create? Like `init; use; drop;` explicitly where right now I essentially do `init; use;` and then there's an implicit drop when we exit scope?
You would be required to drop values that aren't moved elsewhere. Currently Rust has a one-bit reference count which dynamically tracks which values need to be dropped. Linear types would transform this into a static property that the compiler checks.
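Concretely, a small sketch of the difference (consume is a placeholder for any function that takes ownership, and the linear-typed variant is hypothetical, not real Rust):

    fn consume(_v: Vec<i32>) {} // stand-in for a function that takes ownership

    fn today(flag: bool) {
        let v = vec![1, 2, 3];
        if flag {
            consume(v); // v moved here; consume is responsible for dropping it
        }
        // nothing written here: the compiler keeps a hidden one-bit "drop flag"
        // and only runs v's destructor at end of scope if v wasn't moved above
    }

    // Under linear types, every path would have to discharge v explicitly,
    // something like (hypothetical requirement):
    //
    //     if flag { consume(v); } else { drop(v); }

    fn main() {
        today(true);
        today(false);
    }
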
Part of the pain would come from conditionals:
    if something {
        func(val1);
    } else {
        func(val2);
    }
This would be disallowed because the liveness of val1 and val2 can't be statically known after the conditional.
If "val1" and "val2" are really things that have to be used exactly once, then the above code is already in error (unless they are provably the same object), and a compiler with linear types will correctly complain about it.
This is indeed "pain", but of a good kind (at least if the error message is decent).
I admit there are harder cases:
    if (foo)
        func(val1)
    else
        func(val2)

    ... do something that doesn't change foo or use val1, val2 ...

    if (!foo)
        func(val1)
    else
        func(val2)
This is correct, if strange, code that I assume is hard for a compiler to understand. But then it is hard for humans too.
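For what it's worth, I believe the direct Rust translation is already rejected by today's move checker (which only has to be conservative about whether a drop runs), for essentially the reason you give: it doesn't track that the two conditions are mutually exclusive. Something like the following (names mirroring your example) fails with "use of moved value":

    fn func(_s: String) {}

    fn example(foo: bool, val1: String, val2: String) {
        if foo { func(val1) } else { func(val2) }
        // ... work that doesn't touch foo, val1, or val2 ...
        if !foo { func(val1) } else { func(val2) }
        // error: use of moved value: `val1` (and likewise `val2`),
        // because the checker doesn't know the two conditions are exclusive
    }
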
I guess the explicit-drop requirement would only apply to values with nontrivial destructors.
That makes the "trivialness" of a destructor part of the interface, which is a price. But then from my experience in C++, we pay that price anyway, because to have any sort of assurance about what you are doing, you need some clue about what the destructor does.
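Rust already treats that triviality as a queryable property of the type: std::mem::needs_drop reports whether dropping a value can run any code, and collection code can use it to skip per-element destructor calls. A small sketch (report is just an illustrative helper):

    use std::mem::needs_drop;

    fn report<T>(name: &str) {
        // true if dropping a T may run code (its own Drop impl or a field's)
        println!("{name}: needs_drop = {}", needs_drop::<T>());
    }

    fn main() {
        report::<u64>("u64");       // trivially droppable: prints false
        report::<String>("String"); // owns a heap buffer: prints true
    }
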
Is GC less ergonomic than C-style malloc/free? I don't think so, and not only when writing code. It makes things easier all around.
(Yes, GCs require tuning and so forth, and that can be a pain, but so do malloc and free, so that's a wash. In most applications you never need to manually tune a GC.)
Keep in mind that today Rust is intended to be a better C++. (For loose values of 'intended'.)
Destructors are a common resource-management idiom there (although I guess most C++ programmers couldn't describe their behavior correctly), and there is no other really common idiom for it.