The author provides some "extreme" examples of bad ergonomics, and posits there are milder examples of similarly bad ergonomics. But their examples aren't on the scale of "ergonomics", they're on the scale of "weakening language guarantees".
Writing an essay with (intentional) strawman examples and then saying "trust me, this could happen with any new ergonomics" is much less persuasive than just raising legitimate concerns about the offending ergonomic features. This essay is kind of arguing against no one and everyone at the same time: no one disagrees that automatic unwrapping is a bad idea, but it's ridiculous to say that "I reach more ergonomics by accepting more programs and hoping they do what the author have meant, so I don’t have to bother the author with a compilation error." This isn't arguing against ergonomics, it's arguing against... I don't know, making Rust not Rust anymore?
I think a more charitable read is that Vorner is urging caution. Ergonomic changes are the "most prominent language work in the pipeline"[0] and there does seem to be a general consensus that Rust is harder to learn and should be made easier. The argument is that Rust should keep its priorities straight and not be seduced by ease of use to the detriment of the language's advantages.
But I agree that the danger remains hypothetical unless we can point to some dubious proposals that are gaining traction.
I agree with your priorities, but I want to stress that I don't think there's any reason to believe the people working on the language are confused about this. The ergonomics initiative has explicitly been about making certain things easier without introducing ambiguity or jeopardizing the safety guarantees Rust offers.
Like the OP says, the article is misguided, and if it has any effect, it will create FUD in people who have not followed the ergonomics proposals enough to know better.
I was part of a couple ergonomics discussions and I disagree.
Yes, in the end the changes don't sacrifice long-term maintainability for learnability/marketing, but that was a hard fight. And given that editions will allow radical changes to the language and ecosystem every 3 years, I'm not sure how long that resistance is sustainable.
> And given editions will allow radical changes to the language and ecosystem every 3 years
This is incorrect. Editions are incredibly limited in what they can introduce (it needs to be something that it is possible for the compiler to warn for, with no false positives or false negatives). In practice this means that they're mostly limited to introducing new keywords and other little bits of trivial syntax.
An example would be the talk about removing the ability to declare reference bindings with `ref` and `ref mut` and always inherit whatever the source mode was.
Edit: There is also talk about changing the prelude, so I would assume the standard library is also fair game.
It's not a misconception, there are plenty of times in the past year that I've heard someone propose some change for the new edition only for the Rust devs to meet it with "that's not possible because we can't infallibly warn for it in the compiler, so there's no point discussing it". Furthermore, just because we can imagine things that could be broken, it doesn't mean that the Rust developers interpret that as carte blanche to break things. The history of Rust development since 1.0 has been one of careful, gradual, and conservative change. It appears that you have a lot of fear about various things that you imagine might be changed (such as ref and ref mut in patterns, which is sort of head-scratching because I've never heard anyone propose removing them, and it wouldn't make sense to remove them anyway), but I think such fears are unwarranted. I've been observing the Rust developers at work longer than almost anyone, and I've ended up with a degree of trust in their sense of taste that rivals that of any other project that I've seen. The people behind the language today are the same ones who have been behind the language for years now, so if you happen to like where their influence has led the project so far, I think it's reasonable to trust that they won't abruptly leap off the rails. :)
> It's not a misconception, there are plenty of times in the past year that I've heard someone propose some change for the new edition only for the Rust devs to meet it with "that's not possible because we can't infallibly warn for it in the compiler, so there's no point discussing it".
Do you have examples for that?
Edit: I'm not doubting you, but having precedent of things that were rejected due to breakage it causes might be helpful in some current/future discussions :)
> Furthermore, just because we can imagine things that could be broken, it doesn't mean that the Rust developers interpret that as carte blanche to break things. The history of Rust development since 1.0 has been one of careful, gradual, and conservative change.
My fear is that this is currently changing. There are a couple of threads on the internals forum currently collecting ideas of what people want changed in future epochs. To me, that's a shift to a different mode of operation. From "editions allow breakage as last resort" to "editions make breakage no longer a big issue".
> It appears that you have a lot of fear about various things that you imagine might be changed (such as ref and ref mut in patterns, which is sort of head-scratching because I've never heard anyone propose removing them, and it wouldn't make sense to remove them anyway)
The idea also seemed like a big hit in #rust-lang at the time. People were quite gleeful that `ref` could go away at some point.
I'm happy it's no longer on the table, but having to fight for the ability to destructure into mutable and immutable felt wrong.
Same thing with the modules redesign. Having to fight that hard for your existing workflows and use-cases not to be broken through multiple RFCs felt wrong as well.
Having some constructs auto-convert their final result value seems certain at this point.
> but I think such fears are unwarranted. I've been observing the Rust developers at work longer than almost anyone, and I've ended up with a degree of trust in their sense of taste that rivals that of any other project that I've seen. The people behind the language today are the same ones who have been behind the language for years now, so if you happen to like where their influence has led the project so far, I think it's reasonable to trust that they won't abruptly leap off the rails. :)
I've been following Rust since 0.10 or so, so I've been there quite a while as well. I was there for the uint wars, the battle of match ergonomics, and of course the big module rework skirmish.
I'm sure nobody is acting out of malice, my fears are more that with a certain community size, it is easy to be drowned out if you have different needs or opinions.
Interesting. I guess you’re closer to the action than I am. I didn’t participate, but liked the proposals I read (I think I saw a lot of them by way of core contributors).
Would you mind sharing some of the proposals you thought were dangerous and whether you felt like they were close to being adopted?
The interesting (and to me frustrating) part is that often good improvements (or intentions to solve actual problems) come together with far-reaching changes in an all-or-nothing kind of way. Some examples:
* I'm still of the opinion that moving `&mut T` to `&uniq T` pre-1.0 failed because it also proposed getting rid of `mut` altogether and making bindings mutable by default.
* Everything that relates to errors also tends to accumulate a push to make things look more like exceptions. That applies to `?` short-circuiting, the current discussion about limiting the scope of that short-circuiting, and any time auto-converting final-value result types comes up (and it currently looks like that short-circuit construct will auto-convert its final value when it arrives).
* Making paths saner and modules and visibility more intuitive first started out proposing completely removing the option of being explicit about project structure, and it took multiple rounds of RFCs to keep that control.
* The match ergonomics discussions hinted at a wish by some to get rid of `ref` and `ref mut` patterns which is what allows destructuring a mutable reference into mutable and immutable bindings.
It seems to be that this is one of those fights where I (as a Rust developer) would prefer that both sides fight as hard as possible. An article like this, by suggesting that the "ergonomists" move more toward the mindset of the "safety people", could cause undue wins of the "safety people" over the ergonomists, when what should have happened is a careful compromise.
Personally I wish it wouldn't have to be a fight, but GitHub issues are not well suited for large-scale design discussions and are bound to produce certain amounts of grief.
I don't see anything wrong with automatic unwrap in principle. It's not really much different from how Rust handles array indexing (where you don't have to explicitly handle the case where the array lacks an element at the given index).
Rust provides two methods for array indexing. `my_array[i]` which panics if `i` is out of bounds, and `my_array.get(i)` which returns `None` if `i` is out of bounds. The same thing applies to `HashMap` etc.
For the vast majority of cases, it is a non-recoverable logic error to try to access an out-of-bounds element, so panicking makes sense for the more common, more terse `my_array[i]` syntax.
For me, recoverability dictates whether to return `Option<T>` or `T-but-maybe-panic`. If a caller can reasonably recover (and in fact expects to get `None` some of the time!), then `Option<T>` is the right choice. If it is obviously an irrecoverable logic error, then maybe panicking is the right choice.
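In code, the two flavors side by side:

```rust
fn main() {
    let xs = vec![10, 20, 30];
    assert_eq!(xs[1], 20);             // out of bounds would panic
    assert_eq!(xs.get(1), Some(&20));  // out of bounds returns None
    assert_eq!(xs.get(9), None);       // recoverable: the caller decides
}
```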
Yes, I'm aware of how Rust handles array indexing. I was just noting that auto-unwrapping wouldn't be any less safe than the existing behavior for array indexing using [].
I've only just started learning Rust but having spent 38 years programming I largely agree with his sentiments.
Typing .unwrap() forces you to acknowledge that what you have may not actually be what you expect.
Explicit flow control makes what is happening clear to understand and reason about without jumping about the code via exceptions.
Type conversions should also be explicit (and automatic type conversions would needlessly complicate the type checker in any event).
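The .unwrap() point in miniature:

```rust
fn main() {
    let xs = vec![1, 2, 3];
    // .get returns an Option; writing .unwrap() is the visible
    // acknowledgment that we believe the value is really there.
    let first = *xs.get(0).unwrap();
    assert_eq!(first, 1);
    // The fallible case has to be spelled out by the caller:
    assert!(xs.get(10).is_none());
}
```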
I am sure there are places where "ergonomics" can be improved, but it is far more important to avoid the many (accidental) mistakes made by other programming languages.
Rust is going to be around for a long time. So any mistakes made now will also be around for a very long time.
Speaking of explicit conversions, it's always painful to have to do explicit conversions of integers. E.g. have a u8 and want to use it as an index into a Vec? You have to cast it as usize. That quickly becomes annoying.
On the contrary, I think that’s an argument against implicit conversions. u8 and usize have wildly different ranges, and treating them as the same could cause some maddening bugs. I can’t imagine there are that many places where you’d need to use a u8 as an index; if there were you could either wrap your data structure in a struct which accepts u8 instead, or use usize more instead...
(Not that I’m claiming to know your code base or specific challenge or anything; I’m just speaking generally)
> I can’t imagine there are that many places where you’d need to use a u8 as an index
I have some firmware on a small machine; there aren't any arrays with more than two dozen elements. On the eight-bit machine the code originally ran on, using a 16- or 32-bit int caused a lot of code bloat. You might not think that's a problem, but consider that the price difference between a processor with 64k of flash and 128k might be a dollar. Times 100,000 units a year.
The above is why I'm not going to use Rust anytime soon, because a Rust binary is about 4 times larger than the equivalent C. That would add about $2-3 to the cost of the product. Or $200-300k a year for no real benefit at all.
That doesn't make any sense. If `i` is a `u8` and `xs` is an array, then `xs[i as usize]` works today.
The criticism isn't even specific to exotic environments. The same reasoning applies at bigger widths too. I've certainly used `u32` in places instead of `usize` to avoid doubling the size of my heap use on 64-bit systems.
Implicit widening would be nice, but it isn't necessary.
I think implicit widening is a good idea, but not narrowing --- expanding a u8 into a usize doesn't actually lose any information, but going the opposite way does.
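That asymmetry is exactly how the standard library's conversion traits are set up:

```rust
use std::convert::TryFrom;

fn main() {
    let b: u8 = 200;
    // Widening u8 -> usize is lossless, so an infallible From impl exists:
    let i = usize::from(b);
    assert_eq!(i, 200);
    // Narrowing usize -> u8 can lose information, so it is fallible:
    assert!(u8::try_from(300usize).is_err());
    assert_eq!(u8::try_from(200usize).unwrap(), 200u8);
}
```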
I would argue Deref coercions are still explicit, because the trait implementations are explicit. There is not a magic mapping of types whose references can be coerced to each other, it is exactly the ones that implement Deref<Target=T>.
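A minimal sketch with a made-up `Name` newtype: the coercion from `&Name` to `&str` only exists because the Deref impl is written out.

```rust
use std::ops::Deref;

// Hypothetical newtype: &Name coerces to &str only because this
// Deref impl is spelled out explicitly.
struct Name(String);

impl Deref for Name {
    type Target = str;
    fn deref(&self) -> &str { &self.0 }
}

fn greet(s: &str) -> String { format!("hello {}", s) }

fn main() {
    let n = Name("world".to_string());
    // &Name coerces to &str via the explicit impl above.
    assert_eq!(greet(&n), "hello world");
}
```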
Type conversions, even non-lossy ones, can teach people to use the wrong type. In C++ it's very common to see people use an int in a for loop indexing an array when you should always use size_t for that purpose. This misuse is so widespread that people hardly even know that the size_t type exists. https://www.viva64.com/en/a/0050/ has some nice material about why this matters and the type of bugs this can cause.
I dunno, "newtypes" are a fairly popular pattern, and if they automatically converted between the base type and other newtypes of the base type, they'd not really be useful.
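A quick sketch with made-up `Meters`/`Feet` newtypes:

```rust
// Two newtypes over f64; the whole point is that they don't interconvert.
struct Meters(f64);
#[allow(dead_code)]
struct Feet(f64);

fn add(a: Meters, b: Meters) -> Meters { Meters(a.0 + b.0) }

fn main() {
    let total = add(Meters(1.0), Meters(2.0));
    assert_eq!(total.0, 3.0);
    // add(Meters(1.0), Feet(2.0)) would not compile: no implicit conversion.
}
```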
It puzzles me a bit. For numerical software, exceptions work quite well: for performance, all allocations are in the constructors, so the RAII idiom fits naturally.
Using exceptions saves from writing a great deal of error-handling, which easily can have errors itself.
Has anyone seen properly-RAII-ed code where exceptions still have drawbacks?
    auto button = Button::create();
    window.addSubview(button);
    button.setLabel("test");
    button.setAlignment(Alignment::TOP_LEFT);
    button.sizeToFit();
If setLabel() throws an exception, you'll end up with a half-initialized button in the view hierarchy.
For exceptions to work predictably, all mutation within a try-catch block should be captured in a transaction that can be rolled back. Then you don't need to worry about each function possibly throwing an exception. Either the entire block commits its changes, or it's as if the entire block didn't happen at all.
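A rough way to approximate that in Rust, assuming the state is cheap to clone: run the fallible mutation on a draft copy and commit it only on success. (The `transact` helper here is made up for illustration.)

```rust
// Sketch: apply a fallible mutation transactionally by working on a
// clone and swapping it in only if the closure succeeds.
fn transact<T: Clone, E>(
    state: &mut T,
    f: impl FnOnce(&mut T) -> Result<(), E>,
) -> Result<(), E> {
    let mut draft = state.clone();
    f(&mut draft)?;
    *state = draft; // commit: either the whole block lands, or none of it
    Ok(())
}

fn main() {
    let mut items = vec![1, 2];
    // Failing transaction: the half-done push is rolled back.
    let r: Result<(), &str> = transact(&mut items, |v| {
        v.push(3);
        Err("setLabel failed")
    });
    assert!(r.is_err());
    assert_eq!(items, vec![1, 2]);
    // Successful transaction commits.
    let ok: Result<(), &str> = transact(&mut items, |v| {
        v.push(3);
        Ok(())
    });
    assert!(ok.is_ok());
    assert_eq!(items, vec![1, 2, 3]);
}
```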
Wouldn't this be fixed simply by calling `addSubview` last (and designing your UI toolkit in such a way that you don't need to imperatively call `sizeToFit` after adding a thing to a container, but implicitly sizing things correctly during layout)?
Sometimes whatever `window` and `button` actually are have references to each other (ex: parent/child relationships), so it's possible -- in fact surprisingly common -- that there is no correct order.
You need total knowledge of what can throw exceptions, and while the same is true for status codes, the raison d'etre for exceptions is shrinking the amount of error handling in your code. If you're checking for exceptions at the same rate you'd be checking for status codes, exceptions are pointless.
But generally, your suggestion to change the architecture points to the problem these "ergonomic" changes engender. You more or less never have to rearchitect or refactor things when using status codes, and rearchitecting and refactoring are error-prone and generally hugely fraught. The cognitive load of things like exceptions leads to lower productivity or lower quality, because you only have so many brain cycles and something has to give.
I think ergonomic changes are helpful to get more dynamic programmers into stricter languages like Rust. But I think we should keep in mind that typing and boilerplate are never really the bottleneck, and prioritize accordingly.
Any solution that boils down to "just get everyone whose code you have to interact with to always write it correctly" is doomed to failure.
Rust tries to make it impossible in the general case to do the wrong thing, and gives you an unsafe escape hatch for when you need to do something the compiler won't allow. The result is vastly safer code, because the compiler guards against the most typical human failings.
That sounds like a natural fit for Rust's lifetime-tracking mechanism. The compiler can automatically drop all initialized objects upon exception. In fact, I'm reasonably sure it does just that for panic handling.
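That's easy to demonstrate: destructors run during unwinding, so anything initialized before the panic is cleaned up.

```rust
use std::panic;
use std::sync::atomic::{AtomicBool, Ordering};

static DROPPED: AtomicBool = AtomicBool::new(false);

struct Resource;
impl Drop for Resource {
    fn drop(&mut self) {
        DROPPED.store(true, Ordering::SeqCst);
    }
}

fn main() {
    let result = panic::catch_unwind(|| {
        let _r = Resource; // initialized before the panic
        panic!("boom");    // unwinding runs _r's destructor
    });
    assert!(result.is_err());
    assert!(DROPPED.load(Ordering::SeqCst)); // Drop ran during unwinding
}
```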
That example is initialization code, but you'd run into the same problem even if the control were already initialized (and you were trying to set new properties on it). In that case, you'd likely have a borrowed pointer, so it wouldn't be dropped when the panic happened.
The pointer can be dropped. The pointed-to object cannot. So rolling back the transaction becomes an ownership problem: now our button has to do sensible things in setLabel to roll back changes made before an exception.
imo the dream of exceptions is deferring error handling so happy-path code can be straightforward and free of worries, but if every line of code indirectly under a try-catch block has to embody transactional thinking to enable proper RAII behavior, that seems like a huge obligation.
So, that's the bad-stuff perspective. But transactions in actual DB's show it doesn't have to be that way, and transaction isolation is likely to be easier in a language than a DB because you typically communicate via a fairly narrow, well-defined channel, and not via a morass of side-effects.
Another good-stuff perspective is processes or their equivalents (e.g. Erlang's internal processes or Docker containers). Happy-path code can ignore errors; sufficiently serious errors tear down the "process" and the calling code can decide how to deal with that (ideally without cleanup responsibilities).
As long as indirect effects are well-contained, there's nothing necessarily wrong with ignoring errors. Especially when fine-grained error handling simply isn't interesting, it can be a relief simply to ignore errors. E.g. DB style fine-grained locking is really only possible because you don't need to manually deal with each and every possible failure moment (because there are unbelievably many).
yeah i agree that seems like a generally desirable state of affairs.
not sure if i'd sign the statement "you typically communicate via a fairly narrow, well-defined channel, and not via a morass of side-effects" though ;)
The problem is that you can have half-initialized objects at all. I don't know Rust very well yet, but in a typical OOP language it's a constructor's job to completely initialize an object, and an exception in the constructor causes the object to not be created at all.
Wouldn't you want, in a similar manner, a single create() call to be responsible for full initialization? An API that makes it possible to have half-initialized entities looks like a design mistake to me. If an initialization is long and requires a lot of parameters, I personally split out a "config" object that has easy access to all of them and a "real" object that swallows it in the constructor or create() call all at once.
If setLabel() throws an exception, the scope is left, cleaning up button, which then no longer exists.
The only try/catch blocks are where you can do anything useful. That's typically only at the highest level of the application.
The button has to exist after leaving the scope, since the whole point is to create it and add it to the window. It can't remove itself when the function returns.
You could make this general idea work, though, by adding a "button.commit();" at the end of the function. The button destructor would remove itself unless commit() had been called -- basically an ad-hoc transaction on the level of a single button.
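A sketch of that as a drop guard (the `Guard` type here is made up): the rollback closure runs on drop unless commit() was called.

```rust
struct Guard<F: FnOnce()> {
    rollback: Option<F>,
}

impl<F: FnOnce()> Guard<F> {
    fn new(rollback: F) -> Self {
        Guard { rollback: Some(rollback) }
    }
    fn commit(mut self) {
        self.rollback = None; // disarm; Drop then does nothing
    }
}

impl<F: FnOnce()> Drop for Guard<F> {
    fn drop(&mut self) {
        if let Some(f) = self.rollback.take() {
            f(); // ad-hoc rollback, e.g. remove the button from the window
        }
    }
}

fn main() {
    let mut removed = false;
    {
        let g = Guard::new(|| removed = true);
        // ... setLabel() etc. would happen here; any early exit
        // (return, panic, ?) triggers the rollback ...
        drop(g); // dropped without commit: rollback runs
    }
    assert!(removed);

    let mut kept = false;
    {
        let g = Guard::new(|| kept = true);
        g.commit(); // disarmed: Drop does nothing
    }
    assert!(!kept);
}
```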
Looking at this from a Rusty point of view, ownership rules make this possible. When this function creates the button, it owns it. To mutate it (setLabel/setAlignment/sizeToFit), it has to own it. But for it to continue to exist after the function returns, something else has to own it. Therefore, Window::addSubview has to take ownership of the button, and so has to come last:
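A runnable sketch with minimal stand-ins for the hypothetical Button/Window API:

```rust
// Minimal stand-ins for the hypothetical Button/Window API, just to
// show that ownership forces add_subview to come last.
#[derive(Default)]
struct Button {
    label: String,
}

struct Window {
    subviews: Vec<Button>,
}

impl Button {
    fn create() -> Button {
        Button::default()
    }
    fn set_label(&mut self, s: &str) {
        self.label = s.to_string();
    }
}

impl Window {
    // Takes ownership: after this call the caller can't mutate the button.
    fn add_subview(&mut self, b: Button) {
        self.subviews.push(b);
    }
}

fn main() {
    let mut window = Window { subviews: Vec::new() };
    let mut button = Button::create();
    button.set_label("test"); // all mutation happens while we still own it
    window.add_subview(button);
    // button.set_label("x") here would not compile: `button` was moved.
    assert_eq!(window.subviews[0].label, "test");
}
```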
I know it's a toy example, but an exception in such a method doesn't seem like something that can be handled/retried like an IO problem. Any such exception should probably be handled by simply tearing down the app anyway.
> Has anyone seen properly-RAII-ed code where exceptions still have drawbacks?
The drawback is "properly-RAII-ed code". It's hard to write because where you write the RAIIed classes you don't have the context of the place / places where it's used. It's extremely hard to read and step through with a debugger - because control flow is all over the place. I don't want to write classes instead of simple regular control flow.
And regarding standard control flow code, I've never seen a code base that handles exceptions correctly / elegantly / in a disciplined way. The only way to deal with exceptions is handling them immediately, i.e.
    try:
        f = open(filepath)
    except FileNotFoundError:
        handle_file_not_found()
To be honest I prefer C-style return code handling. It would be only 2 or 3 lines instead of these 4, and no additional indentation, and no additional value return mechanism. The only cost of C-style handling vs exceptions is duplicating a few library functions: Need to offer interfaces that die and ones that allow handling the error. And there's the possibility of forgetting to handle the error (if the caller considers it an error, after all) once in a long while. But you have that problem with exceptions, too...
In other words, exceptions are good for writing sloppy code (I like writing Python scripts, but I rarely handle exceptions), and very bad for serious code bases.
At least with exceptions, “forgetting to handle the error” results in a predictable bubbling of the exception, as opposed to continuing with an unknown state.
That’s already a property of Rust errors. If a function errors, you can’t use its return value until you handle the error in some way: match on it explicitly, forward it with the ? operator, or unwrap it, indicating an unrecoverable error.
People misuse exceptions. They use them as control-flow, like another form of "if...then", and as you point out, they're terrible for that.
The proper way to use exceptions is when you want behavior that bubbles up. So if you do x=f(g(h(x))) and h(x) fails because the user didn't configure something, you only need the config-checking logic in h, not in g or f. This use-case is quite rare though, and overall not worth the extra language complexity.
If you don't mind me asking, what do you think of Go-style error handling where there are multiple return types: one exclusively for errors and however many for actual results. Is this better to your eyes?
I think there is not much difference. I used that approach myself once when writing a simple parser in Python. I think it lacks some elegance, but it may be better than out-arguments (a pointer argument used to return a value). On the other hand, the two-variable approach has the disadvantage that the "real" return variable is always (over)written, even in case of error.
I think there is one underused error handling mechanism: "asynchronous error handling", where you have to ask for errors explicitly. Like in the OpenGL API, for example. It might be the best choice where there is a lot of communication between largely isolated states.
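A sketch of that style (the `Device` type is made up, loosely modeled on glGetError): operations record errors on the side, and the caller polls for them explicitly when convenient.

```rust
// Errors are stashed on the device and queried out-of-band,
// instead of being returned from every call.
struct Device {
    last_error: Option<String>,
}

impl Device {
    fn new() -> Device {
        Device { last_error: None }
    }
    fn draw(&mut self, count: i32) {
        if count < 0 {
            self.last_error = Some(format!("invalid count {}", count));
        }
        // ... otherwise draw ...
    }
    // Explicit poll, like glGetError: returns and clears the error.
    fn take_error(&mut self) -> Option<String> {
        self.last_error.take()
    }
}

fn main() {
    let mut dev = Device::new();
    dev.draw(10);
    dev.draw(-1); // records an error, doesn't interrupt control flow
    dev.draw(5);
    assert!(dev.take_error().is_some()); // polled once...
    assert!(dev.take_error().is_none()); // ...and cleared
}
```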
So that could be one answer: if possible, design the API in such a way that functions that can have expected errors [1] do not return values that only make sense if there was no error, because otherwise control flow gets very irregular on the calling side. Such a design could be hard to do, though, I don't know.
The other answer could be: Don't have errors that you don't know how to handle. Unless you write very low-level code, like a library (which means you probably want to delegate the handling), just die on errors that you don't know how to handle. Make the die() call a best-effort cleanup routine if possible, and keep the important to-be-cleaned up state as global as possible (NOT on the call stack), to support such a cleanup routine.
[1] I hate the vagueness of that term, but take it to mean "no statically obvious wrong usage by the caller", like passing in made-up or corrupted values
Cute example, but what exactly does your handle_file_not_found() function do? Isn't it the caller further up the stack that knows best what to do? Assuming your example code is inside library_function_foo() and I'm a web server calling into library_function_foo(), I know I should return HTTP code 500 if "anything" throws an exception; if I reuse library_function_foo() in another command-line client project, I might want it to print the error message to stdout instead.
> Isn't it the caller further up the stack that knows best what to do?
Often yes (one of the exceptions being a simple die() routine that cleans up only global state). It's just pseudocode, and in real code you will often not call any functions in the exception handling block.
Sure, I know Haskell too, and that code above is a mouthful... I'm undecided what is worse in terms of "control flow is all over the code base" and "code is hard to understand" - C++ RAII or Haskell error handling. Moreover, the latter comes in a thousand incompatible flavours. Have fun writing monad transformers :-). It's next to impossible to engineer a robust and readable/maintainable application this way (besides simply dying on errors).
If you think it's all wrong then explain why. If all you want to do is signal in-group status or cargo-cult thinking then you'll get downvoted to oblivion. Hacker News has a pretty strongly developed immune response to this sort of comment.
The dislike comes from the fact that some people abuse exceptions as a flow control mechanism. This is due to the "everything is a nail if you have a hammer" problem: exceptions and RAII are a resource lifetime control mechanism that every modern and sane language should have, but very few programmers are ready to reason in terms of resources and lifetimes instead of instructions.
I think part of the insidiousness of exceptions is that we're tempted to think there's a way to use them that's not as a flow control mechanism. And there isn't. That's what they do: _alter flow_.
A sibling comment hints at the way out:
> For exceptions to work predictably, all mutation within a try-catch block should be captured in a transaction that can be rolled back.
The instant an exception can be raised from an area in code is the instant you need to make that entire area into functionally-pure side-effect-free logic, so that you can reason about it even when the exception is raised.
And then you've got a function that could just as well return an Either<Result, Error> -- what a strange "coincidence"!
> The instant an exception can be raised from an area in code is the instant you need to make that entire area into functionally-pure side-effect-free logic
I mean, that would be lovely, but is really only feasible for exceptions raised from code that is operating exclusively on values in memory (NPEs, for example). Rather a lot of exceptions are raised from necessarily side-effectful operations, especially those dealing with I/O.
Since you can't predict the future, you can't guarantee purity/transactionality of code that raises that kind, so saying "code should be pure when it can throw" is a true-but-impossible statement. That's why most languages with an emphasis on purity tend to eschew exceptions and/or invert control flow such that effectful stuff happens externally to the functions handling its output (e.g. the Maybe or IO monads).
This is correct, of course, but partial progress here is massive progress indeed.
The further you push the potentially complicated code towards functional purity, the more the I/O and other side-effecting/hard-to-roll-back things float to the top... and it becomes almost natural to group all of them together...
... into what you might almost be tempted to call a "transaction".
(Indeed, like the IO monad leads one to do. I've explicitly been trying to avoid saying the "m" word, though, because it seems to be a cognitive stop-sign for a lot of folks, for whatever reason.)
>>The instant an exception can be raised from an area in code is the instant you need to make that entire area into functionally-pure side-effect-free logic, so that you can reason about it even when the exception is raised.
Great insight. But I would say that explicit error handling doesn't help you much there. Side effects are hard, and side effects by nature usually cannot be reverted anyway. Deleted a file? Tough luck getting it back. IO and external interfaces could also be an issue: wrote 50% of a file to a USB stick when the user yanked it from the port? No amount of explicit error handling will help you there.
They aren't, though. COME FROM can come from anywhere; catch can only come from somewhere an exception actually got thrown. That is a night and day difference.
That is a difference without a practical consequence. The reality is that unless you have only checked exceptions, effectively any line of code could throw an exception. And if that is so, then you have to assume that the next line of code may not run. Just as COME FROM would cause the flow of control to mysteriously jump on you, exceptions could as well. You are right that at least the stack properly unwinds with exceptions, which disallows a certain class of bugs, but there is still a large body of bugs around the code's semantics itself that could occur.
Your best bet to maintain a clean state with exceptions is to never catch them and just let the whole process die. Otherwise there will be a never ending stream of bugs that your maintenance programmers will be dealing with. They'll be cursing exceptions the whole time.
Speaking of ergonomics, I seem to write a good deal of this for short-circuiting in Rust:
    let x = match x {
        Some(x) => x,
        None => return None,
    };
Rust doesn't have any other way to short-circuit and it gets pretty tedious.
I think Kotlin's named returns are the most ergonomic of all. That way you can just short-circuit from anywhere, whether in the middle of a fold or a deeply nested iteration.
    fn foo() {
        for x in xs {
            for y in ys {
                if y == 42 {
                    return@foo 42
                }
            }
        }
    }
The other thing I'd like for other languages to steal from Kotlin is how everything has the .let/.apply/etc. methods.
I'm rusty so this is more pseudocode, but these methods basically let you bring your own chaining to any value.
    let y = x.let { x ->
        x + 100
    }.apply { x ->
        println(x)
    }.let { x ->
        x * 2
    }
Once you use it, it feels pretty silly that library authors in other languages have to deliberately design a chainable API for the same ergonomics. ...And infuriating when you want to chain some more in other languages but you've run out of chain context.
Like when you use `.unwrap_or()` in Rust but still want to continue chaining on to the value. But can't because it's not in a container anymore with a chainable API.
    let x = maybe
        .map(foo)
        .and_then(bar)
        .unwrap_or(42);
    // ugh, want to apply a function to the result so
    // far but can't use `.map(to_base16)` because
    // it's been unwrapped into u32.
Just some things from my wishlist for the next language someone makes.
Well, you may be short-circuiting for any reason, from any signature.
For example, look a Swift's guard statements. It even narrows the type down-scope after you short-circuit.
If you borrow in the `match {expr}` scrutinee, Rust still considers the borrow live when you try to short-circuit in the `None` branch and will complain about the borrowed lifetime. Though this is something #![feature(nll)] fixes.
You can implement std::ops::Try for a custom data structure, but that doesn't help you in any of the cases where you just want to do some day-to-day short-circuiting without inventing your own data structure.
I shouldn't have used Option, though. It was a bad example.
I only implement `let` and `also` from Kotlin, because Rust doesn't really have the idea of method receivers in quite the same way as Kotlin, so the others are pointless.
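In Rust that can be sketched as a blanket extension trait. The trait and method names below are made up for illustration (`let` is a keyword in Rust, hence the trailing underscores):

```rust
// Kotlin-style `let`/`also` as a blanket extension trait.
trait Pipe: Sized {
    // Like Kotlin's `let`: transform the value, return the result.
    fn let_<R>(self, f: impl FnOnce(Self) -> R) -> R {
        f(self)
    }
    // Like Kotlin's `also`: run a side effect, pass the value through.
    fn also_(self, f: impl FnOnce(&Self)) -> Self {
        f(&self);
        self
    }
}
impl<T> Pipe for T {}

fn demo() -> i32 {
    5.let_(|x| x + 100)       // 105
        .also_(|x| println!("{}", x))
        .let_(|x| x * 2)      // 210
}
```

Since the blanket impl covers every `Sized` type, this brings "your own chaining" to any value without library authors designing for it.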
    let y = x.let { x ->
        x + 100
    }.apply { x ->
        println(x)
    }.let { x ->
        x * 2
    }
I think you've just reinvented imperative control flow. Isn't your example just modeling a reusable function that represents a series of instructions on a value?
    fn func(x: i32) -> i32 {
        let y = x + 100;
        println!("{}", y);
        let z = y * 2;
        z
    }
There was one in 2017 [0], and work started back then is going to be continued over the year [1]. It's not clear to me if there are going to be any new ergonomics proposals though.
The process is open for anyone to submit proposals, so nothing stops wacky ideas from ending up in RFCs. That does not mean they have a realistic chance of being accepted.
Nothing wacky was even close to being accepted.
The weirdest proposal that made it into Rust was auto-deref, and I think that's a question of balance. I'm on the side that the advantages outweigh the disadvantages; Rust without auto-deref would not be a sane language, but maybe some other patterns would have emerged.
I think no. The stuff the OP is describing is mostly not on the ergonomics agenda (I was thinking "strawman" while reading this), except possibly limited automatic cloning for trivial types like `Rc`. It doesn't seem to describe the team's actual thinking about ergonomics. The Rust team is very aware of the importance of writing foolproof, correct code in Rust (that's what it is designed for), and unless some major shift happens, I doubt they would make any major correctness sacrifices for the sake of ergonomics.
Think of a tool, like a hammer or a screwdriver. Its handle must have the correct shape for a comfortable and strong grip; a handle that's too thick, too thin, or oddly shaped, will be hard to use. The tool's shape must not force you to hold it in a strange position. The tool's length must be adequate for the task; neither too long, nor too short. Now apply that to a software tool, like a compiler.
I think when people talk about ergonomics of a programming language, it basically means "convenience", but more dramatic.
If your language offers a good way to express the problem, but it's inconvenient (too much typing to write it out, makes you think too much about inessential problems, fragile and needs to be adjusted when other code changes, ...), the total cognitive overhead (TCO? heh) might be higher than if you just used a different, worse way to express the problem, thus potentially making really cool language features completely useless because people won't want to use them.
it's kind of a subjective judgment. it's easier to define bad ergonomics, I think -- it's when the design of some part of the language forces you to write very contorted, strange code to accomplish a task that ought to be simpler.
you know it when you see it. in whatever languages you know, look for places where there's a lot of tedious repetitive typing, and deep nesting that seems frustratingly unnecessary.
if you know Java, working with exceptions in Java has terrible ergonomics, bad enough that people come to resent the entire exception system even though it has many nice qualities.
I agree with the overall sentiment that ergonomics should not come at the cost of less safety or correctness. However, two of the three examples I would actually consider to be _unergonomic_.
Specifically, empty/null values and exceptions. For small code examples they might seem ergonomic at first glance, but in my experience with large projects in Scala, Java, and C++, they actually make writing code more difficult (at least if you care about handling error cases at all).
Regarding null, I think it is much more ergonomic to work with an Option or Maybe type, with operations like `map`, `unwrap_or`, and pattern matching, than checking whether values are null all over the place. Not to mention that if values can be null, you have to know the details of any function you call to know whether you need to worry about null values at all. See https://www.lucidchart.com/techblog/2015/08/31/the-worst-mis... for more on how terrible null is.
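As a small sketch of the style described above (the function name and fallback value here are invented for illustration): the signature says whether absence is possible, and the caller deals with it in exactly one place.

```rust
// Absence is explicit in the type; the fallback is explicit too.
fn parse_port(s: Option<&str>) -> u16 {
    s.and_then(|s| s.parse::<u16>().ok()) // missing or unparsable -> None
        .unwrap_or(8080)                  // one explicit default
}
```

Compare this with null-based code, where every intermediate step would need its own `if (s != null)` check, and nothing in the signature warns the caller.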
Likewise with exception systems, in large codebases you end up with try/catch statements all over the place, because it's hard to know if the functions you are calling will throw an exception, and even if you know they don't _now_, maybe someone will add a throw later. With explicit result types, you know if you need to handle errors or not, and that won't change unless the signature of the function changes.
As for implicit type conversion, the example in the OP is obviously bad. But I don't think it sacrifices safety to allow implicit conversions for widening integer or floating point types, such as widening u32 to u64 or f32 to f64 (but not u32 to i32, i32 to u64, etc. I think u32 to i64 would be ok, but I'm not 100% sure).
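For what it's worth, std already encodes which of these widenings are lossless via `From`, including the u32-to-i64 case the parent was unsure about (every u32 value fits in an i64):

```rust
// Lossless widenings have `From` impls in std.
fn widen(x: u32) -> u64 {
    u64::from(x) // u32 -> u64: always lossless
}

fn widen_signed(x: u32) -> i64 {
    i64::from(x) // u32 -> i64: also lossless
}
// u32 -> i32 and i32 -> u64 can lose information, and indeed
// std offers no `From` impl for them, only fallible `TryFrom`.
```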
To be fair, the C++ noexcept feature isn't as powerful as it could be. There are languages with compile time checked exceptions, personally I prefer that to propagating every exception manually.
I'm not really up to speed on what the Rust team is thinking, but basically no. These are not the kinds of things the Rust team would do to improve ergonomics. Correctness is forefront in Rust. The only thing like the OP's examples that I can think of that has seriously been considered is automatic cloning for trivial types like `Rc`.
The Rust team thinks really hard about correctness before making any decisions and isn't going to add any ridiculous footguns to the language unless there's some organizational catastrophe that puts monkeys in charge of decision making.
* `?` can propagate values/short circuit control flow, but it also contains a hidden type conversion. The target needs to be a known type for it to compile.
* A proposed `catch` construct will include implicit type conversion for its result value.
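A minimal sketch of that hidden conversion in `?` (the error type below is made up for illustration): the callee's error is converted into the function's error type via `From` without any visible call.

```rust
use std::num::ParseIntError;

#[derive(Debug)]
struct AppError(String);

// This impl is what `?` silently invokes.
impl From<ParseIntError> for AppError {
    fn from(e: ParseIntError) -> Self {
        AppError(e.to_string())
    }
}

fn parse(s: &str) -> Result<i32, AppError> {
    // On error, `?` runs `From::from(ParseIntError)` behind the scenes,
    // which is why the target error type must be known to the compiler.
    let n: i32 = s.parse()?;
    Ok(n * 2)
}
```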
Some things that came up in the past but met resistance:
* Removing project structure from in-code to being defined by the filesystem.
* Hiding `Result` types in signatures with special syntax.
* Dropping immutable by default, though this was pre-1.0.
Also, `Rc` and `Arc` are good examples for this. They do seem trivial, but having them auto-clone would mean:
* Every newcomer who writes `fn foo(val: Arc<Bar>) {}` instead of `fn foo(val: &Arc<Bar>) {}` or `fn foo(val: &Bar) {}` will get an implicit atomic increment/decrement at every function call.
* It also makes it a lot harder to reason about code if you want to use a clone-on-mutation-unless-not-shared strategy.
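Roughly, this is the cost that auto-cloning would make invisible (the types here are invented for illustration). Today the refcount bump is spelled out at the call site:

```rust
use std::sync::Arc;

// By-value Arc: calling this costs an atomic increment (on clone)
// and an atomic decrement (when `val` is dropped).
fn foo(val: Arc<Vec<i32>>) -> usize {
    val.len()
}

fn demo() -> usize {
    let shared = Arc::new(vec![1, 2, 3]);
    // Explicit today; with auto-clone this cost would be hidden
    // behind every plain `foo(shared)` call.
    foo(Arc::clone(&shared))
}
```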
I think the `Rc` examples are very good, but in the case of special syntax for `Result`, isn't that a case where the two options are semantically the same, and you're just wary of the less verbose/loud option (I have withoutboats' post in mind here)?
Now I haven't programmed enough Rust to know if there's a real problem here, but I worry about things like NLL. It means the borrow checker accepts more programs, which is good, but it also means the model you need of its behaviour to correctly predict how it acts is a bit more complex. There's a trade-off there.
The code in the compiler is more complex, but the domain is now more fully covered, so there is less understanding needed for the programmer.
It now simply understands your intention, where before it would not. So previously you would write some code, be surprised that it does not compile, gain a thorough understanding of the limits of the borrow checker, write a workaround to appease the checker. Now you write the code, expecting it to be correct, and the borrow checker will agree.
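A typical example of what that means in practice: this compiles under NLL because the borrow ends at its last use, while the old lexical checker would have rejected it.

```rust
// Under lexical borrows, `first` was considered borrowed until the
// end of the block, so the `push` was an error. NLL ends the borrow
// at its last use, and the code compiles as the author intended.
fn demo() -> Vec<i32> {
    let mut v = vec![1, 2, 3];
    let first = &v[0];
    println!("first = {}", first); // last use of the borrow
    v.push(4);                     // fine: the borrow is already over
    v
}
```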
> It now simply understands your intention, where before it would not. So previously you would write some code, be surprised that it does not compile, gain a thorough understanding of the limits of the borrow checker
Much more important than it understanding your intention is you understanding its limits. If it understands your intention 10% more of the time, but it's harder for you to understand what's going on when it doesn't, then that is not something everyone is going to find simpler.
Another word that gets applied to systems that try to guess at your intentions rather than operate according to an easily predictable model is 'magic'.
You would be right if it guessed, but it doesn't. It's not magic. I'm not sure whether it still has limitations, but I think the idea is that eventually it won't have any.
It's not magic when a compiler compiles your correct code.
I would argue that it's actually the opposite. Understanding how borrows work right now takes more knowledge of how the checker operates than it will with non-lexical lifetimes.
I think most of the ergonomics issues can be addressed by improving the tooling without needing to pollute the language. An IDE could provide snippets and autocomplete for common patterns.
I'd say Rust's first line of solutions for this is macro_rules. They are structured, allow common patterns to be re-used in various places, can be reviewed and maintained in one place and apply to all uses, they can even be distributed via crates.io.
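For instance, the match-and-return-None pattern from upthread can be captured once in a macro and reused everywhere (the macro name here is made up; this is roughly what the old `try!` macro did for `Result`):

```rust
// Short-circuit out of an Option-returning function on None.
macro_rules! try_opt {
    ($e:expr) => {
        match $e {
            Some(v) => v,
            None => return None,
        }
    };
}

fn first_even_times_ten(xs: &[i32]) -> Option<i32> {
    // Expands to the match-with-early-return written out upthread.
    let x = *try_opt!(xs.iter().find(|&&x| x % 2 == 0));
    Some(x * 10)
}
```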
> Typing of the .unwrap() makes me acknowledge it could be None. You know, the effect "Oh crap, I better handle that too, right?". It forces me to fix my mental model, and not write the bug, instead of fixing the bug later on.
How does this work for Mutex::lock()? Do you actually try to handle the case where taking the lock fails, because it has been poisoned?
For a mutex, this means that the lock and try_lock
methods return a Result which indicates whether a
mutex has been poisoned or not. Most usage of a mutex
will simply unwrap() these results, propagating panics
among threads to ensure that a possibly invalid
invariant is not witnessed.
A poisoned mutex, however, does not prevent all access
to the underlying data. The PoisonError type has an
into_inner method which will return the guard that
would have otherwise been returned on a successful
lock. This allows access to the data, despite the
lock being poisoned.
The author argues that explicit unwrap() is good because it forces you to realize the value may be empty and think "Oh crap, I better handle that too." But the docs you linked say that "most usage of a mutex will simply unwrap these results" and not try to handle the case.
So it seems like the explicit unwrap() here is not preventing any bugs, it's just adding noise. Wouldn't automatic unwrap be better, at least in this case?
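For reference, a sketch of the recovery path those docs describe, with a lock deliberately poisoned by a panicking thread (the values here are invented):

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Instead of unwrap(), recover the guard even if the lock is poisoned.
fn read_even_if_poisoned(m: &Mutex<i32>) -> i32 {
    match m.lock() {
        Ok(guard) => *guard,
        // PoisonError still holds the guard; take it anyway.
        Err(poisoned) => *poisoned.into_inner(),
    }
}

fn demo() -> i32 {
    let m = Arc::new(Mutex::new(5));
    let m2 = Arc::clone(&m);
    // Poison the mutex: panic while holding the lock.
    let _ = thread::spawn(move || {
        let _guard = m2.lock().unwrap();
        panic!("boom");
    })
    .join();
    read_even_if_poisoned(&m) // still readable despite the poison
}
```

So the explicit `Result` at least leaves the choice to the caller: plain `unwrap()` to propagate the panic, or `into_inner()` when the data is known to be in a usable state.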
Exceptions, errors and bugs are all different, have different semantics, and subsequently different outcomes. Also, the author is confusing noexcept with nothrow.
>> Also the author is confusing noexcept with nothrow
I'm left wondering if that was an experiment to see if anyone would notice. I don't wonder about his point; most working C++ programmers don't understand the difference either. "...yet another keyword nobody learns to use."
Hmm yeah... I think I completely agree with this; the whole point of Rust (for me) is safety and fewer surprises - and the surprises tend to come from misguided ergonomics.
But is this a real danger now? Is the ergonomics initiative trying to make Rust into JavaScript or Ruby? That would be unfortunate; languages should be different, and the point of Rust is sacrificing some ergonomics for safety and, in the end, more power...