Oh man, don't get me started. This was a point in a talk I gave years ago called "Please Please Help the Compiler" (what I thought was a clever cut at the conventional wisdom at the time of "Don't Try to Help the Compiler")
I work on the MSVC backend. I argued pretty strenuously at the time that noexcept was costly and being marketed incorrectly. Perhaps the costs are worth it, but nonetheless there is a cost.
The reason is simple: there is a guarantee here that noexcept functions don't throw - if an exception does escape one, std::terminate has to be called. That has to be implemented, and there is some cost to that: conceptually, every noexcept function (or worse, every call to a noexcept function) is surrounded by a giant try/catch(...) block.
Yes there are optimizations here. But it's still not free
Less obvious: how does inlining work? What happens if you inline a noexcept function into a function that allows exceptions? Do we now have "regions" of noexcept-ness inside that function? (Answer: yes.) How do you implement that? Again, this is implementable, but it is even harder than the whole-function case, and a naive/early implementation might prohibit inlining across degrees of noexcept-ness to be correct/as-if. And guess what: this is what early versions of MSVC did, and it was our biggest problem - a problem which grew release after release as noexcept permeated the standard library.
Anyway. My point is, we need more backend compiler engineers on WG21 and not just front end, library, and language lawyer guys.
I argued then that if instead noexcept violations were undefined, we could ignore all this, and instead just treat it as the pure optimization it was being marketed as (ie, help prove a region can't throw, so we can elide entire try/catch blocks etc). The reaction to my suggestion was not positive.
*edit 2 also I have since added a heuristic bonus for the "inline" keyword because I could no longer stand the irony of "inline" not having anything to do with inlining
*edit 3 ok, also statements like "consider doing X if you have no security exposure" haven't held up well
> Anyway. My point is, we need more backend compiler engineers on WG21 and not just front end, library, and language lawyer guys.
Even better, the current way of working is broken, WG21 should only discuss papers that come with a preview implementation, just like in other language ecosystems.
We have had too many features approved with "on-paper only" designs that were proven to be bad ideas only when they finally got implemented - some of them removed or changed in later ISO revisions - which already proves the point that this isn't working.
> I argued then that if instead noexcept violations were undefined, we could ignore all this, and instead just treat it as the pure optimization it was being marketed as (ie, help prove a region can't throw, so we can elide entire try/catch blocks etc).
Do you know if the reasoning for originally switching noexcept violations from UB to calling std::terminate was documented anywhere? The corresponding meeting minutes [0] describe the vote to change the behavior but not the reason(s). There's this bit, though:
> [Adamczyk] added that there was strong consensus that this approach did not add call overhead in quality exception handling implementations, and did not restrict optimization unnecessarily.
I think WG21 has been violently against adding additional UB to the language because of some Hacker News articles a decade ago about people being alarmed at null pointer checks being elided, or things happening that didn't match their expectations around signed int overflow, or whatever. Generally it seems a view has spread that compiler implementers treat undefined behavior as a license to party, that we're generally having too much fun, and are not to be trusted.
In reality undefined behavior is useful in the sense that (like this case) it allows us to not have to write code to consider and handle certain situations - code which may make all situations slower, or allows certain optimizations to exist which work 99% of the time.
Regarding “not pan out”: I think the overhead of noexcept for the single function call case is fine, and inlining is and has always been the issue.
> I think WG21 has been violently against adding additional UB to the language, because of some hacker news articles a decade ago about people being alarmed at null pointer checks being elided or things happening that didn’t match their expectation in signed int overflow or whatever.
Huh, didn't expect the no-UB sentiment to have extended that far back!
> Regarding “not pan out”: I think the overhead of noexcept for the single function call case is fine, and inlining is and has always been the issue.
Do you know if the other major compilers also face similar issues?
Things are much better in 2024 in MSVC than they were in 2014. The overhead today is mostly the additional metadata associated with tracking the state, and most of the inlining incompatibilities were worked through (with a ton of work by the compiler devs). So it's a binary size issue. We've even been working on that (I remember doing work to combine adjacent identical regions, etc). Not sure what the status is in GCC/LLVM today.
I'm just a little sore about it because it was being sold as a "hey here is an optimization!" and it very much was not, at least from where I was sitting. I thought this was a very very good case of having it be UB (I think the entire class of user source annotations like this should be UB if the runtime behavior violates the user annotation)
Do you think optimizations could eventually bring the std::terminate version of noexcept near/up to par with a hypothetical UB noexcept, or do you think that at least some overhead will always be present?
Could the UB version of noexcept be provided as a compiler extension? Either a separate attribute or a compiler flag to switch the behavior would be fine.
It's kinda funny that C++ even in recent editions generally reaches for the UB gun to enable optimizations, but somehow noexcept ended up meaning "well actually, try/catch std::terminate". I bet most C++-damaged people would expect throwing in a noexcept function to simply be UB and potentially blow their heap off or something, instead of being neatly defined behavior with invisible overhead.
Probably the right thing for noexcept would be to enforce a "noexcept may only call noexcept methods", but that ship has sailed. I also understand that it would necessarily create the red/green method problem, but that's sort of unavoidable.
Unless you're C++-damaged enough to assume it's one of those bullshit gaslighting "it might actually not do anything lol" premature optimization keywords, like `constexpr`.
`inline` is my favorite example of this. It's a "This does things, not what you think it does, and also it's not used for what you think it is. Don't use it".
> I argued then that if instead noexcept violations were undefined, we could ignore all this, and instead just treat it as the pure optimization it was being marketed as (ie, help prove a region can't throw, so we can elide entire try/catch blocks etc). The reaction to my suggestion was not positive.
So instead of helping programmers actually write noexcept functions, you wanted to make this an even bigger footgun than it already is? How often are there try/catch blocks that are actually elideable in real-world code? How much performance would actually be gained by doing that, versus the cost of all of the security issues that this feature would introduce?
If the compiler actually checked that noexcept code can't throw exceptions (i.e. noexcept functions were only allowed to call other noexcept functions), and the only way to get exceptions in noexcept functions was calls to C code which then calls other C++ code that throws, then I would actually agree with you that this would have been OK as UB (since anyway there are no guarantees that even perfectly written C code that gets an exception wouldn't leave your system in a bad state). But with a feature that already relies on programmer care, and can break at every upgrade of a third party library, making this UB seems far too dangerous for far too little gain.
-fno-exceptions only prevents you from calling throw. If you don't want overhead, you likely want -fno-asynchronous-unwind-tables plus that clang flag that specifies that extern "C" functions don't throw.
I'm pretty sure I could see a roughly 10% binary size decrease in my C++ projects just by setting -fno-exceptions, and that was for C++ code that didn't use exceptions in the first place, so there must be more to it than just forbidding throw. Last time I tinkered with this stuff was around 2017 though.
And based on a few clang discourse threads, it only removes .eh_frame
I think this only affects binary size. I understand smaller binaries can load faster, but not being able to get stack traces from debuggers and profilers seems like a loss.
> there is a guarantee here that noexcept functions don't throw. std::terminate has to be called. That has to be implemented
Could you elaborate on how this causes more overhead than without noexcept? The fact that something has to be done when throwing an exception is true in both cases, right? Naively it'd seem like without noexcept, you raise the exception; and with noexcept, you call std::terminate instead. Presumably the compiler is already moving your exception-throwing instructions off the happy hot path.
Very very basic test with Clang: https://godbolt.org/z/6aqWWz4Pe
Looks like both variations have similar code structure, with 1 extra instruction for noexcept.
Pick a different architecture - anything 32-bit. Exception handling on 64-bit Windows works differently: the overhead is in the PE headers instead of in the asm directly (and is in general lower). You don't have the setup and teardown in your example.
Throwing an exception has the same overhead in both cases. In the case of a noexcept function, the function has to (or used to have to, depending on the architecture) set up an exception handling frame and remove it when leaving.
>Naively it'd seem like without noexcept, you raise the exception; and with noexcept, you call std::terminate instead
Except you may call a normal function from a noexcept function, and this function may still raise an exception.
If you're on one of the platforms with sane exception handling, it's a matter of emitting different assembly code for the landing pad so that when unwinding it calls std::terminate instead of running destructors for the local scope. Zero additional overhead. If you're on old 32-bit Microsoft Windows using MSVC 6 or something, well, you might have problems. One of the lesser ones being increased overhead for noexcept.
It's zero runtime overhead in the good case but still has an executable size overhead for functions that previously did not need to run any destructors.
Very true. Then again, if you don't need to tear down anything (ie. run destructors) during error handling you're either not doing any error handling or you're not doing any useful work.
I’m curious: where does the overhead of try/catch come from in a “zero-overhead” implementation?
Is it just that it forces the stack to be “sufficiently unwindable” in a way that might make it hard to apply optimisations that significantly alter the structure of the CFG? I could see inlining and TCO being tricky perhaps?
Or does Windows use a different implementation? Not sure if it uses the Itanium ABI or something else.
Everyone keeps glossing over the inlining issues, which I think are much larger.
“Zero overhead” refers to the actual functions code gen; there are still tables and stuff that have to be updated
Our implementation of noexcept for the single-function case I think is fine now. There is a single extra bit in the exception function info which is checked by the unwinder - other than requiring exception info in cases where we otherwise wouldn't need it, the cost is small.
The inlining case has always been both more complicated and more of a problem. If your language feature inhibits inlining in any situation you have a real problem
Doesn't every function already need exception unwinding metadata? If the function is marked noexcept, then can't you write the logical equivalent of "Unwinding instructions: Don't." and the exception dispatcher can call std::terminate when it sees that?
Nah that was mostly about extern "C" functions which technically can't throw (so the noexcept runtime stuff would be optimized out) but in practice there is a ton of code marked extern "C" which throws
Well, given that qsort and bsearch take a function pointer and call it, that function pointer can easily point to a function that throws. So I think this applies to all implementations of qsort and bsearch. Especially since there is no way to mark a function pointer as noexcept.
> Especially since there is no way to mark a function pointer as noexcept.
There is, noexcept is part of the type since C++17. In fact, I prefer noexcept function pointer parameters for C library wrappers, as I don't expect most libraries written in C to deal with stack unwinding at all.
Any library implementation that is C++ compliant must implement this. I'm pretty sure that libstdc++ + glibc is compliant, assuming sane glibc compiler options.
But to me these are - again - user-induced problems. I'm interested: if a user doesn't do stupid things, should they still be afraid that standard extern "C" code could throw? Say, std::sprintf(), which if I'm not mistaken boils down to C directly. Are there cases where the C standard library could throw without "help" from the user?
I don't think anything in the C parts of the C++ standard library throws its own exceptions. However, it's not completely unreasonable for a third party C library to use a C++ library underneath, and that might propagate any exceptions that the C++ side throws. This would be especially true if the C library were designed with some kind of plugin support, and someone supplied a C++ plugin.
Well, yeah, things can be related to many things, but throwing extern "C"s was one of the motivations as I recall for 'r'. r is about a compiler optimization where we elide the runtime terminate check if we can statically "prove" a function can never throw. To prove it statically we depend on things like extern "C" functions not throwing, even though users can (and do) totally write that code.