This has been a sore point in a lot of discussions about compiler optimizations and cryptographic code: the complaint that compilers and compiler engineers are sabotaging cryptographers' efforts to keep their code free of side channels. The issue has never been the compiler; it has always been the language. There was never a way to express the right intention from within C (or most other languages, really).
This primitive we're trying to introduce is meant to make up for this shortcoming without having to introduce additional rules in the standard.
There really ought to be a subset of C that lets you write portable assembly. One where only a defined set of optimisations are allowed and required to be performed, "inline" means always inline, the "register" and "auto" keywords have their original meanings, every stack variable is allocated unless otherwise indicated, every expression has defined evaluation order, every read/write from/to an address is carried out, nothing is ever reordered, and undefined behaviour is switched to machine-specific behaviour. Currently if you need that level of control, your only option is writing it in assembly, which gets painful when you need to support multiple architectures, or want fancy features like autocomplete or structs and functions.
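To make the gap concrete: one notorious place where standard C gives you no way to say "this write must happen" is zeroing a secret before it goes out of scope. A minimal sketch (the function name and buffer are hypothetical, and the exact behaviour depends on the compiler and optimisation level):

    #include <string.h>

    void handle_secret(void)
    {
        unsigned char key[32];
        /* ... derive and use key ... */

        /* Dead-store elimination: since `key` is never read after
         * this call, an optimising compiler is entitled to delete
         * the memset entirely -- the secret may stay in memory. */
        memset(key, 0, sizeof key);
    }

Under the proposed subset, the rule that every write to an address is carried out would make that final store non-elidable.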
> want fancy features like autocomplete or structs and functions
I would argue that given a certain ISA, it's probably easier to write an autocomplete extension for assembly targeting that ISA, rather than autocomplete for C, or goodness forbid, C++.
Likewise for structs, functions, jump targets, etc. One could probably set up snippets corresponding to different sorts of conditional execution—loops, if/else/while, switch, etc.
Because for timing-sensitive code, those are important. If a variable is really a register, cache-based timing attacks just don't happen, because there is no cache in between.
>how compilers and compiler engineers are sabotaging the efforts of cryptographers
I'm not exposed to this space very often, so maybe you or someone else could give me some context. "Sabotage" is a deliberate effort to ruin/hinder something. Are compiler engineers deliberately hindering the efforts of cryptographers? If yes... is there a reason why? Some long-running feud or something?
Or, through the course of their efforts to make compilers faster/etc, are cryptographers just getting the "short end of the stick" so to speak? Perhaps forgotten about because the number of cryptographers is dwarfed by the number of non-cryptographers? (Or any other explanation that I'm unaware of?)
It's more a viewpoint thing. Any construct cryptographers find that runs in constant time is something that could be optimized to run faster for non-cryptographic code. Constant-time constructs essentially are optimizer bug reports. There is always the danger that by popularizing a technique you are drawing the attention of a compiler contributor who wants to speed up a benchmark of that same construct in non-cryptographic code. So maybe it's not intended as sabotage, but it can sure feel that way when everything you do is explicitly targeted to be changed after you do it.
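A concrete instance of the dynamic is the classic branchless select. This is a sketch of the common idiom, not any particular library's code:

    #include <stdint.h>

    /* Constant-time select: returns a when mask is all ones,
     * b when mask is all zeros. No data-dependent branch
     * appears in the source. */
    uint32_t ct_select(uint32_t mask, uint32_t a, uint32_t b)
    {
        return (a & mask) | (b & ~mask);
    }

An optimizer that pattern-matches this as `mask ? a : b` is free to lower it back to a branch, and for non-cryptographic code that is a pure win -- which is exactly why the constant-time version keeps getting "fixed".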
It’s not intentional. The motivations of CPU designers, compiler writers, and optimizers are at odds with those of cryptographers. The former want to use every trick possible to squeeze out additional performance in the most common cases, while the latter absolutely require indistinguishable performance across all possibilities.
CPUs love branch prediction, because computation is already performed by the time a correctly guessed branch resolves; but cryptographic code needs equal performance no matter the input.
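The standard illustration is comparison: a naive memcmp exits at the first mismatch, so the time taken leaks how many leading bytes were correct. The usual constant-time idiom (again a sketch, not any specific library's implementation) touches every byte regardless:

    #include <stddef.h>

    /* Timing-safe comparison: runtime depends only on n, not on
     * where (or whether) the inputs differ. Returns 0 iff equal. */
    int ct_memcmp(const unsigned char *a, const unsigned char *b,
                  size_t n)
    {
        unsigned char diff = 0;
        for (size_t i = 0; i < n; i++)
            diff |= a[i] ^ b[i];
        return diff;
    }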
When a programmer asks for some register or memory location to be zeroed, they generally just want to be able to use a zero in some later operation and so it doesn’t really matter that a previous value was really overwritten. When a cryptographer does, they generally are trying to make it impossible to read the previous value. And they want to be able to have some guarantee that it wasn’t implicitly copied somewhere else in the interim.
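The usual workaround for the zeroing half of this is to force each store through a volatile-qualified pointer, which the compiler must treat as observable behavior and so cannot elide. A sketch of the idiom (memset_s from C11 Annex K and the non-standard explicit_bzero exist for the same reason, where available):

    #include <stddef.h>

    /* Each write through a volatile pointer is observable behavior,
     * so dead-store elimination cannot remove the loop. Note this
     * still says nothing about copies of the secret the compiler
     * may have spilled to other registers or stack slots. */
    void secure_zero(void *p, size_t n)
    {
        volatile unsigned char *vp = p;
        while (n--)
            *vp++ = 0;
    }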
Since the sibling comment is dead and thus I can’t reply to it: Search for “unintentional sabotage”, which should illustrate the usage. Despite appearances, it isn’t an oxymoron. See also meaning 3a on https://www.merriam-webster.com/dictionary/sabotage.
Every dictionary I've looked at, wikipedia, etc. all immediately and prominently highlight the intent part. It really seems like the defining characteristic of "sabotage" vs. other similar verbs. But, language is weird, so, ¯\_(ツ)_/¯.
As compilers have become more sophisticated and hardware architectures more complicated, there has been a growing sentiment that some of the code transformations done by modern compilers make the code hard to reason about and to predict.
A lot of software engineers see this as compiler engineers caring only about performance, as opposed to other aspects such as debuggability, safety, compile time, productivity, etc. I think that's where the "sabotage" comes from: basically, a focus on performance to the detriment of other things.
My 2 cents: the core problem is programmers expecting invariants and properties not defined in the language standard. The compiler only guarantees things as defined in the standard; expecting anything else is problematic.
I don't think it's nefarious, but it is sabotage. There's long been an implicit assumption that optimization should be more important than safety.
Yes, languages do lack good mechanisms to mark variables or sections as needing constant-time operation ... but compiler maintainers could have taken the view that that means all code should be compiled that way. Now instead we're marking data and sections as "secret" so that they can be left unoptimized. But why not the other way around?
I understand how we got here: speed and size are trivial to measure, and each results in real-world cost savings. I don't think any maintainer could withstand this pressure. But it's still deliberate.
> Now instead we're marking data and sections as "secret" so that they can be left unoptimized. But why not the other way around?
Worse cost-benefit tradeoff, perhaps? I'd imagine the amount of code that cares more about size/speed than constant-time operation far outnumbers the amount of code which prioritizes the opposite, and given the real-world benefits you mention and the relative newness of concerns about timing attacks I think it makes sense that compiler writers have defaulted to performance over constant-time performance.
In addition, I think a complicating factor is that compilers can't infer intent from code. The exact same pattern may be used in both performance- and timing-sensitive code, so absent some external signal the compiler has to choose whether it prioritizes speed or timing. If you think more code will benefit from speed than timing, then that is a reasonable default to go with.