So you claim that the compiler "knows about this but doesn't optimize because of some safety measures"? As far as I remember, compilers don't optimize math expressions / brackets, probably because the order of operations might affect the precision of ints/floats, and also because of complexity.
But my example is trivial (x % 2 == 0 && x % 3 == 0 is exactly the same as x % 6 == 0 for every C/C++ int), yet the compiler produced different outputs (and most likely is_divisible_by_6 is slower). Also, what null checks are you talking about (do you mean 0?)? The denominator is not null/0. Regardless, my point about not over-relying on compiler optimization (especially for macro algorithms (O notation) and math expressions) remains valid.
> the order of operations might affect the precision of ints/floats
That's only the problem of floats, with ints this issue doesn't exist.
Why do you write (x % 2 == 0 && x % 3 == 0) instead of (x % 2 == 0 & x % 3 == 0), when the latter is what you actually mean?
Are you sure that dividing by 6 is actually faster than dividing by 2 and 3? A division operation is quite costly compared to other arithmetic, and 2 and 3 are likely to get some special optimization (division by 2 is a bit shift), which isn't necessarily the case for 6.
> That's only the problem of floats, with ints this issue doesn't exist.
With ints the results can be dramatically different (often even worse than floats) even though in pure mathematics the order doesn't matter:
1 * 2 * 3 * 4 / 8 --> 3
3 * 4 / 8 * 1 * 2 --> 2
This is a trivial example, but it shows why it's extremely hard for compilers to optimize expressions and why they usually leave this task to humans.
But x % 2 == 0 && x % 3 == 0 isn't such a case: swapping the operands of && has no side effects, and neither does swapping the operands of each ==.
> Are you sure, that dividing by 6 is actually faster
Compilers usually transform divisions into multiplications when the denominator is a constant.
I wrote another example in another comment, but I'll write it again. I also tried this:
bool is_divisible_by_15(int x) {
    return x % 3 == 0 && x % 5 == 0;
}

bool is_divisible_by_15_optimal(int x) {
    return x % 15 == 0;
}
is_divisible_by_15 still has a branch, while is_divisible_by_15_optimal does not:
is_divisible_by_15(int):
        imul    eax, edi, -1431655765
        add     eax, 715827882
        cmp     eax, 1431655764
        jbe     .LBB0_2
        xor     eax, eax
        ret
.LBB0_2:
        imul    eax, edi, -858993459
        add     eax, 429496729
        cmp     eax, 858993459
        setb    al
        ret

is_divisible_by_15_optimal(int):
        imul    eax, edi, -286331153
        add     eax, 143165576
        cmp     eax, 286331153
        setb    al
        ret
My point is that the compiler still doesn't notice that the two functions are equivalent. Even when choosing 3 and 5 (to eliminate the questionable bit-check trick for 2), the first function comes out less optimal (more code plus a branch).
x % 3 == 0 is an expression without side effects (the only cases where the % operator traps are x % 0 and INT_MIN % -1), so the compiler is free to speculate it, allowing the && to be converted to (x % 2 == 0) & (x % 3 == 0).
Yes, compilers will tend to convert && and || to non-short-circuiting operations when able, so as to avoid control flow.
A number is divisible by 6 exactly when it is divisible by both 2 and 3 (since 2 and 3 are coprime), so the short-circuiting is inconsequential. They're bare ints, not pointers, so null isn't an issue.
https://www.open-std.org/jtc1/sc22/wg14/www/docs/n1256.pdf
§6.5.13 (Logical AND operator), Semantics