Better yet, try to find the corresponding ints (or, more realistically, shorts or chars) in the usual #include headers, and use the #define or const mnemonics for all the numbers.
Bonus points for finding them all in the same header file, or with similar names, so as to give the appearance that they actually mean something in the context of the prank.
The format for floating-point is specified by IEEE 754 (officially the "IEEE Standard for Floating-Point Arithmetic"). C permits but does not require IEEE 754 format; most implementations these days use it.
You only depend on the compiler to interpret these float literals correctly and emit their exact binary representation, which then decodes into valid instructions. As far as the CPU executing this code is concerned, it's machine code either way.
> Maybe in a different architecture or something?
Of course this isn't portable across CPU architectures, nor is it portable across operating systems, due to ABI differences at the very least.
I meant: how can you be sure a compiler would interpret them correctly and give you the exact binary value you wanted?
And by different architectures, I meant more like: if you compiled it on Intel and on AMD, could the results be different? Though I guess that part of the question makes no sense now that I think more about it.
const float main[] = {-8.10373123e+22, 6.16571324e-43, 1.58918456e-40, -7.11823707e-31, 5.81398733e-42, 1.26058568e-39, 6.72382769e-36, 2.17817833e-41, 2.16139414e-29, 1.10873646e+27, 1.76400414e+14, 1.74467096e+22, -221.039566};