
Floats being more mysterious and intimidating than ints, I prefer

const float main[] = {-8.10373123e+22, 6.16571324e-43, 1.58918456e-40, -7.11823707e-31, 5.81398733e-42, 1.26058568e-39, 6.72382769e-36, 2.17817833e-41, 2.16139414e-29, 1.10873646e+27, 1.76400414e+14, 1.74467096e+22, -221.039566};
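For anyone wondering how you'd produce such an array: take the raw machine-code bytes, reinterpret each 4-byte group as a float, and print it with enough digits to round-trip. A minimal sketch (the code bytes below are placeholders, not real instructions):

  #include <stdio.h>
  #include <string.h>

  int main(void) {
      /* Placeholder bytes standing in for real machine code. */
      unsigned char code[] = { 0x55, 0x48, 0x89, 0xe5, 0x5d, 0xc3, 0x90, 0x90 };
      for (size_t i = 0; i + 4 <= sizeof code; i += 4) {
          float f;
          memcpy(&f, code + i, 4);  /* type-pun without aliasing UB */
          printf("%.9g, ", f);      /* 9 significant digits round-trip a float */
      }
      putchar('\n');
  }

One caveat: any 4-byte group whose bit pattern happens to be a NaN will print as "nan" and lose its payload, so this only works when the code avoids those encodings.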



Better yet, try to find the corresponding ints (or, more realistically, shorts or chars) in the usual #include headers, and use the #define or const mnemonics for all the numbers.

Bonus points for finding them all in the same header file, or with similar names, so as to give the appearance that they actually mean something in the context of the prank.
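Purely as an illustration of the idea (these particular constants almost certainly don't decode to anything runnable):

  #include <errno.h>
  #include <signal.h>

  /* Each "mnemonic" is really just a word of machine code in disguise. */
  const short main[] = { EPERM, SIGKILL, ENOTDIR, SIGSEGV /* ... */ };

It leans on the same quirk as the float version: on typical toolchains, main only needs to be a symbol, not a function.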


It doesn't translate the octal and hexadecimal constants into decimal, but you could get a first cut at that from

  cd /usr/include; egrep -r '#define.*[0-9]+$' . | sed 's/#define[[:space:]]//' | awk '{print $NF, $1}' | sort -n
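If you have GNU awk, strtonum() understands 0x... hex and leading-zero octal, so a variant along these lines could catch those too:

  cd /usr/include; egrep -rh '#define[[:space:]]+[A-Za-z_][A-Za-z0-9_]*[[:space:]]+(0[xX][0-9a-fA-F]+|[0-9]+)$' . \
    | awk '{print strtonum($3), $2}' | sort -n

Note that strtonum() is a gawk extension, so this won't work with mawk or the BSD awks.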


Genuine question: can you be sure the conversion wouldn't introduce a wrong bit here or there? Maybe in a different architecture or something?

I'm not that good with CPUs past 16 bits; this is really out of my comfort zone, heh


Unless I'm missing something, this code is already architecture-dependent, so adding more architecture dependencies won't really hurt.


I think the format for single and double precision is defined by the standard. Beyond that, it may be implementation-dependent.


The format for floating point is specified by the IEEE floating-point standard, IEEE 754 (now also ISO/IEC 60559). C permits but does not require IEEE format; most implementations these days use it.
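If you want the compiler to confirm that assumption rather than just hoping, C99's Annex F defines a feature-test macro for it:

  /* Refuse to build unless the implementation claims IEC 60559
     (i.e. IEEE 754) conforming arithmetic. */
  #ifndef __STDC_IEC_559__
  #error "this trick assumes IEEE 754 floats"
  #endif

Some compilers don't define it even though their floats are in fact IEEE 754, so treat it as a conservative check.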


You only depend on the compiler to interpret these floats correctly and generate a binary representation that decodes into valid instructions. As far as the CPU executing this code is concerned, it's machine code either way.
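One way to check that the compiler did what you wanted is to read the bits back out and compare them against the intended instruction word (the literal below is just the first one from the array above; I haven't verified which instruction it encodes):

  #include <inttypes.h>
  #include <stdio.h>
  #include <string.h>

  int main(void) {
      const float f = -8.10373123e+22f;   /* first literal from the array */
      uint32_t bits;
      memcpy(&bits, &f, sizeof bits);     /* recover the exact bit pattern */
      printf("0x%08" PRIx32 "\n", bits);  /* compare against the intended word */
  }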

> Maybe in a different architecture or something?

Of course this isn't portable across CPU architectures; nor is it portable across operating systems, due to ABI differences at the very least.


Oh, I think you misunderstood me. Sorry.

I meant: how can you be sure a compiler would interpret them correctly and give you the exact binary value you wanted?

And by different architectures I meant more like: if you compiled it on Intel and on AMD, could the results be different? Though I guess that part of the question makes no sense now that I think more about it.



