Bitwise conversion of doubles using only FP multiplication and addition (2020) (dougallj.wordpress.com)
50 points by vitaut 17 hours ago | 6 comments




Recommended readings:

Jim McCann, Tom Murphy VII, The fluint8 Software Integer Library. https://tom7.org/papers/fluint.pdf

Tom Murphy VII, GradIEEEnt half decent. https://tom7.org/grad/murphy2023grad.pdf


Nice.

Also Tom Murphy the Seventh.

Odds on his child's name being Tom?


I love these "what if you only had X" puzzles. The constraint here (no bit access, only FP multiply and add) sounds impossible until you realize rounding behavior carries information.
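For example, one classic trick (my own illustration, not from the article): adding and then subtracting a large constant forces the low-order part of a value to be rounded away, so the rounding step itself leaks information you can harvest. In Scala (strict IEEE doubles on the JVM):

  // 1.5 * 2^52: adding it to a small x leaves too few significand bits for
  // x's fractional part, so the FPU rounds it away; subtracting the constant
  // back (i.e. adding -C) yields x rounded to the nearest integer, ties to even.
  val C = 6755399441055744.0
  def roundViaAdd(x: Double): Double = (x + C) - C

  roundViaAdd(2.3)   // 2.0
  roundViaAdd(2.5)   // 2.0 (ties go to even)
  roundViaAdd(-3.7)  // -4.0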

It makes sense that negative zero and the infinities are edge cases: those values break the arithmetic identities you'd otherwise rely on to tell doubles apart.
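A quick illustration for negative zero (mine, not the article's): ordinary comparisons can't see the sign of zero at all, and the usual way to expose it uses division, which is off the table here.

  -0.0 == 0.0   // true: comparison alone can't separate them
  -0.0 < 0.0    // false
  1.0 / -0.0    // -Infinity, but division isn't allowed under this constraint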


Interesting. When compiling Scala.js to ECMAScript 5, we still have an implementation of bitwise floating-point conversions based on double operations and integer shifts. [1] We also use a lookup table for powers of 2, and don't use anything but primitives (no log or pow, notably). We do have a few divisions, but I see now how I could turn them into multiplications. Dealing with subnormals is tricky because the inverses of the subnormal powers of 2 are not representable.
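For instance (my own quick check, not taken from the Scala.js sources), the reciprocal of the smallest subnormal overflows, so a division by such a power of 2 cannot simply be replaced by a multiplication by its inverse:

  // Double.MinPositiveValue is 2^-1074, the smallest subnormal double.
  1.0 / Double.MinPositiveValue      // Infinity: 2^1074 is not representable
  1.0 / java.lang.Double.MIN_NORMAL  // ~4.49e307: 2^1022, perfectly fine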

We have one loop: a binary search in the table of powers of 2 for the double-to-bits conversion. It has a fixed number of iterations. I had tried to unroll it, but that did not perform any better.
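Roughly, the shape is something like this (an illustrative sketch only, not the actual Scala.js code; the real table layout and names differ):

  // Powers of two 2^-1022 .. 2^1023, built exactly by repeated doubling.
  val powsOf2: Array[Double] = {
    val arr = new Array[Double](2046)
    arr(0) = java.lang.Double.MIN_NORMAL            // 2^-1022
    for (i <- 1 until arr.length) arr(i) = arr(i - 1) * 2.0
    arr
  }

  // For a positive, finite, normal v, return i with powsOf2(i) <= v < powsOf2(i + 1).
  // A fixed 11 iterations always suffice for the 11-bit exponent range.
  def exponentIndex(v: Double): Int = {
    var lo = 0
    var hi = powsOf2.length
    var iter = 0
    while (iter < 11) {
      val mid = (lo + hi) >>> 1
      if (v >= powsOf2(mid)) lo = mid else hi = mid
      iter += 1
    }
    lo                                              // unbiased exponent is lo - 1022
  }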

I'll have to dig more to understand how they got rid of the comparisons, though.

I wonder whether their implementation would be faster than ours. At the time I wrote our conversions, they beat every previous implementation I was aware of hands down.

[1] https://github.com/scala-js/scala-js/blob/v1.20.2/linker-pri...


Hi, author here. My version definitely shouldn't be faster unless something very weird is going on with the runtime (though I think with the benefit of hindsight some further optimisation of it is possible). I have never seen a good use for this, aside from as a proof that it is possible, but I can imagine it coming up if, say, you wanted to write an exploit for an esoteric programming language runtime.

If you still maintain this code and want to optimise it, I don't think you should need a full powers-of-two table; just having log(n) powers of two should do, in a pattern like:

  if (v > 2**1024) { v *= 2**-1024; e += 1024; }
  if (v > 2**512) { v *= 2**-512; e += 512; }
  ...
That's a straightforward memory saving, and it also leaves v normalised, so it gives you your fraction bits with a single multiplication or division. This is a little less simple than I'm making it look, because in reality you end up moving v to near the subnormal range, or having to use a different code path for v < 1 vs. v >= 2, or something like that. But otherwise, yeah, the code looks good.
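In case it's useful, fleshing out that pattern might look something like the following (illustrative Scala on my part, not code from either implementation; it assumes v is finite with 1.0 <= v, so it dodges the subnormal and v < 1 complications, and it starts at 2^512 because 2^1024 overflows a double):

  def splitExponent(v0: Double): (Int, Double) = {
    // Exact powers of two built by squaring: 2^1, 2^2, 2^4, ..., 2^512,
    // plus their reciprocals (all exactly representable, computed once).
    val pow2 = Array.iterate(2.0, 10)(x => x * x)
    val inv2 = pow2.map(1.0 / _)
    var v = v0
    var e = 0
    var i = 9
    while (i >= 0) {
      // Peel off 2^(2^i) from the exponent if it's still there.
      if (v >= pow2(i)) { v *= inv2(i); e += 1 << i }
      i -= 1
    }
    (e, v)   // e: unbiased exponent in [0, 1023]; v: significand in [1.0, 2.0)
  }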

Thanks for the feedback, and congrats on your achievement.

We do still maintain this code, although it is deprecated now.

Even with the unrolled tests, we would still keep the table for the decoding operation, I believe. But it's true that the unrolled approach would also provide the normalized value at the same time. That could be beneficial.



