Hacker News

This makes intuitive sense, but I still can't quite wrap my head around this idea of "no floating point for monetary values", or actually, the opposite.

I'm trying to understand what floating point is really good or necessary for, then. Would it be something like "huge or tiny scales, generally theoretical, scientific things"?



Floating point is good enough when you need to display something, and also for scientific calculations where a huge range of magnitudes matters more than exact decimal values.

You only start to see errors after many operations, and for most calculations the result is still good enough. For something like money, you need to be very exact, since 0.123541234123 cents is not a real value. On every trade you might "lose some cents", e.g. 0.00000001 or so. Over billions of transactions that builds up, and you start to either create or lose money arbitrarily.

In finance, if you are keeping track of money you want to use integer values (e.g. whole cents) wherever possible.
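To make the drift concrete, here is a minimal sketch (the amounts are made up) comparing a float accumulator with an integer-cents accumulator:

```python
# Add one cent (0.01) a million times, once with floats and once
# with integer cents. The float total quietly drifts off.
float_total = 0.0
int_total_cents = 0  # money tracked as an integer number of cents

for _ in range(1_000_000):
    float_total += 0.01
    int_total_cents += 1

print(float_total)            # close to 10000.0, but not exactly
print(int_total_cents / 100)  # exactly 10000.0
```

The float version is off by a tiny fraction, which is harmless for display but unacceptable on a ledger.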


In which scenarios is it not good enough?


Most decimal numbers cannot be represented exactly by floating point, so there is error in decimal to floating point conversion. You can see how this conversion works using this calculator: https://www.h-schmidt.net/FloatConverter/IEEE754.html

Most real numbers cannot be represented exactly, including most decimals and even some larger (positive) integers.

The smallest such integer in single-precision floating point is 16777217. This number cannot be represented exactly: the next floating point number after 16777216 is 16777218. Above that point you can no longer represent every whole number.

Even at one decimal place of precision, most numbers cannot be represented exactly. For example, 0.1, 0.2, 0.3 and 0.4 cannot, and 0.5 is the first positive number that can be.

At two decimal places, virtually no numbers can be represented exactly (0.25 is the first that can be), so there is always some error and therefore the possibility of rounding mistakes. Even if you accept that, beyond a certain magnitude some two-decimal numbers cannot be recovered at all, even with rounding: 131072 can be represented exactly, but the next representable number is 131072.015625, which rounds to 131072.02. No single-precision floating point number rounds to 131072.01.

Of course I'm using single-precision floating point here. These same problems exist for double-precision but at much larger numbers: the first integer that cannot be represented in double-precision is 9007199254740993. Either way, using floating point exposes you to the risk of errors in your calculations.
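You can verify these specific values yourself. A small Python sketch, round-tripping through single precision via `struct` to emulate a 32-bit float:

```python
import struct

def to_f32(x):
    """Round a Python float (double) to the nearest single-precision value."""
    return struct.unpack('f', struct.pack('f', x))[0]

# 16777216 (2**24) survives the round-trip; 16777217 collapses to 16777216.
print(to_f32(16777216.0))  # 16777216.0
print(to_f32(16777217.0))  # 16777216.0

# 131072.01 lands on the nearest representable value, 131072.015625.
print(to_f32(131072.01))   # 131072.015625

# 0.1 is not exact even in double precision:
print(f"{0.1:.20f}")       # 0.10000000000000000555

# 2**53 + 1 is the first integer double precision cannot hold:
print(int(float(9007199254740993)))  # 9007199254740992
```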

Hope that helps.


Thank you,

my question, however, was functional: when is it advised not to use floating point arithmetic?


As mentioned, it's when you must be sure you have perfect precision. Money and counting items are such examples. Say I am a bank and get audited: if I have a transaction type that keeps rounding up, creating e.g. 1 cent every time, then after 10k transactions I will have "created" $100 from nowhere. The auditing firm will not be happy about this.


Floating point is ideal for scientific applications, where the numbers are measurements/observations/approximations that have a margin of error built into them anyway, which is likely to be much larger than any additional imprecision introduced by floating point use.
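A quick back-of-the-envelope comparison (the 0.1% instrument uncertainty is an assumed, typical figure): double precision contributes a relative error near machine epsilon per operation, many orders of magnitude below measurement noise.

```python
import sys

measurement_rel_err = 1e-3                # assumed ~0.1% instrument uncertainty
float_rel_err = sys.float_info.epsilon    # ~2.2e-16 per floating point operation

# Measurement noise dominates rounding error by a factor of ~1e13.
print(measurement_rel_err / float_rel_err)
```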



