
For correctness? What was the result?


Up until a few days ago, we were testing that x^y was accurate to 1.3 ULPs for Float16, Float32, and Float64. However, for Float16 and Float32 we were actually accurate to (at least) 0.51 ULP, and Float64 was accurate to 1 ULP, so I made the tests stricter there. There are two exceptions to this: x^3 and x^-2. Because people from a math background often write code with literal powers and expect it to be fast, for small constant integer powers (-2, -1, 0, 1, 2, and 3) Julia will replace the call to pow with repeated multiplication (for example, x*x*x for x^3). As such, the accuracy bound for x^3 is 1.5 ULP and the bound for x^-2 is 2 ULP for all data types.
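To make the error measure concrete, here is a small sketch (in Python for illustration, not Julia's actual code; `ulp_error`, `cube`, and `inv_square` are hypothetical names) of measuring the error of the repeated-multiplication expansions in ULPs, using exact rational arithmetic as the reference:

```python
import math
from fractions import Fraction

def ulp_error(computed: float, exact: Fraction) -> float:
    """Error of `computed` in units in the last place (ULPs)."""
    return float(abs(Fraction(computed) - exact) / Fraction(math.ulp(computed)))

# The replacement described above, written out: a literal x^3 becomes
# repeated multiplication rather than a call to the general pow routine.
def cube(x: float) -> float:
    return x * x * x

def inv_square(x: float) -> float:
    return 1.0 / (x * x)

x = 1.1
# Each extra rounding step (one per multiply or divide) can add up to
# 0.5 ULP of error, hence the looser 1.5 and 2 ULP bounds quoted above.
print(ulp_error(cube(x), Fraction(x) ** 3))
print(ulp_error(inv_square(x), Fraction(1) / Fraction(x) ** 2))
```

`Fraction(x)` converts the float exactly, so the reference value is exact and the measurement only reflects the rounding in `cube`/`inv_square` themselves.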

This fixed a rare test failure on CI (since the bounds for ^3 and ^-2 were too tight, the previous test would fail in roughly 1 in 1000 runs), and it will prevent regressions in accuracy if I ever come back to try to make the implementation faster.
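The testing approach described above can be sketched as follows (an illustrative Python version under assumed details, not Julia's actual test suite): check x^3, computed as x*x*x, against an exact rational reference over many random inputs, with a bound wide enough (1.5 ULP, the worst case two rounding steps can introduce here) that the test cannot fail intermittently:

```python
import math
import random
from fractions import Fraction

random.seed(0)  # fixed seed: the test is deterministic and cannot flake on CI
worst = 0.0
for _ in range(10_000):
    x = random.uniform(1.0, 2.0)
    computed = x * x * x          # the expansion used for a literal x^3
    exact = Fraction(x) ** 3      # exact rational reference value
    # error in units in the last place of the computed result
    err = float(abs(Fraction(computed) - exact) / Fraction(math.ulp(computed)))
    worst = max(worst, err)
```

Two multiplications each round once, contributing at most 0.5 ULP apiece (with the first error then scaled by x < 2), which is where a 1.5 ULP bound comes from; a tighter bound would fail on the occasional unlucky input.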


Knowing adgjlsfhk1's work, yes, this would be for correctness, specifically measuring error in ULPs. More often than not, adgjlsfhk1 pushes Julia's numeric routines to errors below 0.5 ULP, that is, perfect correctly rounded behavior.
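"Correctly rounded" means the returned float is the representable value nearest the exact answer, i.e. the error is at most 0.5 ULP. A small sketch (illustrative Python, not Julia's machinery; `is_correctly_rounded` is a hypothetical helper) of checking that property exactly:

```python
import math
from fractions import Fraction

def is_correctly_rounded(computed: float, exact: Fraction) -> bool:
    """True if no neighbouring float is closer to `exact` than `computed`,
    i.e. the error is at most 0.5 ULP."""
    lo = math.nextafter(computed, -math.inf)
    hi = math.nextafter(computed, math.inf)
    d = abs(Fraction(computed) - exact)
    return d <= abs(Fraction(lo) - exact) and d <= abs(Fraction(hi) - exact)

# IEEE 754 requires the basic operations (such as division) to be correctly
# rounded, so 1.0/3.0 passes even though 1/3 is inexact in binary:
print(is_correctly_rounded(1.0 / 3.0, Fraction(1, 3)))  # → True
```

Division is guaranteed correctly rounded by IEEE 754; the point of sub-0.5-ULP work on functions like pow is to deliver that same guarantee where the standard does not require it.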


I actually am not a believer in perfect rounding. It tends to have a high performance cost, and IMO isn't that useful.



