And anyone implementing numerical algorithms is thankful for the tremendous amount of thought put into the fp spec. The complexity is worth it and makes the code much safer.
There was actually no "thought" put into the IEEE spec as such. It was merely a codification of the design of the Intel FPU, which was only one of many, very different FP implementations that existed before standardisation. There was thought put into that implementation, but the "standard" is merely a codification of that design.
It has many, many warts, and many of its design choices were driven by the hardware constraints of the time, not by considerations of what makes a good standard.
imo they were wrong almost as often as they were right. -0.0, the plethora of NaNs, and having separate Inf and NaN all make life a lot more annoying for people writing algorithms, for very little benefit.
I think I would find it very challenging but fun. Certainly more fun than writing a date/time library (way more inconsistent cases; daylight saving time horrors; leap seconds; date jumps when moving from Julian to Gregorian) or a file system (also fun, I think, but thoroughly testing it scares me off).
When people see that binary float-64 causes 0.1 + 0.2 != 0.3, the immediate instinct is to reach for decimal arithmetic. And then they claim that you must use decimal arithmetic for financial calculations. I would rate these statements as half-true at best. Yes, 0.1 + 0.2 = 0.3 using decimal floating-point or fixed-point arithmetic, and yes, it's bad accounting practice to sum a bunch of items and get a total that differs from the true answer.
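To make that concrete, here is the classic demonstration in Python, whose built-in float is binary-64 and whose standard decimal module does decimal arithmetic:

```python
from decimal import Decimal

# Binary float-64 cannot represent 0.1, 0.2, or 0.3 exactly.
print(0.1 + 0.2 == 0.3)  # False

# Decimal arithmetic represents these literals exactly, so the sum works out.
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
```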
But decimal floats fall short in subtle ways. Here is the simplest example: sales tax. In Ontario it's 13%. If you buy two items for $0.98 each, the tax on each is $0.1274. There is no legal, interoperable mechanism to charge the customer a fractional number of cents, so you just can't do that. If you are in charge of producing an invoice, you have to decide where to perform the rounding(s). You can round the tax on each item, which gives $0.13 each, so the total is ($0.98 + $0.13) × 2 = $2.22. Or you can add up all the pre-tax items ($1.96), calculate the tax ($0.2548), and round that ($0.25), which brings the total to $0.98 × 2 + $0.25 = $2.21, a different amount. Not only do you have to decide where to perform the rounding(s), you also have to keep track of how many extra decimal places you need. Massachusetts's sales tax is 6.25%, so that's two more decimal places. If you have discounts like "25% off", now you have another phenomenon that can introduce extra decimal places.
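The two rounding orders can be sketched with Python's decimal module (the variable names and the half-up rounding choice are mine, one policy among several):

```python
from decimal import Decimal, ROUND_HALF_UP

CENT = Decimal("0.01")
price = Decimal("0.98")  # per item
rate = Decimal("0.13")   # Ontario's 13% sales tax

# Option 1: round the tax on each line item, then sum.
line_tax = (price * rate).quantize(CENT, rounding=ROUND_HALF_UP)  # 0.1274 -> 0.13
total_a = (price + line_tax) * 2

# Option 2: sum the pre-tax items, compute the tax once, round once.
subtotal = price * 2                                              # 1.96
total_b = subtotal + (subtotal * rate).quantize(CENT, rounding=ROUND_HALF_UP)  # 0.2548 -> 0.25

print(total_a, total_b)  # 2.22 2.21 -- same items, two different totals
```

Decimal arithmetic is exact at every step here; the one-cent discrepancy comes purely from where the rounding policy is applied.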
If you do any kind of interest calculation, the number of decimal places will necessarily explode. The simplest example: take $100 at 10% annual interest compounded annually, which gives you $110, $121, $133.10, $146.41, $161.051, $177.1561, etc., and you will need to round eventually. Another example: 10% annual interest, but computed daily (so 10%/365 per day) and added to the account at the end of the month. Not only is 10%/365 inexact in decimal arithmetic, but the tiny per-day interest calculations also generate many decimal places.
If you do anything that philosophically uses "real numbers", then decimal FP has zero advantages compared to binary FP. If you use pow(), exp(), cos(), sin(), etc. for engineering calculations, continuous interest, physics modeling, describing objects in a 3D scene, etc., there will necessarily be all sorts of rational, irrational, and transcendental numbers flying around, and they will have to be approximated in one way or another.
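A quick sketch of that point: for an irrational result like sqrt(2), decimal arithmetic rounds just like binary does, only in a different base.

```python
import math
from decimal import Decimal

# Binary float-64: sqrt(2) is rounded, so squaring it misses 2.
print(math.sqrt(2) ** 2 == 2)  # False

# Decimal: the same story, just rounded to 28 significant digits
# in base 10 instead of 53 bits in base 2.
print(Decimal(2).sqrt() ** 2 == Decimal(2))  # False
```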
When writing financial software, one almost always reaches for a decimal library in that language and uses it instead of the language's built-in floats. (Sometimes you can use ints, but not once you need to do things like those described above.)
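For completeness, here is a sketch of the integer-cents approach and where it runs out: addition stays exact, but the moment a percentage appears you are back to choosing a rounding rule (the half-up integer formula below is one choice, not the only one):

```python
# Integer cents: sums are exact...
prices_cents = [98, 98]
subtotal = sum(prices_cents)             # 196 cents

# ...but 13% tax lands on fractional cents (25.48), so a rounding
# rule is forced. Half-up, done in hundredths of a cent:
tax_cents = (subtotal * 13 + 50) // 100  # 25 cents
total_cents = subtotal + tax_cents       # 221 cents
print(total_cents)
```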
Overall, yes, results need to be rounded, but it's pretty much financial software 101 not to use floats.