Hacker News | Taniwha's comments

If you like unreliable narration and rug pulls Nick Harkaway's novel 'The Gone-Away World' really takes the cake (and is brilliant)

Everyone who has ever had to build a floating-point unit has hated it with a passion; I've watched it done from afar, and done it myself

And anyone implementing numerical algorithms is thankful for the tremendous amount of thought put into the fp spec. The complexity is worth it and makes the code much safer.

There was actually no "thought" put into the IEEE spec as such. It was essentially a codification of the design of the Intel FPU (only one of many, very different, pre-standardisation FP implementations). There was thought put into that implementation, but the "standard" is merely a codification of its design.

It has many, many warts, and many design choices were made under the hardware constraints of that time, not with the considerations of a standard in mind.



he would, but he very much designed the standard around the idea that if you wanted to implement a floating point algorithm you would hire him.

imo they were wrong almost as much as they were right. -0.0, the plethora of NaNs, and having separate Inf and NaN all make the life of people writing algorithms a lot more annoying for very little benefit.

I think I would find it very challenging but fun. Certainly more fun than writing a date/time library (way more inconsistent cases; daylight saving time horrors; leap seconds; date jumps when moving from Julian to Gregorian) or a file system (also fun, I think, but thoroughly testing it scares me off)

I just wish there were a widespread decimal-based floating point standard & units.

When people see that binary-float-64 causes 0.1 + 0.2 != 0.3, the immediate instinct is to reach for decimal arithmetic. And then they claim that you must use decimal arithmetic for financial calculations. I would rate these statements as half-true at best. Yes, 0.1 + 0.2 = 0.3 using decimal floating-point or fixed-point arithmetic, and yes, it's bad accounting practice to sum a bunch of items and get a total that differs from the true answer.
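In Python, for instance, the stock float type is binary-float-64, while the standard decimal module gives the behaviour people expect:

```python
from decimal import Decimal

# binary-float-64: 0.1 and 0.2 have no exact base-2 representation
print(0.1 + 0.2 == 0.3)   # False
print(0.1 + 0.2)          # 0.30000000000000004

# decimal floating point: 0.1, 0.2 and 0.3 are all exactly representable
print(Decimal('0.1') + Decimal('0.2') == Decimal('0.3'))  # True
```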

But decimal floats fall short in subtle ways. Here is the simplest example: sales tax. In Ontario it's 13%. If you buy two items for $0.98 each, the tax on each is $0.1274. There is no legal, interoperable mechanism to charge the customer a fractional number of cents, so you just can't do that. If you are in charge of producing an invoice, you have to decide where to perform the rounding(s). You can round the tax on each item, which is $0.13 each, so the total is ($0.98 + $0.13) × 2 = $2.22. Or you can add up all the pre-tax items ($1.96) and calculate the tax ($0.2548) and round that ($0.25), which brings the total to $0.98×2 + $0.25 = $2.21, a different amount. Not only do you have to decide where to perform rounding(s), you also have to keep track of how many extra decimal places you need. Massachusetts's sales tax is 6.25%, so that's two more decimal places. If you have discounts like "25% off", now you have another phenomenon that can introduce extra decimal places.
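The two rounding policies can be sketched with Python's decimal module (ROUND_HALF_UP is chosen here purely for illustration; real tax rules specify their own rounding mode):

```python
from decimal import Decimal, ROUND_HALF_UP

CENT = Decimal('0.01')
price, rate, qty = Decimal('0.98'), Decimal('0.13'), 2

def to_cents(d):
    # round an exact decimal amount to whole cents
    return d.quantize(CENT, rounding=ROUND_HALF_UP)

# Policy A: round the tax on each item, then total
per_item = (price + to_cents(price * rate)) * qty     # (0.98 + 0.13) * 2
print(per_item)                                       # 2.22

# Policy B: total the pre-tax items, tax the subtotal, round once
subtotal = price * qty                                # 1.96
on_subtotal = subtotal + to_cents(subtotal * rate)    # 1.96 + 0.25
print(on_subtotal)                                    # 2.21
```

Both totals are "correct" decimal arithmetic; they simply disagree, which is the point.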

If you do any kind of interest calculation, you will necessarily have decimal places exploding. The simplest example is to take $100 at 10% annual interest compounded annually, which will give you $110, $121, $133.1, $146.41, $161.051, $177.1561, etc., and you will need to round eventually. Or another example: 10% annual interest, but computed daily (so 10%/365 per day) and added to the account at the end of the month. Not only is 10%/365 inexact in decimal arithmetic, but many decimal places will also be generated by the tiny daily interest calculations.
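The digit growth is easy to watch with Python's decimal module, which keeps exact products:

```python
from decimal import Decimal

# $100 at 10% annual interest, compounded annually, with no rounding:
# the number of decimal places grows by one every year.
balance = Decimal('100')
for year in range(1, 7):
    balance *= Decimal('1.1')
    print(year, balance)
# 1 110.0
# 2 121.00
# 3 133.100
# 4 146.4100
# 5 161.05100
# 6 177.156100
```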

If you do anything that philosophically uses "real numbers", then decimal FP has zero advantages compared to binary FP. If you use pow(), exp(), cos(), sin(), etc. for engineering calculations, continuous interest, physics modeling, describing objects in a 3D scene, etc., there will necessarily be all sorts of rational, irrational, and transcendental numbers flying around, and they will have to be approximated in one way or another.
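For example, one third is no more representable in decimal than in binary, so as soon as "real number" style division shows up, both systems have to round (a quick check using the decimal module's default 28-digit context):

```python
from decimal import Decimal

# Binary float: 1/3 rounds to the nearest 53-bit fraction; multiplying
# back by 3 happens to round to exactly 1.0, but only by luck
print((1 / 3) * 3 == 1.0)   # True

# Decimal float: 1/3 rounds to 28 significant digits
third = Decimal(1) / Decimal(3)
print(third)        # 0.3333... (28 threes)
print(third * 3)    # 0.9999... (28 nines)
print(third * 3 == 1)   # False
```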


When writing financial software, one almost always reaches for a decimal library in that language and ends up using that instead of the language's built-in floats. (Sometimes you can use ints, but you can't once you need to do things like those described above.)

Overall, yes, results need to be rounded, but it's pretty much financial software 101 not to use floats.


The one advantage of decimal floating point is that high schoolers have a better understanding of where decimal rounding happens.

This is legitimately a great explanation.

doesn't IEEE 754 define a decimal format? Specifically "decimal64"
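It does: IEEE 754-2008 added decimal32/64/128 interchange formats. As a sketch, Python's decimal module can approximate decimal64's arithmetic (16 significant digits, decimal exponents from -383 to 384) with a custom context, though it does not model the bit-level encoding:

```python
from decimal import Context, Decimal

# Roughly the arithmetic parameters of IEEE 754 decimal64
decimal64 = Context(prec=16, Emax=384, Emin=-383)

x = decimal64.add(Decimal('0.1'), Decimal('0.2'))
print(x)   # 0.3

# inexact results are rounded to 16 significant digits
print(decimal64.divide(Decimal(1), Decimal(3)))   # 0.3333333333333333
```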

Wearing my chip designer's hat, decimal FP just means more (and slower) gates

Would it help though? IMHO, being binary is one of the least confusing sides of IEEE 754.

I'm a long-time Verilog user (30+ years, a dozen or so tapeouts), and I've even written a couple of compilers, so I'm intimate with the gory details of event scheduling.

Used to be in the early days that some people depended on how the original Verilog interpreter ordered events; it was a silly thing (models would only run on one simulator, the cause of lots of angst).

'<=' assignment fixed a lot of these problems; using it correctly means that you can model synchronous logic without caring about event ordering (at the cost of an extra copy and an extra event, which can mostly be optimised away by a compiler).

In combination, 'always @(*)' with '=', and 'assign', give you reliable combinatorial logic.

In real-world logic a lot of event ordering is non-deterministic: one signal can appear before or after another depending on temperature. All in all, it's best not to design in a way that depends on it if you possibly can; do it right and you don't care about event ordering. Let your combinatorial circuits waggle around as their inputs change and catch the result in flops synchronously.
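The ordering point can be sketched outside Verilog; here is a toy Python model (mine, not full Verilog semantics) of why the '<=' sample-then-commit style makes assignment order irrelevant:

```python
def clock_blocking(state, updates):
    # '=' style: each assignment lands immediately, so the result
    # depends on the order of entries in `updates`
    for dst, fn in updates:
        state[dst] = fn(state)
    return state

def clock_nonblocking(state, updates):
    # '<=' style: sample every right-hand side against the pre-edge
    # state, then commit all at once -- the 'nxt' dict is the extra copy
    nxt = {dst: fn(state) for dst, fn in updates}
    state.update(nxt)
    return state

# Two-stage shift register, deliberately written in the "wrong" order:
#   q1 <= d; q2 <= q1
updates = [("q1", lambda s: s["d"]), ("q2", lambda s: s["q1"])]

good = clock_nonblocking({"d": 1, "q1": 0, "q2": 0}, updates)
bad = clock_blocking({"d": 1, "q1": 0, "q2": 0}, updates)
print(good["q2"], bad["q2"])   # 0 1 -- only nonblocking shifts correctly
```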

IMHO Verilog's main problems are that it: a) mixes flops and wires in a confusing way, and b) if you stray outside the synthesisable subset, lets you do things that do depend on event ordering and can get you into trouble (but you need that sometimes to build test benches)


BTW my really big peeve about modern Verilog is that it never picked up {/} as synonyms for begin/end; my experiments (20 years ago) showed that it was an easy extension, and the minor syntactic ambiguities were trivially fixable


A 68451, or a custom SUN-like (SRAM-based, kind of like a PDP-11) MMU. There was a guy who went around Silicon Valley in the mid 80s designing SUN-like MMUs for companies; they were all different, and some were broken (couldn't protect kernel space from user space).

68000s however had a problem: they couldn't return correctly from a page (MMU) fault (68010s fixed that); for a pre-VM (pre-BSD or SVR2) UNIX world, though, you could get around this with a few smarts


I think someone worked around it by running two 68000s in lock-step, or one step behind, or something like that.


yeah, that's rather a pain though, and it effectively leaves one 68k frozen while the other services the page fault; it means you can't run another user process while the page is being read in (because it too might cause a page fault)


Of course, while you're doing the next version you should knock out a tiny tapeout version; it should easily fit in a single cell (maybe 2 if you want to push the 256-byte SRAM in as well)


I think what we're all waiting for, for this to take off, is one of the cheap Chinese assembly houses to set up a web site where you can drag and drop chiplets into a carrier with standard interface buses; you press a button and, brrr-zap, robots pull chiplet die onto a substrate, bond out the interfaces, and ship you a prototype the same day, because you all know this is totally the future


I was at an alternative-type computer unconference and someone had organised a talk about the singularity. It was in a secondary school classroom, and as evening fell, in a room full of geeks, no one could figure out how to turn on the lights .... we concluded that the singularity probably wasn't going to happen


This sort of bug, especially in and around pipelines, is always hard to find. In chips I've built we had one guy who made a system that would generate random instruction streams to try to trigger as many as we possibly could


Yeah, I think random-instruction-sequence testing is a pretty good approach to try to find the problems you didn't think of up front. I wrote a very simple tool for this years ago to help flush out bugs in QEMU: https://gitlab.com/pm215/risu

Though the bugs we were looking to catch there were definitely not the multiple-interacting-subsystems type, and more just the "corner cases in input data values in floating point instructions" variety.
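The general shape of that style of testing (a hypothetical Python sketch, not risu itself) is: generate random operations, run them through a reference and a device under test, and flag any divergence:

```python
import random

# Reference vs. "device under test" for a toy saturating 8-bit add;
# the DUT has a deliberately injected corner-case bug at exactly 255.
def ref_sat_add(a, b):
    return min(a + b, 255)

def dut_sat_add(a, b):
    s = a + b
    return 255 if s > 255 else (254 if s == 255 else s)   # injected bug

def fuzz(n=10000, seed=1234):
    rng = random.Random(seed)
    failures = []
    for _ in range(n):
        a, b = rng.randrange(256), rng.randrange(256)
        if ref_sat_add(a, b) != dut_sat_add(a, b):
            failures.append((a, b))
    return failures

bad = fuzz()
# the random stream finds the corner case without anyone writing it by hand
print(len(bad), bad[:3])
```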


I think FP needs its own custom tests (billions of them!); I hate building FP units, they are really the pits


Oh sure, and next you'll say a byte is 10 bits ....


The word "octet" is absolutely the kibibyte of "bits in a byte".


It’s the French word for “byte”. In France your computer has Ko/Mo/Go.


I can go along with that, mostly. When you say "octet", some old-timer with an IBM 650 can't go whining that kids these days can't even read his 7-bit emails.


From https://archive.org/details/byte-magazine-1977-02/page/n145/...:

“A byte was described as consisting of any number of parallel bits from one to six. Thus a byte was assumed to have a length appropriate for the occasion. Its first use was in the context of the input-output equipment of the 1950s, which handled six bits at a time.”


"byte" doesn't even remotely resemble any decimal prefix, so it's okay. The problem is that prefixes "kilo", "mega", etc. are supposed to be decimal prefixes, but are used as binary. And what's worse, they aren't used consistently, sometimes they really mean decimal magnitudes, sometimes they don't.


It's security theatre, someone has to pay the performers

