
I've always found calculator applications to be quite funny. To use a calculator application while sitting at a computer is at least a bit ironic.


The original intention of computers was to perform calculations. The inclusion of facilities that allow for calculations to be performed on a computer is ironic how?

That seems like thinking it ironic that lawn mowers have a place to put blades for cutting grass.


The purpose of computers is to perform calculations. Then we built a massive pile of abstractions on top of that souped-up calculator to make it into a general purpose machine. Then we built a small calculator app on top of those abstractions. A lot of work to get back to square one.

(This obviously misses the point that we made computers usable by just about anybody along the way, but it’s still funny)


As someone who sat alongside my granddad while he soldered an 8080 clone to the board, entered commands via switches, and read the binary results off an LED display (not a modern LED display, but literally a row of 8 red Light Emitting Diodes, you know), I tell you with great confidence that clacking 1+2x3 on a keyboard and seeing the answer on an SVGA+ screen is much quicker and more convenient than:

  - planning a program
  - implementing missing mul and/or shl instructions in your mind
  - managing register pressure for more complex expressions
  - having to start from scratch on a mistake
  - debugging with a multimeter
Even if you think of an MBR-style calculator, there is at least a dozen KB of BIOS and a dozen KB of VGA BIOS just to start with a blank screen that can do a cursor and digits. There is nothing cheap or straightforward down there, and the first thing people did was abstract the bare metal away ASAFP.


One way of thinking of it is that a physical calculator has two parts: the circuitry that performs the calculations, and the UI - the physical buttons, the display, and the connections between them and the calculation circuitry - that lets people make effective use of it for their calculating needs.

A modern CPU has got the calculation part covered by itself, but it still needs the UI. Of course there are other kinds of UI for people to do calculations - for example Excel, or programming language REPLs - but desk calculators had pretty good UI for some use cases, so it makes sense to have an option based on them.


That only seems to explain why it could seem odd to buy an entire desktop PC and only ever use it as a basic calculator. In that case, it might make more sense to just buy a calculator. But there's certainly nothing odd about buying a general purpose computer and then using it for lots of different specific purposes, one of which is numerical calculation.


I agree with the parent, because needing to create code to run arbitrary calculations is quite weird. You would assume you could just send instructions to the CPU and print the output.

The idea that there’s a huge wrapper around the CPUs core functionality is indeed weird because you would expect that functionality to be available without any program at all.


> That seems like thinking it ironic that lawn mowers have a place to put blades for cutting grass.

A more apt comparison would be having a 40ft tall Mech that transforms into a jet and flies from neighborhood to neighborhood then crouches down and uses a tiny pair of scissors to cut the grass.


Given computers are nothing but a mind-boggling amount of very simple operations (add, mov, whatever, on fixed size values), I'd suggest an equally apt comparison is that a computer is billions of nanobots cutting grass one blade at a time, and the calculator app is the Mech.


I suppose what he meant is that it's ironic due to the insane overhead. When the end user double-clicks calc.exe, types in 1+1, and sees the result on the screen, the CPU has executed on the order of tens of millions of instructions (filesystem code, OS code, UI code, display driver code, etc.) to show a simple result that could have been produced by a single x86-64 ADD instruction.


Consider what happens when you type in "= 1 + 1" in the Google search bar then.


shudders ;)


The irony is more the unnecessary skeuomorphism: inside a machine whose capabilities are a strict superset of another's, we simulate the limited interface of that lesser machine to make it “look and feel” as though it were an actual calculator.

Obviously, a prompt that accepts arbitrarily complicated mathematical expressions and returns a value is a far superior interface to clicking on buttons with a mouse, but skeuomorphisms in design are very commonplace, in spite of their lesser efficiency.


> That seems like thinking it ironic that lawn mowers have a place to put blades for cutting grass.

A more apt analogy in this case would be a car having pedals.


This might be the Alanis version of ironic, which includes anything a bit funny, or weird, or that they can’t think of a term for. Like rain on your wedding day.


I think your parent comment meant computers are calculators themselves since forever.


Why is that ironic? The obvious use case for a computer is to, you know, compute things, and the most obvious way to do that is to use a program designed for that very purpose.


Because a computer is obviously more powerful than a desk calculator (unless you happen to have a programmable calculator, which would itself be a computer.)

Transposing the metaphor of a desk calculator verbatim is inefficient and unproductive, a purely graphical gimmick. Unix bc, which predates the Windows calculator by a couple of decades, is Turing-complete and in fact close to a full-featured modern dynamic language.

Today's equivalent would be a {python|ruby|node|perl} REPL.


The UX of a REPL sucks if you're not an expert. A calculator UI gives you a lot more guidance about what's available.

Accessibility is everything. I would guess the skeuomorphic calculator apps have 5 orders of magnitude more users than bc, and 4 more than the REPLs (not that the REPLs were even built for this purpose).

I think they will evolve over time away from skeuomorphism, but not into anything resembling a plain REPL.


Excel has the same accessibility as a calculator UI. I'll have that, in a smaller window :)

Also, the UX of a REPL sucks because it lives in a white-on-black terminal and prints scary version messages about a thing called "clang". Do the same thing with nice colors and friendly messages and I doubt the usability would be any worse.


This is precisely what I'm trying to illustrate. No, Excel does not have the same accessibility as a calculator UI. Excel is somewhere in between a calculator UI and bc in terms of how discoverable and usable it is.

Engineers fall into this trap a lot - projecting our own preferences onto what we think other people should find accessible. Most people require training before they can use Excel. The training required to use a calculator is so minimal, they teach it in elementary school. And a REPL is gibberish to the vast majority of people, even if you dress it up with nice colors and friendly messages (which to be fair help a lot).


To explain the irony for OP: a computer is, or always has been, basically a fancy calculator, so putting a program called Calculator on it seems redundant. It would make more sense to call it e.g. Algebra and have other programs called Geometry and so on.


I think the issue is more that calculator apps are one of the last strongholds of skeuomorphism. Why does almost every calculator app have a keypad, even those for non-touch OSes?

For a refreshingly modern take on calculator apps, take a look at something like Soulver/Numi/Calca which use a notebook-style interface, or SpeedCrunch which uses a REPL-style interface (and which does have a keypad, but you can turn it off).

If the idea of a calculator app just seems ugly to your brain, PowerShell does floating-point math and is scriptable to boot. I've used it as a calculator in the past, though it means having to put up with some wonky syntax.


> Why does almost every calculator app have a keypad, even those for non-touch OSes?

How would a non technical person do a square root or exponent? It's not obvious to type sqrt(2) or 3^4 into a text field, and people shouldn't have to read a manual to use a calculator. The default calculator is meant for the average person, and advanced tools are available for the rest.


It's been done with Mathematica, which includes a "basic" toolbar with square roots, integrals and the like. There's plenty of room for middle ground between a REPL and emulating a physical calculator.


The thing that makes me giggle is how it also happens to be the most primal function of a computer.

I'm sure all tools you suggest are great, but as someone who can program, I can just use the REPL of whatever programming language I'm using at the moment. I have no need for a dedicated tool in the first place.


Agreed. Operating systems turn computers into something more like filing systems. It's a shame because computers are actually really useful too!


Floating point implementations vary among programming languages. On the same machine and OS (Win10), the expression 0.3 - 0.1 yields different results depending on the calculator:

Powershell says 0.2,

the calc.exe app and LibreOffice Calc agree with 0.2,

bc running in Cygwin also gives 0.2,

Python 2 and 3 answer 0.19999999999999998,

JS in Vivaldi and Firefox also answer 0.19999999999999998,

but Portacle (Common Lisp) returns 0.20000002.
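
A quick sketch of where each of those answers can come from (this is just illustrative Python, and assumes NumPy is installed for the single-precision case):

    from decimal import Decimal
    from fractions import Fraction
    import numpy as np

    print(0.3 - 0.1)                          # IEEE double:     0.19999999999999998
    print(np.float32(0.3) - np.float32(0.1))  # IEEE single:     0.20000002
    print(Decimal("0.3") - Decimal("0.1"))    # decimal type:    0.2
    print(Fraction("0.3") - Fraction("0.1"))  # exact rationals: 1/5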


Not sure how this is related, but:

- if you get 0.2, the code is using rational numbers or a custom implementation

- 0.199999...8 is the double precision IEEE subtraction.

- 0.200000...2 is the single precision IEEE subtraction.

This looks like a typecasting/coercion issue. You can likely get all of the above in C++ or Java by using different explicit casts.


> if you get 0.2, the code is using rational numbers or a custom implementation

IIRC, there are languages that use IEEE double precision but whose default display presents this as 0.2 (basically, they round the output to roughly 15 significant digits by default, which turns 0.19999999999999998 into 0.2).
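
For instance, a quick Python sketch of how the same double can be shown either way depending on how many digits the default formatter keeps (the format specifiers here are just illustrative, not what any particular language uses):

    x = 0.3 - 0.1
    print(repr(x))      # 0.19999999999999998 (shortest string that round-trips to this exact double)
    print(f"{x:.15g}")  # 0.2 (rounded to 15 significant digits)
    print(f"{x:.17g}")  # 0.19999999999999998 (17 significant digits always round-trip a double)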


Using doubles in Java you get 0.199999..8, using BigDecimal you get the correct 0.2.

    jshell> new BigDecimal("0.3").subtract(new BigDecimal("0.1"))
    $8 ==> 0.2


Yeah, and using float literals you get 0.20000...2:

    jshell> .3f-.1f
    $1 ==> 0.20000002

We got ourselves a hat trick :)


0.2 could just be base-10 / decimal floating point rather than the base-2 single/double, I think. It's all IEEE 754; the base-10 stuff just came later.


    In [1]: from decimal import Decimal as D

    In [2]: D('0.3') - D('0.2')
    Out[2]: Decimal('0.1')


And then there are PLs that adopt the incredibly radical technique of treating 0.2 as two tenths[1].

[1] https://medium.com/@raiph_mellor/fixed-point-is-still-an-app...


As far as I know, BC doesn't do floating point arithmetic. It does fixed point arithmetic, built on top of arbitrary-precision integer arithmetic. Which is why it won't give you one of those funny floating point answers. It's what a human would do on a piece of paper.
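
A toy Python sketch of that approach, with made-up helper names and non-negative inputs only: keep everything as scaled integers and only format back to decimal at the very end.

    def to_fixed(s, scale=1):
        # parse a decimal string into an integer count of 10**-scale units
        whole, _, frac = s.partition(".")
        frac = (frac + "0" * scale)[:scale]
        return int(whole or 0) * 10**scale + int(frac or 0)

    def to_str(n, scale=1):
        # format the scaled integer back as a decimal string
        return f"{n // 10**scale}.{n % 10**scale:0{scale}d}"

    print(to_str(to_fixed("0.3") - to_fixed("0.1")))  # 0.2, with no binary floating point anywhere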

Floating point implementations can vary not only between languages, but also between different CPUs, if the language relies on the hardware implementation (which is the smart thing to do in most cases).

As a side note: Python's implementation of FP isn't standard-compliant. E.g. when you divide 5.0 by 0.0, the standard says you should get +inf. In Python you get an exception.


Actually, according to the standard, you should get either +inf or an exception; both behaviors are valid. However, the standard also says the user should be able to choose between getting +inf and getting an exception. If there is no way in Python to mask the divide-by-zero exception, then that is not standard-compliant.
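
For what it's worth, here is a sketch of the two permitted behaviors side by side (plain Python floats always raise; the NumPy part is just one convenient way to see the maskable IEEE behavior, and assumes NumPy is installed):

    import numpy as np

    try:
        5.0 / 0.0                       # plain Python floats: always an exception
    except ZeroDivisionError as e:
        print("python:", e)

    with np.errstate(divide="ignore"):  # exception masked: IEEE default result
        print("numpy, masked:", np.float64(5.0) / np.float64(0.0))   # inf

    with np.errstate(divide="raise"):   # exception unmasked: trap raised
        try:
            np.float64(5.0) / np.float64(0.0)
        except FloatingPointError as e:
            print("numpy, trapped:", e)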



