
Off-topic: if an integral is an approximation of a physical process, then that process is an approximation of the integral. I.e. we could in principle have a "chip" running a miniature lab experiment (like water flowing) with sensors that would output "an approximate solution of the Navier-Stokes equations". Numerical analysts would then write new versions of their algorithms assuming "an efficient hardware solution of N-S". Has this concept ever been used anywhere? I guess quantum computing would qualify.


> I.e. we could in principle have a "chip" running a miniature lab experiment (like water flowing)

It looks like this is a thing:

http://en.wikipedia.org/wiki/Fluidics

http://en.wikipedia.org/wiki/Water_integrator

http://en.wikipedia.org/wiki/MONIAC_Computer


Yes, but it's been largely displaced by digital computation: http://en.wikipedia.org/wiki/Analog_computer


Analog computers used to be a thing. If you want to get into this now it's actually a good time, thanks to the resurgence in modular synthesizers, which are just analog computation optimized for audio ranges. You can easily find modules specifically for math operations and so forth.


We used to use a carefully biased diode in the feedback loop of an op-amp to create logarithmic signals. I was expecting the article to dispense with digital electronics (in the core of the machine) and use tuned physical properties to actually do the calculations.
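The diode trick works because the diode's exponential I-V law turns a current into a logarithmic voltage. Assuming an ideal diode in the feedback path of an inverting op-amp (the textbook "transdiode" configuration, not necessarily the exact circuit the parent used), the relation is:

```latex
% Input V_in drives a current V_in / R into the inverting node;
% the op-amp forces that current through the feedback diode,
% whose drop is logarithmic in the current:
\[
V_{\mathrm{out}} = -V_T \ln\!\left(\frac{V_{\mathrm{in}}}{I_s R}\right),
\qquad V_T = \frac{kT}{q} \approx 26\,\mathrm{mV}\ \text{at room temperature}
\]
```

The "carefully biased" part matters because both the saturation current I_s and the thermal voltage V_T drift with temperature, so practical log amps add compensation.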

What a disappointment that the first few slides imply such a breakthrough, and then we find that the research is simply about reducing precision to the minimum needed. And it's patented? (rolls eyes)


> it's patented?

16-bit floats have been a thing for a long time.. :/

edit: never mind. I somehow convinced myself that this was just 16-bit floating-point arithmetic. It is not.


I wrote an IEEE single-precision compatible floating point library for embedded 8088/8086 processors in the mid-to-late '80s ... my employer was very cheap!

On the other hand, I learned a ton about efficient ways to implement various algorithms, and I remember a great way to do square roots that involved an initial multiplication (using normalized binary FP numbers) and then converged in about six iterations.
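I don't know the exact routine, but the shape of it sounds like Newton's method with a cheap initial guess derived from the normalized binary representation. A sketch (my reconstruction, not the original '80s code) of why ~six iterations suffice:

```python
import math

def newton_sqrt(a, iters=6):
    """Newton's method for sqrt(a): x <- (x + a/x) / 2.

    The initial guess comes from halving the binary exponent of the
    normalized float; convergence is then quadratic, so a handful of
    iterations reach full double precision.
    """
    if a <= 0.0:
        return 0.0
    m, e = math.frexp(a)           # a = m * 2**e, with m in [0.5, 1)
    x = math.ldexp(1.0, e // 2)    # rough guess: 2**(e/2)
    for _ in range(iters):
        x = 0.5 * (x + a / x)      # each step roughly doubles correct digits
    return x
```

With quadratic convergence, even a guess good to only a factor of two gains roughly 1 → 2 → 4 → 8 → 16 → 32 correct bits over the iterations, which matches the "about six iterations" recollection.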


You'd have to write a solver for this special hardware, which may make the performance advantage disappear. Double precision is usually required for N-S solvers; accounting for the errors means injecting noise and averaging (probably) over multiple iterations.

To get 64 bits of precision where only 16 bits are available would be.. tricky.

That said, with 1M PEs on a chip.. I'd definitely give it a go.

Source: GPU+CFD are my thing.

edit: it seems there is even less than 16 bits of precision available..
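One standard way to squeeze high accuracy out of a fast low-precision solver is mixed-precision iterative refinement (my framing, not necessarily what the parent has in mind): solve in low precision, compute the residual in high precision, correct, and repeat. A sketch, with float16 standing in for the hypothetical hardware:

```python
import numpy as np

def refine(A, b, iters=20):
    """Mixed-precision iterative refinement (sketch).

    The fast low-precision solver is simulated by rounding the matrix
    and the scaled residual through float16; residuals and the running
    solution are accumulated in float64.
    """
    A_lo = A.astype(np.float16).astype(np.float64)  # "hardware" sees a rounded A
    x = np.zeros_like(b)
    for _ in range(iters):
        r = b - A @ x                   # residual in full precision
        nr = np.linalg.norm(r)
        if nr == 0.0:
            break
        # scale into float16 range, round, solve, scale back
        r_lo = (r / nr).astype(np.float16).astype(np.float64)
        d = nr * np.linalg.solve(A_lo, r_lo)
        x = x + d                       # correction accumulated in float64
    return x
```

Each pass shrinks the error by roughly the low-precision rounding level times the condition number, so for well-conditioned systems you converge to float64 accuracy. The catch for CFD is exactly the parent's point: N-S systems are often badly conditioned, and then the contraction factor can exceed 1 and refinement stalls.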




