IBM to build brain-like computers (bbc.co.uk)
21 points by timtrueman on Nov 23, 2008 | hide | past | favorite | 28 comments


A headline from the 1950s.


We know much more about the brain today.

A similar headline can be found for using "brainlike techniques" for object detection. http://science.slashdot.org/article.pl?sid=07/10/11/2214233

But the algorithms in this case are from Numenta, Jeff Hawkins's company. The methodology is:

  1. Study the brain
  2. Come up with theories on how memory and processing in the brain work
  3. Write algorithms with the same structure.
I'd call that brain-like.
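
To give a flavor of step 3: one of the ideas Hawkins describes is representing inputs as sparse distributed representations and matching them by overlap. The snippet below is only a toy sketch of that idea; the names, sizes, and thresholds are made up for illustration and this is nothing like Numenta's actual code.

  import random

  # Toy sparse-distributed-representation matching (illustrative only).
  N_BITS = 2048        # size of each representation
  ACTIVE_BITS = 40     # ~2% sparsity, a figure often cited for cortical activity

  def random_sdr():
      """A random sparse pattern: the set of active bit indices."""
      return set(random.sample(range(N_BITS), ACTIVE_BITS))

  def overlap(a, b):
      """Similarity between two patterns = number of shared active bits."""
      return len(a & b)

  stored = [random_sdr() for _ in range(100)]   # "memorized" patterns
  query = stored[42]                            # query pattern (could be noisy)

  # Recognition = find the stored pattern with the highest overlap.
  best = max(stored, key=lambda s: overlap(s, query))
  print(overlap(best, query))  # 40 for an exact match, ~1 for unrelated patterns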


Yeah, except that Step 2 is super sketchy and arguably unscientific at this point.


I wouldn't call the thousands of researchers in the field unscientific at all. And this process is iterative. If you have an algorithm that works, the proximity to how the brain actually works is largely irrelevant.


There has been considerable progress in understanding the workings of human memory over the past few decades.


Haven't these guys heard of software? I thought the only good reason to do something in hardware was speed, but maybe I'm just crazy.


The BBC article left out a critical constraint from DARPA. The final deliverable (with the "complexity of a cat's brain") isn't just a model or a simulation. It has to be a physical artifact that occupies no more than two liters of volume and consumes no more than two kilowatts of power (this information comes from the DARPA BAA). So yes, prototyping will be done in software, but to reach that kind of efficiency, drastically new hardware is called for.

I'm not intimately familiar with IBM's plans, but this is one of the applications HP has lined up for the memristor technology they've been working on (HP is another of the three prime contractors on the original DARPA grant). The benefit of the HP approach is that data and computation are both local to the applicable memristor, which is much closer to how a neural system works. That means no time or energy is wasted shuttling data around, and the entire system state can be updated in parallel.

For an idea of why this is so exciting, keep in mind that HP plans to build memristors at a density of about a trillion per square centimeter, clocked at about a kilohertz. You get the rough equivalent of one floating-point operation per memristor per cycle, so at this estimated manufacturing density the expected performance is on the order of a petaflop per square centimeter, drawing on the order of tens of watts. It isn't really fair to make a comparison to Von Neumann machines since the architecture is so dramatically different and so application-specific, but for certain kinds of computations these new chips will be vastly faster and more efficient.
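
The arithmetic behind that estimate is straightforward, using only the figures quoted above (actual devices may differ):

  # Back-of-envelope check of the numbers above (quoted figures, not measurements).
  density_per_cm2 = 1e12   # memristors per square centimeter
  clock_hz = 1e3           # roughly a kilohertz
  ops_per_device = 1       # ~one floating-point-op equivalent per cycle

  ops_per_sec_per_cm2 = density_per_cm2 * clock_hz * ops_per_device
  print(ops_per_sec_per_cm2)  # 1e15 -> on the order of a petaflop per cm^2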

(for the sake of disclosure, I'm working on the DARPA SyNAPSE project, but not with IBM)


How can I apply for a job working on this project?


Well, it depends. :-)

Drop me an email with some information on where you're coming from and what kinds of roles you're looking for and I may be able to point you to the right people. (bchandle at gmail)


Just out of curiosity: Is there any relationship between your project and the Blue Brain Project out of EPFL?


AFAIK, Markram (leader of Blue Brain) isn't officially connected to SyNAPSE. He's one of the external advisors for a closely related project housed out of my department, though, so you could say he has an informal link. (it's a small field)


Thanks for your response. I'm doing an internship with Blue Brain this coming summer and fall, so I'm trying to get the lay of the land, so to speak. It's a very interesting field, and I look forward to working in it.

Best of luck with your project.


"Large-scale neural simulations are difficult for computers because neural models do not map onto the Von Neumann architecture. "

http://www.stanford.edu/group/brainsinsilicon/neurogrid.html

These guys have been doing HW neuro chips for a while. They claim to achieve performance similar to IBM supercomputers for a fraction of the price.


"Large-scale neural simulations are difficult for computers because neural models do not map onto the Von Neumann architecture. "

Maybe they should use the Harvard Architecture?

That's half a joke and half serious - a lot of DSP-type applications still use Harvard Architecture.

(ps. One of my favourite textbooks is "Computer Architecture: A Quantitative Approach" by Hennessy and Patterson).


"Large-scale neural simulations are difficult for computers because neural models do not map onto the Von Neumann architecture."

I'm not sure I buy that. I can understand if speed is a big deal, but I just can't imagine that neural models don't map onto current computers at all. I'd like to see an example that backs up that sentence.


They use analog building blocks, not digital ones:

"Mead succeeded in mimicking ion-flow across a neuron's membrane with electron-flow through a transistor's channel.

This should not have come as a surprise: the same physical forces are at work in both cases!

A silicon neuron is an analog electronic circuit of transistors that mimic a real neuron's repertoire of ion-channels.

Instead of designing different electronic circuits to emulate each of a wide variety of ion-selective protein pores that stud neurons' membranes, as Mead did in his silicon retina, we came up with a versatile silicon model."

http://www.stanford.edu/group/brainsinsilicon/about.html


And analog circuits cannot be built/modeled in software?


It's possible, but requires a massive supercomputer to do any reasonable calculations.

Neuromorphic chips provide a way to minimize both the time and energy used in calculation by having physical chips act as synapses.
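
To make the tradeoff concrete: an analog neuron can certainly be modeled in software, but only by stepping its membrane equation forward in small time increments. A minimal leaky integrate-and-fire neuron looks something like the sketch below (the parameters are illustrative, not from any particular model); multiply that inner loop by billions of neurons and trillions of synapses and the supercomputer requirement becomes clear.

  # Minimal leaky integrate-and-fire neuron, simulated by explicit time-stepping.
  # Parameters are illustrative only.
  dt, tau = 0.1, 10.0                   # time step and membrane time constant (ms)
  v_rest, v_thresh, v_reset = -65.0, -50.0, -65.0   # membrane potentials (mV)
  v = v_rest
  spikes = []

  for step in range(10000):             # 1 second of simulated time at 0.1 ms steps
      i_input = 20.0                     # constant injected current (arbitrary units)
      dv = (-(v - v_rest) + i_input) * (dt / tau)   # Euler step of the membrane ODE
      v += dv
      if v >= v_thresh:                  # threshold crossing = spike
          spikes.append(step * dt)
          v = v_reset

  print(len(spikes), "spikes in 1 s of simulated time")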


If the brain operated on the Von Neumann architecture, your thoughts would be processed tremendously slowly while expending an enormous amount of energy.

Perhaps the long lost ancestors of the human had brains operating on this principle, but simply couldn't store enough energy to survive.

The brain fires off multiple synapses in multiple places in the brain at different times. If the brain worked like a Von Neumann machine, it could only fire one synapse at one place in the brain at one particular point in time.

The brain executes things in parallel, while a Von Neumann machine is sequential. The brain is vastly more efficient than the Von Neumann machine I'm using to type this comment on.

Also, why do you not buy this assessment?


So basically you guys are saying it's way faster, but not that it's technically impossible to do in software on normal computers. I'm just not sure it's NOT cheaper and faster to prototype this stuff in software and then build it in hardware to make it faster once they figure it out. Prototyping in hardware seems slower and more expensive to me...

And FYI, I am familiar with neural networks (and artificial neural networks; I've written a few), and I understand it's massively parallel and that neural networks the size of the human brain cannot be simulated on today's hardware, but that's not the goal stated in this article. It just sounds like a bunch of guys who are really into hardware, so that's how they're going to do it, and there's nothing wrong with that. I just wanted to know whether it's something that isn't technically possible in software.
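
For what it's worth, the "prototype in software first" workflow described above is exactly how small artificial neural networks are usually explored; a forward pass is just a few matrix multiplies. A generic sketch follows (nothing here is specific to the IBM/DARPA project):

  import numpy as np

  # Generic feed-forward pass for a tiny artificial neural network.
  rng = np.random.default_rng(0)
  x = rng.normal(size=100)                  # input vector (e.g. sensor values)
  w1 = rng.normal(size=(50, 100)) * 0.1     # input -> hidden weights
  w2 = rng.normal(size=(10, 50)) * 0.1      # hidden -> output weights

  hidden = np.tanh(w1 @ x)                  # hidden-layer activations
  output = np.tanh(w2 @ hidden)             # output-layer activations
  print(output.shape)                       # (10,)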


To me, it sounds like a bunch of guys who wrote a grant proposal general enough that they can do almost whatever they want after they get the money and still be within the scope of the proposal.


There are five universities across many disciplines that are working on this very challenging project. To me, it sounds like you didn't read the article.


I think you make my point for me. :)


We can't emulate complex brain models on a computer because we don't have the power, and we shouldn't, because it's incredibly inefficient.

IBM proposes finding a hardware model or some material that can be used to properly mimic synapses in the brain. The academic term for this would be neuromorphic engineering.


Cool story, but when I saw the picture (of the cat using the computer), all I could do was think of the LOLCat caption that got cropped out.

"I can haz brain?"

"I thinkz lik computr?"

"I'm in ur nural netz, thinkin ur thotz"

It got so bad I couldn't finish the article.


I wonder, will the brain-like computers need sleep?


Oh yeah. Just as microprocessors can't be left switched on too long without the risk of transistors overheating from heat accumulation, brain-like computers will need rest, probably more than traditional computers.


I think the future is in large-scale parallel computing, so they shouldn't be wasting their money on this. Working on parallel processing would be the most practical path to attaining this goal.



