I wouldn't call the thousands of researchers in the field unscientific at all. And this process is iterative. If you have an algorithm that works, the proximity to how the brain actually works is largely irrelevant.
The BBC article left out a critical constraint from DARPA. The final deliverable (with the "complexity of a cat's brain") isn't just a model or a simulation. It has to be a physical artifact that occupies no more than two liters of volume and consumes no more than two kilowatts of power (this comes from the DARPA BAA). So yes, prototyping will be done in software, but to reach that kind of efficiency, drastically new hardware is called for.
I'm not intimately familiar with IBM's plans, but this is one of the applications HP has lined up for the memristor technology they've been working on (HP is another one of the three prime contractors on the original DARPA grant). The benefit to the HP approach is that data and computation are both local to the applicable memristor, which is much closer to a neural system. That means no time or energy is wasted shuttling data around and the entire system state can be updated in parallel.
For an idea of why this is so exciting, keep in mind that HP plans to build memristors at a density of about a trillion per square centimeter, clocked at about a kilohertz. You get the rough equivalent of one floating point operation per memristor per cycle. At that manufacturing density, the expected performance is on the order of a petaflop per square centimeter, drawing on the order of tens of watts. It isn't really fair to make a comparison to Von Neumann machines since the architecture is so dramatically different and so application-specific, but for certain kinds of computations these new chips will be vastly faster and more efficient.
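If you want to sanity-check that arithmetic yourself, here's a quick back-of-envelope sketch in Python. The numbers are just the rough estimates quoted above, not published HP specs:

    # Rough estimates from the discussion above, not published HP specs.
    density_per_cm2 = 1e12   # ~a trillion memristors per square centimeter
    clock_hz = 1e3           # ~kilohertz update rate
    ops_per_update = 1       # ~one floating point op per memristor per cycle

    ops_per_sec_per_cm2 = density_per_cm2 * clock_hz * ops_per_update
    print(ops_per_sec_per_cm2)   # 1e15 -> about a petaflop per square centimeter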
(for the sake of disclosure, I'm working on the DARPA SyNAPSE project, but not with IBM)
Drop me an email with some information on where you're coming from and what kinds of roles you're looking for and I may be able to point you to the right people. (bchandle at gmail)
AFAIK, Markram (leader of Blue Brain) isn't officially connected to SyNAPSE. He's one of the external advisors for a closely related project housed out of my department, though, so you could say he has an informal link. (it's a small field)
Thanks for your response. I'm doing an internship with Blue Brain this coming Summer-Fall, so I'm trying to get the lay of the land so to speak. It's a very interesting field, and I look forward to working in it.
"Large-scale neural simulations are difficult for computers because neural models do not map onto the Von Neumann architecture."
I'm not sure I buy that. I can understand if speed is a big deal, but I just can't imagine that neural models don't map onto current computers at all. I'd like to see an example supporting that sentence.
They use analog building blocks, not digital ones:
"Mead succeeded in mimicking ion-flow across a neuron's membrane with electron-flow through a transistor's channel.
This should not have come as a surprise: the same physical forces are at work in both cases!
A silicon neuron is an analog electronic circuit of transistors that mimic a real neuron's repertoire of ion-channels.
Instead of designing different electronic circuits to emulate each of a wide variety of ion-selective protein pores that stud neurons' membranes, as Mead did in his silicon retina, we came up with a versatile silicon model."
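If it helps to see what those circuits are emulating, here's a minimal leaky integrate-and-fire neuron in Python. It's only a digital caricature of the membrane dynamics, and all the parameter values are toy numbers for illustration, not anything from the actual silicon designs:

    # Minimal leaky integrate-and-fire neuron: a crude digital stand-in for the
    # membrane dynamics that the analog circuit computes with transistor currents.
    # All parameter values are toy numbers for illustration.
    tau = 20e-3        # membrane time constant (s)
    v_rest = -65e-3    # resting potential (V)
    v_thresh = -50e-3  # spike threshold (V)
    v_reset = -70e-3   # reset potential after a spike (V)
    r_m = 1e8          # membrane resistance (ohms)
    dt = 1e-4          # integration step (s)

    v = v_rest
    spike_times = []
    for step in range(10000):
        i_in = 4e-10 if step > 1000 else 0.0   # injected current (A)
        v += (-(v - v_rest) + r_m * i_in) * dt / tau
        if v >= v_thresh:
            spike_times.append(step * dt)
            v = v_reset

    print(len(spike_times), "spikes")

The analog version gets that integration "for free" from the physics of the transistor channel instead of stepping through it in a loop, which is the point of the quote above.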
If the brain operated on the Von Neumann architecture, your thoughts would be processed tremendously slowly while expending an enormous amount of energy.
Perhaps some long-lost ancestor of humans had a brain operating on this principle, but it simply couldn't store enough energy to survive.
The brain fires multiple synapses in multiple places at different times. If it worked like a Von Neumann machine, it could only fire one synapse in one place at any given moment.
The brain executes things in parallel, while a Von Neumann machine is sequential. The brain is vastly more efficient than the Von Neumann machine I'm using to type this comment.
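A toy illustration of the difference in Python (the sizes and weights are made up, and the vectorized line only approximates "everything at once" the way real parallel hardware would do it):

    import numpy as np

    n = 1000
    rng = np.random.default_rng(0)
    weights = rng.normal(size=(n, n)) * 0.01   # toy synaptic weights
    activity = rng.random(n)                   # current presynaptic activity

    # Von Neumann picture: visit one synapse at a time, sequentially.
    serial_input = np.zeros(n)
    for post in range(n):
        for pre in range(n):
            serial_input[post] += weights[post, pre] * activity[pre]

    # Brain-like picture: every postsynaptic input updated "at once"
    # (approximated here by a single vectorized operation).
    parallel_input = weights @ activity

    assert np.allclose(serial_input, parallel_input)

Same result either way; the argument is about how much time and energy it takes to get there, not about what's computable.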
So basically you guys are saying it's way faster, but not that it's technically impossible to do in software on normal computers. I'm just not sure it's NOT cheaper and faster to prototype this stuff in software and then build it in hardware to make it faster once they figure it out. Prototyping in hardware seems slower and more expensive to me...
And FYI, I am familiar with neural networks (and artificial neural networks; I've written a few), and I understand they're massively parallel and that neural networks the size of the human brain can't be simulated on today's hardware, but that's not the goal stated in this article. It just sounds like a bunch of guys who are really into hardware, and so that's how they're going to do it, and there's nothing wrong with that. I just wanted to know whether it's something that isn't technically possible in software.
To me, it sounds like a bunch of guys who wrote a grant proposal general enough that they can do almost whatever they want after they get the money and still be within the scope of the proposal.
There are five universities across many disciplines that are working on this very challenging project. To me, it sounds like you didn't read the article.
We can't emulate complex brain models on a computer because we don't have the computing power, and even if we could, we shouldn't, because it would be incredibly inefficient.
IBM proposes finding a hardware model or some material that can be used to properly mimic synapses in the brain. The academic term for this would be neuromorphic engineering.
Oh yeah. Just like microprocessors can't be left switched on too long because of the risk of transistors melting from heat accumulation, brain-like computers will need rest, probably more than traditional computers.
I think the future is in large-scale parallel computing.
They shouldn't be wasting their money on this; working on parallel processing would be the most practical path to attaining this goal.