Hacker News

Disclaimer: I know brain != deep learning neural nets. We do have a lot of evidence that the brain is _some_ type of network with analogue qualities.

Does it even make sense to say that a memory is stored somewhere in a specific region, if the brain is an analogue network? A property of analogue networks is that all nodes make a contribution, even if many of the contributions are infinitesimally small. The equivalent for deep learning is that information is stored in the weights and any given output is a function of all the weights. Some weights are more important than others in producing the output, but the point still stands.
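To make the "every weight contributes" point concrete, here is a toy sketch (not anything brain-related): in a small numpy MLP with made-up random weights, a finite-difference probe shows every single weight has some influence on the output, but the magnitudes are spread across orders of magnitude. The network and its weights are entirely hypothetical, chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny 2-layer tanh network with made-up weights (purely illustrative).
W1 = rng.normal(size=(8, 4))   # input -> hidden
W2 = rng.normal(size=(1, 8))   # hidden -> output
x = rng.normal(size=4)

def forward(W1, W2, x):
    return float(W2 @ np.tanh(W1 @ x))

# Finite-difference sensitivity of the output to every individual weight.
eps = 1e-6
base = forward(W1, W2, x)
sens = []
for W in (W1, W2):
    for idx in np.ndindex(W.shape):
        W[idx] += eps
        sens.append(abs(forward(W1, W2, x) - base) / eps)
        W[idx] -= eps

sens = np.array(sens)
# Nearly all weights nudge the output at least a little, but by
# wildly different amounts -- "some weights are more important".
print(f"min |dy/dw| = {sens.min():.2e}, max = {sens.max():.2e}")
```

The same qualitative picture holds for any dense analogue network: the output is a function of all the weights, just with very unequal contributions.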



If I take a pretrained ImageNet classifier and just make it wider by adding nodes with random weights and biases, I can still feed the network an image and get a reasonably correct label as output. In this example I could obviously point at the nodes from the original network and say that those do the image recognition; we know the rest of the network contributes nothing but noise.
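A minimal numpy sketch of that widening experiment, with a tiny made-up MLP standing in for the ImageNet model (the weights, sizes, and the 0.01 scaling of the new units' outgoing weights are all assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a "pretrained" net: a tiny tanh MLP with fixed weights.
W1 = rng.normal(size=(16, 10))  # input -> hidden
W2 = rng.normal(size=(3, 16))   # hidden -> logits
x = rng.normal(size=10)

def forward(W1, W2, x):
    return W2 @ np.tanh(W1 @ x)

original = forward(W1, W2, x)

# Widen the hidden layer with 16 extra units carrying random input
# weights. Their outgoing weights are random but scaled down, so they
# inject only a little noise into the logits.
extra_in = rng.normal(size=(16, 10))
extra_out = 0.01 * rng.normal(size=(3, 16))
W1_wide = np.vstack([W1, extra_in])
W2_wide = np.hstack([W2, extra_out])

widened = forward(W1_wide, W2_wide, x)

# The extra units perturb the output only slightly; the original
# sub-network still dominates the prediction.
print("max logit deviation:", np.abs(widened - original).max())
```

Technically every node in the widened network contributes to the output, but the perturbation from the random units is small, which is the sense in which only the original nodes "do the image recognition".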

On a technical level, of course, the whole network contributed to the output. In everyday reasoning and language, however, we usually focus on the parts that matter to a reasonable degree and ignore the rest. A sack of rice falling over in 2005 might have contributed to the 2008 financial crisis. With the world being an analogue network of particles, it even seems obvious that that sack of rice must have had some infinitesimal influence one way or the other; it's just more practical to ignore it.


I don't have any clear answers for you, but one interesting note here: a recent paper showed that it takes a deep neural net to capture the I/O behavior of a single cortical neuron. That should give you some idea of the complexity involved compared to artificial neural nets, re: your disclaimer.

https://www.sciencedirect.com/science/article/abs/pii/S08966...


> A temporally convolutional DNN with five to eight layers was required to capture the I/O mapping of a realistic model of a layer 5 cortical pyramidal cell (L5PC).

> When NMDA receptors were removed, a much simpler network (fully connected neural network with one hidden layer) was sufficient to fit the model.


Some types of artificial neural networks are biologically plausible. The layers of an artificial network loosely mirror the layered structure of the cerebral cortex.


The difference being, of course, that as opposed to a mere 16-bit parameter, a cortical neuron is a complex "nanomechanical" machine capable of significant computation in its own right.



