There is another model of memory which I totally love. In it, memory is not stored at all; the brain is more like an antenna that can retrieve information from the spacetime coordinates where events actually took place.
Because everything is constantly moving around something, each event in the universe probably has a unique location, a real physical place where it happened.
So the idea is that as we're moving through space we're leaving a trail, as the events are (for lack of a better word) "printed" in the fabric of space. And since space is apparently not empty at all but filled with Planck-scale micro-wormholes entangling everything into one universal neural network, our brains should be perfectly capable of tracing back our unique trails through space and retrieving information through those micro-wormholes.
As a developer, I think it would make the most sense to only save references rather than trying to store all events inside everyone's brain. That would be a huge waste of resources and just damn stupid.
And probably the way we humans have built our computers also reflects how the universe really works, because after all we are bits of the universe doing whatever the universe is doing.
This is quantum-woo-fueled neo-Cartesian dualist bollocks. Sorry, not sorry. For starters, if events in space at any point in time are accessible as "memories", the laws of physics that enable that shouldn't be constrained to memory as their only application. The same physical mechanism would have to enable telepathy and even time travel. Second, it violates the "no hidden variables" results from recent Bell-inequality QM experiments. Third, it requires long-term, stable, room-temperature entanglement basically everywhere (HAHAHA LOL NO). Fourth, it flies in the face of what we do know about how neurons compute things via signal spikes and activation potentials. Fifth, it fails to account for the numerous, reproducible experiments demonstrating the creation of erroneous memories made from whole cloth. Need I go on?
I can't believe this comment isn't buried. It's one thing to engage in a bit of rank speculation outside your specialty, but this is straight-up crackpot science.
This would be a much more valuable and informative comment if it weren't wrapped in such self-consciously rude delivery. Saying so as someone who is prone to making that mistake.
It's one thing to make an honest mistake; it's another to make half-baked conjectures about entire swaths of research fields where people have made careers and written fully-baked PhD theses. It's frankly disrespectful to everyone's time when someone who hasn't even tried to be reasonably informed makes up some random explanation for everything and presents it as if their "insight" is worth anything whatsoever. My rudeness is on par with, and in response to, rudeness. Failing to make bullshit feel unwelcome in no uncertain terms is nothing more than an invitation for more bullshit.
My entire point was that, if you know better, you'll make bullshit more unwelcome by kindly inviting well-meaning bullshitters to reconsider their bullshit than by clobbering them. It's not quite "you catch more flies with honey", but it's a narrower application of the same principle.
I see no need to tolerate ignorant non sequiturs but I’ll be kind in the spirit of my point and ask that you re-read my prior comments and hopefully understand that’s not what I’m suggesting.
Being inflexibly and uniformly critical of all ignorance also drives people off. And the direct subject to a reply isn’t always the most important audience. Sometimes you’re persuading people who are passively reading along. If that sounds absurd in this scenario, consider the number of times a gentler rebuttal in a discussion has given you pause to reconsider some foolish idea you hadn’t expressed. If you can’t think of one, you’re either an outlier genius or maybe overconfident and overcompensating.
Either way, most people aren't disabused of their ignorance by being chastised for it. Speaking as someone who's lost important friends learning that the hard way.
The value and information in @IIAOPSW's comment is pure gold when you compare it to the original one. The delivery adds value in proportion to the absurdity of the post.
Sometimes expressing left-field/weird artistic/speculative/magical thoughts acts like having a tattoo on your face, i.e., it makes condescending, anti-social people more likely to out themselves, which can be both informative and entertaining.
> As developer I think it would make most sense to only save references rather than trying store all events inside everyones brains. That would be huge waste of resources and just damn stupid.
> And probably the way we humans have built our computers also reflects how the universe really works, because after all we are bits of the universe doing whatever the universe is doing.
I think this is way too simplistic. Just as planes don't fly like birds, computers don't work like "the universe".
We don't know much about the brain at all, and thinking about it in terms of "computer" logic seems very wrong to me. Computers are our very crude and poor attempt at replicating brains/intelligence; by definition they're built on ultra-simplified principles that our own brains came up with. Using computer terms to define the brain/universe is closing a loop that doesn't exist.
It only sounds like a waste because you (we) are constrained by our own brains; the universe doesn't care about our ability to understand or make sense of it. Something might sound unoptimised to you (us) but might be the only way allowed by physics.
Naming and defining computer parts using human/brain-related terms was an error, imho. It confuses people and makes them think we are able to replicate (or even understand) things that we have no clue about; they're very poor analogies.
> That would be huge waste of resources and just damn stupid.
Isn't the entire universe a huge waste of resources and just damn stupid?
Yeah, I like to think this way sometimes, but as far as we know a "reference" doesn't make sense in the brain: it doesn't have access to external stuff, so in the brain's context it would just be neurons connecting to a single neuron. Aside from something like a URL, a computer can't reference anything outside of itself, and I don't think we have a brain equivalent.
Rather like some particular family of insects. Planes (depending on their engines) and helicopters both use aerodynamic propulsion, but their mechanisms differ from each other roughly as much as helicopters differ from birds.
> “printed” in the fabric of space. Then as apparently Space is not empty at all but filled with Planck scale micro wormholes entangling all things into this one universal neural network, then our brains should be very well capable of tracing back our unique trails through space and retrieve information through those micro wormholes.
That's fun enough if that's intended as some form of recreational speculative fiction/sci-fi. I'm all about that.
But you said "model", without qualification, so it's hard for me to tell from your comment to what extent you want to put this forward as something that stands a chance of actually being true.
In Buddhist philosophy the mind is a sixth sense organ, perceiving thoughts. The Lankavatara Sutra from Mahayana Buddhism goes into some detail about how karma is formed, past-life memories and the like, in the "storehouse consciousness", which is basically encoded into the world around us by our actions. I thought your post was very beautiful and similar in spirit to a lot of ancient philosophy on consciousness.
That's potentially a good starting point for a sci-fi setting.
That's unlikely to have any resemblance to how brains work though. For one, we would be lacking the mechanism to do such a 'retrieval'. We can also influence memory formation.
Say you binge drink. You are temporarily unable to form memories. We can pinpoint the exact area affected. Unless alcohol is somehow able to influence the micro-wormholes, it shouldn't affect memories at all.
> As developer I think it would make most sense to only save references rather than trying store all events inside everyones brains.
Keep in mind that this comment is in no way even a tiny bit compatible with our current understanding of how the brain works. Treat it as a good sci-fi plot.
Yeah well that's what I did, but by sci-fi plot standards it had a huge amount of merit and originality.
This isn't a peer-reviewed journal, nor the Bible. And your comment contributes too, by pointing out it isn't consistent with Western medicine. You're helping any reader who happened to buy into that idea: don't try combining it with mainstream science, don't talk about it at the water cooler, and don't bring it up when being psychologically evaluated. Even if the psych likes you, if you say things like that it's out of his hands and he has to flunk you. That's the mistake Feynman made. He talks about flunking a psych exam for the military, a full-on F, to the point where they used special words to describe how poor they judged his mind to be. He then replied to them accepting the F and asking to be flunked harder, on the basis that it was clearly crazy of him to write them that letter.
And he was constantly thinking about the fabric of the universe, entertained many wild theories to arrive at sound ones, I'm sure of that.
But yeah, thanks for saying that, in case readers didn't take that comment with a grain of salt. Yours was the grain of salt.
I don't think science, and the application of the scientific method, can be fairly categorised as "western medicine". We all have shower thoughts but I keep mine to myself, at least until I can actually demonstrate something with it.
This is a very interesting idea. My criticism would be that our memories are such imperfect recollections of the real thing that it's hard for me not to see them as a "copy" rather than a "pointer". Signal loss through the fabric of space?
Do you have any scientific references to back this up? I agree that there could be something to your theory.
References make sense to me as well, but I also agree with the conclusion of this article, that memories are distributed. If you put the two together, then I think it works well. Those could be distributed references, that when fired together, activate a specific memory somewhere else in space-time.
This reminds me of the Stephen Baxter book "The Light of Other Days". A wormhole device was used to capture video from any spacetime coordinates. It was a nice book.
I've never really loved contextual fear conditioning as the default for "memory". Another way of describing this result is that "even a mild trauma scars the entire brain."
In all seriousness, though, I'd say that there's a broad dispute in the field between those that believe that memory involves dynamic neural activity involving a multiplexed circuit of neurons that encode many different memories and those (currently led by the Tonegawa lab) that think that memories are associated with individual neurons.
I'll look into why a group believes memories are associated with individual neurons, as my personal intuition (edit: and this article as well) tells me the opposite.
For example, if I try to remember the name of an actor, I can have hints from my brain telling me "their name starts with an _m_" and "it's a man who played in a movie from the 90's", etc.
A memory feels like it cannot be isolated; it always feels like a composition of different elements that, when put together, describe one thing, or many. The more elements (in that case maybe single neurons, or very small groups of neurons) are activated, the clearer the memory. A Venn diagram of sorts, where the overlap gets smaller and smaller. This would explain the "this makes me think of…" process, since a certain number of the elements of one memory will overlap with the elements of another.
This is completely personal and completely unscientific. And maybe this is neuroscience 101… In that case sorry for stating the obvious.
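For what it's worth, that narrowing-overlap picture can be sketched as plain set intersection: each cue activates a set of candidate memories, and combining cues shrinks the candidate set. This is a toy illustration only, not a model of any real circuit; the cue names and "memories" are made up for the example.

```python
# Toy sketch of recall as intersecting cues. Each cue activates a set of
# candidate memories; their overlap narrows down to the one being recalled.
# All names below are invented for illustration.

cues = {
    "starts_with_m": {"Matt Damon", "Mel Gibson", "Meg Ryan"},
    "male":          {"Matt Damon", "Mel Gibson", "Brad Pitt"},
    "in_90s_movie":  {"Matt Damon", "Meg Ryan", "Brad Pitt"},
}

def recall(active_cues):
    """Intersect the candidate sets of all active cues."""
    candidates = None
    for cue in active_cues:
        s = cues[cue]
        candidates = s if candidates is None else candidates & s
    return candidates

print(recall(["starts_with_m"]))                          # still ambiguous
print(recall(["starts_with_m", "male", "in_90s_movie"]))  # narrowed to one
```

More active cues means a smaller overlap, matching the "clearer memory" intuition above.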
Do we actually know how memory is encoded, specifically? Is it a specific firing of a neuron (or a neuron ensemble)? Is it the specific answer you get from exciting the neuron(s)? Or some combination thereof?
IANA neuroscientist, so please correct me if I'm wrong or oversimplifying:
IIRC, one of the main cellular mechanisms thought to underlie memory is LTP (long-term potentiation) in glutamatergic neurons. There are different kinds of glutamate receptors, but we're interested in the two subtypes of ionotropic (ion-channel-based) glutamate receptors: AMPA and NMDA.
AMPA is sort of your main receptor for propagating signals: it's activated first.
NMDA is much more complicated in that it requires binding both glutamate and a co-agonist, glycine, for the ion channel to open. The channel can also be blocked by Mg²⁺ ions, a block that is relieved when the neuron depolarizes (the depolarization repels the positively charged Mg²⁺ out of the pore). Once NMDA is open, it has the downstream effect of upregulating the AMPA receptor, making it more sensitive to future transmission, hence serving as a kind of "memory" of previous signals. I think the open question is more about understanding how memory as we know it emerges out of networks of these neurons, and less about the basic cellular mechanisms. And this is probably only one mechanism of LTP; then you have its opposite, long-term depression (LTD), which is also involved.
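The coincidence-detection logic described above can be caricatured in a few lines of code: NMDA conducts only when presynaptic glutamate release coincides with postsynaptic depolarization, and each such coincidence potentiates the AMPA "weight". This is a purely illustrative sketch; the 1.1 potentiation factor and the event encoding are invented numbers, not physiology.

```python
# Toy sketch of NMDA-receptor coincidence detection driving LTP.
# All parameters are illustrative, not physiological.

def nmda_open(glutamate_bound, glycine_bound, depolarized):
    """NMDA conducts only if both ligands are bound AND the Mg2+ block
    is relieved by postsynaptic depolarization."""
    return glutamate_bound and glycine_bound and depolarized

def simulate(events):
    """events: list of (pre_fires, post_depolarized) pairs.
    Returns the AMPA 'weight' after Hebbian-style potentiation."""
    ampa_weight = 1.0
    for pre_fires, post_depolarized in events:
        # Glutamate is released when the presynaptic neuron fires;
        # assume glycine is always available at the synapse.
        if nmda_open(pre_fires, True, post_depolarized):
            ampa_weight *= 1.1  # LTP: upregulate AMPA sensitivity
    return ampa_weight

# Pre and post coincide three times, so the synapse strengthens;
# pre-only or post-only events leave the weight untouched.
print(simulate([(True, True), (True, False), (False, True),
                (True, True), (True, True)]))
```

The point of the sketch is just that the synapse strengthens only on pre/post coincidences, which is the "memory of previous signals" property described above.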
Of course, in science the answer is always more complicated than what can be gleaned from the hand-wavey explanations of some programmer on HN :)
This is all true (as far as we know), but it's tricky to suss how exactly how this works in a living brain.
It's relatively easy to induce spike-timing-dependent plasticity in vitro, where the background activity is low and the experimenter has almost total control over the pre- and postsynaptic neurons' activity. However, in vivo, neurons are often bombarded with input from thousands of synaptic partners, breaking the clear correspondence that underlies a lot of LTP/LTD rules. People have gotten them to work, after a fashion, in vivo: Yang Dan's group shifted the orientation tuning of V1 neurons, and Dan Shultz's group has some cool backwards-conditioning stuff in rat barrel cortex. However, the effects are small and often require a heroic, unphysiological amount of effort, so... there must be more to it than that.
Man, I wish I'd become a neuroscientist or something. It seems like there's so much work to do in that field if we ever want to understand the human brain, about which I have my doubts. I'd probably do it if it didn't involve basically taking a vow of poverty.
My opinion is that it's the evolutionary process of chemical organisation in an environment. We see gene differences, but we also see many parallels in other species. Unlock every effect a chemical has and you can unlock consciousness.
Their paper assumes its conclusion in its premise.
Everything depends on the accuracy of their fluorescent genetic memory correlate. The fact that this genetic marker is necessary for memory encoding doesn't mean it is parsimonious and affects only memory-encoding regions.
An in-depth presentation of the properties of the gene would solidify or weaken their findings; for now, since I'm lazy, I will suspend my judgment.
Their study looks not just at a storage site, but at that together with the sites that process a new memory for storage and the processes that recall a memory from storage.
When you see something, that visual input is processed in a certain area that, IIRC, is the same area that is fired when recalling a visual memory.
When a memory is recalled, it is processed by the same regions that interpreted it initially, upon first experiencing the stimulus. Compare that to a computer pulling a PNG from storage, loading it into memory, calling the necessary drivers to present it onscreen, etc.
I remember telling some people that this is how it works back in 2003 or so.
I read a book that said the architecture of the brain is like a polyhedron whose vertices represent different processing modes (visual, auditory, linguistic, emotional, ...), with bundles of fibers going down into the white matter to connect processing areas in the grey matter of the cortex.
If you think about a "dog", those connecting fibers activate images of the dog, the sounds the dog makes, the motor program to pet the dog, the feeling of the fur, etc...
Agree. Our experience of the world isn't a screen painted with what the world looks like. But instead is a collection of affordances and expectations.
Have you ever picked up an empty milk jug and yanked it super fast? Did you choose to yank it? Or did the program called "pick up heavy milk jug" run instead of "pick up empty milk jug"? Feeling how light the jug was causes the program "confusion" to run, to help you look for an explanation for why this object is so different from what you expected.
The reasoning program creates the feeling of resolution once you arrive at, "I believed it was full but it wasn't", but prior to that resolution, if you really pay attention, you are just standing there for a moment puzzled about why your arm is moving so fast.
> cells activated by naturally recalling the unpleasant memory
> The maps highlighted many regions expected to participate in memory, but also many that were not
This feels like a study from decades ago, as if it were done in complete ignorance of the fact brains store event descriptors and entity descriptors separately. "Program data" and "character data" to use an analogy. Of course that sort of unselective "memory make marker go brr" analysis will show up all over the place because the brain is touching a dozen or more different types of data at once. If you want usefully specific results you have to use usefully specific methods.
Disclaimer: I know brain != deep learning neural nets. We do have a lot of evidence that the brain is _some_ type of network with analogue qualities.
Does it even make sense to say that a memory is stored somewhere in a specific region, if the brain is an analogue network? A property of analogue networks is that all nodes make a contribution, even if many of the contributions are infinitesimally small. The equivalent for deep learning is that information is stored in the weights and any given output is a function of all the weights. Some weights are more important than others in producing the output, but the point still stands.
If I take a pretrained imagenet model and just make it wider with nodes with random weights and biases, I can still feed the network an image and get a reasonably correct label as an output. In this example I could obviously point at the nodes from the original imagenet model and say that those do the image recognition; we know the rest of the network contributes nothing but noise.
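A toy numpy version of that widening argument, far smaller than imagenet. To make the output provably unchanged, this sketch gives the new hidden units zero outgoing weights; with small random outgoing weights the output would only be approximately preserved, which is the noise the comment describes. The layer sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny stand-in for a "pretrained" network: 4 -> 8 -> 3.
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 3))

def forward(x, w1, w2):
    h = np.maximum(0, x @ w1)  # ReLU hidden layer
    return h @ w2

x = rng.normal(size=4)
original = forward(x, W1, W2)

# Widen the hidden layer with 8 extra units. Their incoming weights are
# random (they do fire), but their *outgoing* weights are zero, so they
# contribute nothing to the output.
W1_wide = np.concatenate([W1, rng.normal(size=(4, 8))], axis=1)
W2_wide = np.concatenate([W2, np.zeros((8, 3))], axis=0)

widened = forward(x, W1_wide, W2_wide)
print("output preserved after widening:", np.allclose(original, widened))
```

This makes the "which nodes do the work" point concrete: the original sub-network fully determines the output, and the extra units are inert by construction.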
On a technical level of course the whole network contributed to the output. In everyday reasoning and language however we usually focus on the parts that matter to a reasonable degree and ignore the rest. A sack of rice falling over in 2005 might have contributed to the 2008 financial crisis. With the world being an analog network of particles it even seems obvious that that sack of rice must have had some infinitesimal influence one way or the other, it's just more practical to ignore it.
I don't have any clear answers for you, but one interesting note here is that a recent paper showed that it took a deep neural net to be able to simulate the "IO" of a single cortical neuron. So that should give you some idea of the complexity involved compared to artificial neural nets, Re: your disclaimer.
> A temporally convolutional DNN with five to eight layers was required to capture the I/O mapping of a realistic model of a layer 5 cortical pyramidal cell (L5PC).
> When NMDA receptors were removed, a much simpler network (fully connected neural network with one hidden layer) was sufficient to fit the model.
Some types of artificial neural networks are biologically plausible; the layered structure of the cerebral cortex is echoed in the layers of an artificial network.
The difference being, of course, that as opposed to a mere 16-bit parameter, a cortical neuron is a complex "nanomechanical" machine capable of significant computation in its own right.
Same here. But I wouldn't use the word 'holographic', since it's now too associated with junk science and New Age crap. Even if it does happen to nicely capture the meaning.
Put simply: if you have two related concepts, say "car" and "truck", then it makes a lot more sense to me that they'd be represented by similar weights among a collection of neurons than if each concept got its own single neuron (or handful of distinct neurons scattered throughout various functional parts of the brain). If you did that, you'd need to explicitly encode the various similarities and differences. It would take time for priming to travel from one thing to all the related things. There would be no or limited redundancy. Forking off a concept (e.g. with language, learning a synonym) would have to be some explicit process.
With an aggregate representation, you get all of that automatically. When you're thinking "car", you're 70% also thinking "truck". You can evolve your understanding freely without breaking anything, you can split and merge representations, etc.
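A quick sketch of why similarity comes for free with distributed codes: represent each concept as a pattern of activity over the same shared units, and related concepts end up with correlated patterns, so no explicit "car is like truck" link needs to be stored. The vectors here are synthetic stand-ins, not learned representations.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical distributed representations: each concept is a pattern
# of activity over the same 50 "neurons", not a single dedicated cell.
base = rng.normal(size=50)
car    = base + 0.3 * rng.normal(size=50)  # car and truck share most
truck  = base + 0.3 * rng.normal(size=50)  # of their underlying pattern
banana = rng.normal(size=50)               # unrelated concept

def cosine(a, b):
    """Cosine similarity between two activity patterns."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Similarity falls out of the representation itself.
print(cosine(car, truck))   # high: overlapping patterns
print(cosine(car, banana))  # roughly zero: unrelated patterns
```

In this picture, "thinking 70% of truck while thinking car" is just the overlap between the two activity patterns, and forking a concept is copying a pattern and letting it drift.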
It kind of seems obvious to me, which is no proof that it's correct. I'm reading Kahneman's "Thinking, Fast and Slow" right now, and it's a brilliant book yet I feel like this representation would convert many of his puzzled observations into unavoidable consequences.
Separately, I think there's less of a distinction between memory, recall, and actual experience than this paper makes out. They feel the same because in the brain, they are (mostly) the same. And there's more of a distinction between memories of different things. I expect to find very different brain regions involved in memories of pain vs movement vs vision vs procedural knowledge etc., because much of the memory will be in the neurons directly linked to the relevant sensors or actuators. (As above, the memory is the experience, or at least it overlaps substantially.) Sure, there will be overlap across all those, but that's not because it's the One True Memory Region, it's just that there's a generic component to any memory (or rather, anything that we would refer to as "a memory".)