What the brain's wiring looks like (bbc.com)
214 points by happy-go-lucky on July 5, 2017 | hide | past | favorite | 114 comments


Just to clarify, these MRI images look at white matter, which is mostly bundles of long-range myelinated axons. This is more likely a map of major highways in the brain (or really a map of bundles of major highways in the brain, as MRI only has millimeter resolution).

The connectome at the cellular level is massive. I think part of the mouse visual cortical connectome has been mapped out, and the data is on the order of tens of terabytes.


The article indirectly says it's using special MRI machines with much better than millimeter resolution. So I think your first analogy is likely better than your second.


Retina has been mapped, visual cortex not likely


This might be naive, but I wonder if the essential part of mouse data could be compressed if only we understood it better.


The mouse genome is only 160 megabytes, and contains the instructions for building the brain as well as building everything else, so the "secret sauce" of how to make an intelligent brain should not be extremely large, once you figure out how to do it. :) A lot of the actual connections must be either random, or encoding things the mouse learnt while growing up.


Absolutely dead wrong. It's 2.5Gb, where Gb = Gigabases. Learn you a genomics, son.

https://www.nature.com/nature/journal/v420/n6915/full/nature...


There are four bases, so one base encodes two bits of information. Eight bits are one byte, so four bases are one byte. 2500 megabases = 625 megabytes. So yeah, parent was off by a factor of about 4 :) . But still, that fits on one CD.
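The conversion is easy to sanity-check in a few lines (a quick back-of-envelope sketch; the 2.5 gigabase figure is the one from the Nature paper linked above):

```python
# Each base is one of A, C, G, T, so it carries 2 bits of information;
# at 8 bits per byte, 4 bases pack into one byte.
def bases_to_bytes(n_bases):
    """Minimum bytes needed to store n_bases at 2 bits per base."""
    return n_bases * 2 // 8

mouse_genome_bases = 2_500_000_000  # ~2.5 gigabases
print(bases_to_bytes(mouse_genome_bases))  # 625000000 bytes = 625 MB
```

(Real genome files are larger because formats like FASTQ also carry quality scores and metadata, as a sibling comment points out.)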


Except that currently genomics requires even more information to be encoded - such as quality scores, allele frequencies, phase information, ... - so, depending on the format, this estimate is off by either one or two orders of magnitude still.


No need for being rude.


I read that last line as funny / tongue-in-cheek, as if every normal person learns their ATGC's as easily as their ABC's :-D


only 160 megabytes

Take a bunch of source code. Compile it, obfuscate it, compress it, and encrypt it with AES such that the result is a 160MB blob. Now see how long it takes to figure out what it does, given a computer that costs a lot of money just to load your program and a long time to give you a result. The upper bound on the complexity of DNA as it relates to the complete expression of an organism phenotype is insanely high.


Most developers think of the genome like a big load of source code: if only we could work out where the if and for statements were, we could read it. This is an extremely naive and overconfident point of view; the analogy between source code and genomes is very poor. The genome codes for proteins (by way of RNA). Those proteins are subject to all of physics (think: electrostatics, hydrophobics, ...), whereas your code is an abstract entity designed to run on a rather simple analogue of a Turing machine. Life is much more complex, I'm afraid. Though that never seems to stop developers assuming that they can create a crude analogy which explains it. Also, the size is totally wrong; see the previous comment.


It's true that genes and proteins are nothing like code, but in the context of understanding the brain, I think that should be cause for optimism, because it means that nature has its hands tied behind its back. The genes can't just contain a description of how the brain should be wired together, because the description also has to be "self-executing"; the entire object must robustly self-assemble just from proteins physically interacting. So although 700 megabytes of mouse genes could potentially contain a lot of stuff, it might be possible to do the same thing much more simply if we can program a digital computer instead.

Like, the connectome for C. elegans has been mapped out; it can be written down as a 2 megabyte ASCII text file. Just the connectivity is not enough to actually reproduce the behavior of the worm; you would also need data about the weight of each connection. But it's still a lot less data than the worm genome (about 25 megabytes---I hope I got the number right this time!). The worm genes also need to contain a lot of additional stuff to build functioning cell internals, etc., stuff which hopefully is irrelevant to the actual cognition.
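To make the storage point concrete, here's a toy sketch (the neuron names and synapse counts below are invented for illustration, not the real C. elegans data): a weighted connectome is just an edge list, so even with a weight per connection it stays tiny.

```python
import json

# Toy weighted connectome: neuron -> {target neuron: synapse count}.
# Names and counts are made up for illustration.
connectome = {
    "AVAL": {"AVAR": 7, "AVBL": 3},
    "AVAR": {"AVAL": 5},
    "AVBL": {"AVAL": 2},
}

n_edges = sum(len(targets) for targets in connectome.values())
encoded = json.dumps(connectome)

# C. elegans has ~300 neurons and a few thousand connections; at a few
# dozen bytes per edge, even a verbose JSON encoding stays well under 1 MB.
print(n_edges, "edges,", len(encoded), "bytes")
```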


> whereas your code is an abstract entity designed to run on a rather simple analogue of a Turing machine.

I cannot adequately put the insane laugh required as response to that into text form. So I will only write this and be just as right: going by physics the brain of a mouse can be adequately approximated by a perfect sphere.


I guess we'll all have to just imagine you're right, then. Mwahahaha!


The definition of a Turing machine is mathematically perfect. No threading, no IO, no error correction, no errors, no asynchronous events, no processes fighting over shared resources, no resources that might or might not disappear at the blink of an eye, in short no nothing. In that it is equivalent to a spherical brain, any complexity relevant to the problem at hand removed.


You're making things far too complex, and confusing the issue, and yourself, as a result. Let's turn to the first sentence from Wikipedia:

"A Turing machine ... manipulates symbols on a strip of tape according to a table of rules"

Whichever programming language you are fond of ultimately reduces to this mode of computation. However, with DNA, RNA, and Proteins, that is not the case. The way that we compute is simplistic compared with the way that biology computes. Thus: the crude analogy in fact hinders understanding, and should be discarded.


the "learning" can come from things like the compounds in various food and so on! "learning" is, in a generalized sense, any non-genetically-bootstrapped environment->body information transference...


The busy beaver of 1.6x10^8 can produce an awful lot of stuff.


TB of raw data but could it be structured and "compressed" logically ?


I think I remember reading that mapping an entire human connectome would require all the world's current storage capacity?


Just wondering...

How complicated does a brain have to become in order to understand itself?

Could a brain exist that is so complicated that it could not understand itself, or does a brain's understanding always scale up with its complexity?


You should first define what "understanding the brain" actually means before you try to answer this question.

Does "understanding the brain" mean that we can construct or simulate a brain? An artificial brain which reproduces our intelligence and consciousness?

There is no reason to believe that we can't do this. A brain is part of our world, governed by physics.

However, we will never be able to understand the thought process down to each individual activation of a neuron.

That's like trying to "understand" and follow the movement of each atom in the coffee next to me.


>There is no reason to believe, that we can't do this. A brain is part of our world, governed by physics.

How do you know consciousness is "governed" by our contemporary understanding of physics?


It's not a question of our contemporary understanding being sufficient to model a human brain. Rather it's that there is little or no reason to believe the brain and consciousness are pure unassailable magic. So it should be within the scope of future scientific understanding.


The brain and consciousness need not be magic for the philosophical issue to arise. The fundamental question is objectivity vs subjectivity. Consciousness is subjective, yet science is an objective affair. Can the objective be used to explain the subjective, or is the objective an abstraction from intersubjective experiences?

If it's the latter, then the project to explain everything in scientific terms is doomed, because it will always leave something out. If it's the former, then subjectivity is not fundamental, but arises from the objective world. How this is so is a difficult matter (no pun intended).

Either way, there is no magic involved, just a question of where philosophical assumptions begin. You can start with matter or the mind and see where it takes you.


Either consciousness lives in the physical constructs of the human brain or it is supernatural / "magic". If the first case is true, then there is every reason to believe it is open to the scrutiny of the scientific process. What we know for sure is that there are very many scientific questions about the brain that still remain to be answered before we have to throw up our hands and admit that it is all ineffable.

IMO science is doomed to always leave something out, as you say... it will reach a fundamental building block that simply exists, cannot be expressed in terms of something else, and might as well be called God. But I just haven't heard any convincing arguments that consciousness can't be understood before we reach that point.


I don't think it's just your opinion. :) I think Gödel and many other logicians/mathematicians/philosophers would agree we are likely doomed to "always leave something out", particularly in the sense that we will fail to create "the perfect logic" that can explain everything. Oh the joys of living "in the system".


>The brain and consciousness need not be magic for the philosophical issue to arise. The fundamental question is objectivity vs subjectivity. Consciousness is subjective, yet science is an objective affair.

Pain is also "subjective", but we understand pain, how it develops and moves through the body, etc.

Plus, that consciousness is subjective is another way of saying we haven't mapped it specifically, so this is a kind of circular logic.

The "subjective" part is just a set of states of an objective construct (the brain, which is a physical object).


We can understand everything we'd like about how brain processes pain, but it's questionable whether we can know how that processing then becomes the subjective experience of pain.


I don't understand why this notion of subjective experience keeps being raised. Why is it important?

Once we know enough to create a synthetic brain, it really doesn't matter what its subjective experience is, we will judge it the same way we judge our fellow humans.

And this is fine, since we have no way to truly know the subjective experience of, say, color for any other human either. Yet we don't question each other's claims of consciousness.

It's enough that we can both say "blue" to name the ~450–490nm wavelength of light regardless of our personal subjective experience of it.


Because subjective experience is not a one way affair. It clearly plays at least some functional role. The proof of that is that we're having a conversation about it.


But the same conversation could be had with a synthetic brain. It's no more important than any other feature of brain functionality. What I don't understand is how this concern about subjectivity limits us in understanding consciousness and brain functionality.

It doesn't mean we can experience the subjective states of a synthetic brain or another human, but we can definitely replicate the physical aspects of the brain that produce them.


I think it's likely that the brain violates the Church-Turing thesis. It doesn't seem possible that a Turing machine can produce subjective/conscious experience. This strongly limits our ability to make a synthetic brain via solely computation.

Update:

I'm not sure I'd say supernatural. I think consciousness is different in kind from the rest of how we understand the universe to work. It seems likely that it's a fundamental property of the universe.

I see this all pretty much the same way that the philosopher David Chalmers does. He probably does a better job of explaining it than me. It's worth looking at his thinking on the matter.


So you believe that consciousness is supernatural and not the product of a physical brain. I don't see any reason to believe that, but it definitely makes it clear why we hold differing opinions.

Edit:

I think all that philosophy is going to have to be updated when we eventually start having conversations with synthetic brains that can tell us about their sense of consciousness ;-)

Edit 2:

If we had a device that could instantly make a carbon-copy (unintended pun) of a human (let's say me), identical down to the last atom, what you're saying is that copy would NOT be conscious. That it is not enough to capture the physical nature of someone to capture the state of consciousness.. that it is beyond physical. I just don't know how to come to terms with that idea. To me consciousness simply must be a product of our physical bodies and thus within the purview of rational study and comprehension.


That's not what I mean, actually. If we could make a carbon copy of a person, it would presumably be conscious too. But I do think that an attempt to model the brain mathematically and run a computer simulation of it would fail to create consciousness.


Ah, then we're much closer together than I thought.

However, there is no reason future computers will be limited to current (Turing) technology instead of being a collection of many co-processors with differing capabilities.

The main point though, is that since consciousness is housed in the physical brain, we can dissect, understand, and recreate fully-functional synthetic versions. While each person is in their own subjective world, the brain which is responsible for it is an objective object that is open to study.


>However, there is no reason future computers will be limited to current (Turing) technology instead of being a collection of many co-processors with differing capabilities.

That would violate the Church-Turing thesis, which is pretty widely accepted, unless the universe itself is a hypercomputer that we can leverage to perform computation.


You're essentially saying that our brains can't exist.

But they do, and we can copy them, what is the problem with making a synthetic brain? Why is the natural brain an allowed exception, but the one we make is not allowed?

Edit:

On the one hand you want to use Church-Turing to invalidate the possibility of us creating non-Turing based technology, and yet in a previous post you claimed the human brain likely violates Church-Turing. So which is it? Is reality allowed to violate Church-Turing or is it not?

If you honestly believe that the human brain violates Church-Turing, then you have to explain why you also believe that we can not copy the brain and achieve the same result ourselves.


If the universe is not computable, and if the non-computable aspects are relevant to the functioning of the human brain, then our ability to build a brain simulation is strongly limited. A functioning synthetic brain would have to leverage the same relevant natural phenomena as a real brain; a mathematical simulation running on a standard computer would not suffice.

I do think that I actually agree with your earlier point that once we have a functioning synthetic brain, we can ignore subjective experience itself and just focus on its functional implications for the entire system.


> It doesn't seem possible that a Turing machine can produce subjective/conscious experience.

Why?


I think it's easier to see why if we use a non-conventional Turing Machine, like thousands of monks using abacuses and integrating their results by spoken word. Do remember that all Turing machines are equivalent. How in the world would the act of moving beads around create consciousness?


Define "magic". If we cannot understand something, does that make it magic?

Any sufficiently advanced technology is indistinguishable from magic.


Because the brain is composed of physical matter which obeys the laws of physics.


> Because the brain is composed of physical matter

Is consciousness?


Yes, the process that makes consciousness is also governed by physics.


Hmm. It may not be that clear cut. Many modern philosophers (and some physicists!) are playing with the idea that consciousness is primary [1]. This would mean we have everything inside out, or upside down perhaps. But the matter is way, way more interesting and complicated that your reduction would suggest :)

[1] https://en.wikipedia.org/wiki/Panpsychism#Contemporary


> the process that makes consciousness is also governed by physics.

How do you know that this is true?


Because, as far as we can tell, everything that is part of our universe is governed by physics.

Our brains are part of this universe, and consciousness is dependent on our brains.


> Because, as far as we can tell [...] consciousness is dependent on our brains.

This is neither a logical/mathematical proof, nor an empirically established fact. So how do you know? Why are you making statements of faith in what seems to be the scientific worldview, when the actual scientific worldview is that we should never make statements of faith?


It's as established empirically as we can observe. Shut off oxygen to your brain for ten minutes or put a bullet through it and determine its responsiveness thereafter.


>everything that is part of our universe is governed by physics.

Roughly 95% of the observable universe cannot be explained by known physics ("dark matter" and "dark energy").


What's the alternative?

Let's assume there is some sort of physical process involved that we don't understand or even know about (unlikely); it would still be part of the physics that govern our universe.


The alternative, according to the scientific method is: "We don't know"


> Let's assume there is some sort of physical process involved

You're begging the question. Why is "some sort of physical process" the only hypothesis space you're considering?


Whatever it is, it follows a deterministic ruleset, else there be madness. If it follows a deterministic ruleset, and we can observe it directly or indirectly, and write those rules down - theorize about those rules - we can almost call it physics.

If the feeling of being a clock winds up a spring in the clock that works a mechanism that feels like fear of being predictable (aka exploitable) - that does not make the clock less observable.


At the very bottom of physics, if we ever reach it with our observations and theories, there will be something that simply is, and is not explained by the interactions of some sub-system. The foundational elements of physics are probably non-conscious (assuming the currently known particles and fields are indeed non-conscious).

What grounds do we have for asserting as certain that consciousness is not a foundational thing-that-simply-is, but must definitely be composed from sub-systems that are themselves non-conscious?


Quantum physics has thrown a wrench into the 17th century Newtonian theory of the universe being totally mechanical, like a game of billiards.


Is software that runs on a cpu?


It's unfortunate that you're being downvoted. This is a classic and unsolved problem in the philosophy of mind and the people stating confidently that of course the mind is a purely physical system are ignoring many problems with that hypothesis, not least the lack of any conclusive evidence for it.


Can you really have conclusive evidence for "there is no magic"?


You can. We have a complete reductionist account for how computers work. We know there's no magic. We don't have such an account for the human brain, and it's hard to see how any explanation of brain function could account for consciousness (i.e. subjective experience). The most likely explanation in my opinion is that consciousness is a fundamental property of the universe that our minds have evolved to tap into.


Don't ask people to prove a negative.


That's rather the point.


Oh, yes, the lack of fine-grained knowledge/easily understandable particle physics allows everyone to postulate the most fantastic theories into the perceived gaps.

I for one believe that at the neuronal level, the sugar is spun into little pixies that fly around in the brain transferring information.

Dementia occurs when somebody no longer believes in fairies - and the little sugar elves die. So it's always good to start any conversation on this with "I-do-believe-in-fairies" to keep the memory fresh.


Is admitting that we do not know so uncomfortable that you must invent fairy tale strawman arguments to cover up our ignorance? Hubris is the antithesis of science.


> A brain is part of our world, governed by physics...

Sure. But not necessarily governed by current physics...


Is there any reason to doubt it?


Imagine the whole domain of physics to be P and our current knowledge to be a subset of it, P'.

Is there any reason for the operations of brain to be limited to those involving P' only?


There is. It's called quantum field theory. QFT has the peculiar property that the interactions between elementary particles in it are symmetric in space and time. What that means, is that if an exotic particle is capable of interacting with some of the commonly known everyday particles, which we know the brain is made of, we are also capable of producing that exotic particle using our common, everyday particles. That means that if exotic particles exist, they interact with the everyday particles very weakly - otherwise we would've already produced them in the LHC.

That means that even if exotic particles existed, they wouldn't have a practical effect on everyday matter, and thus, to our brain.

This all, of course, rests on the assumption that the symmetry property of QFT is correct. It's very likely to be (at least in everyday conditions, such as in a slab of fat in a 37 C heat bath) – after all, QFT is the most experimentally precise scientific theory ever discovered in the history of science.


Sorry. Couldn't make sense of most of your comment. Can you explain how these exotic particles are relevant at all?


They are an example of P that doesn't belong to P', using your terms. Check out this talk by the cosmologist Sean Carroll; he addresses this problem and claims that the underlying physics of our everyday reality is completely known: https://youtu.be/xv0mKsO2goA


> However, we will never be able to understand the thought process down to each individual activation of the neuron.

Is that true in general, or are you specifically talking about human brains?


I mean a complex thought process, like how we get from an activated retinal cell to the movement of a part of the body.

At time index 586578382342us cell no 15547349 got activated, which led to a slightly increased electric charge around the axon of neuron no 68342. The connection strength, as you remember from time index 3870127106, is now 1% higher ..... one billion pages later ..... but the atom here is slightly misaligned with its counterpart, ...... one billion pages later .... noise coming from dendrite no 496839 .... one billion pages later .... led to a decreased potential at muscle cell no 9678402 and hence a millimeter shift of finger tip no 3. ......

Do you want to hear the exciting story about my semimembranosus muscle at time index 4386702121us :)

Do we understand the brain when we have this information? What I want to say is that our brain cannot store and handle so much information. Hence, we cannot "understand" the brain at this level. ("The person did this, because ....")


Exactly - consciousness is clearly an emergent phenomenon of these kinds (and possibly other kinds) of interactions in your brain, due to the large scale.

In order to reason about the emergent phenomena, you have to "zoom out" - look at bigger pieces, not the individual atoms or neurons.

The tricky part here is that there seems to be no particular reason for the bigger pieces to be separated into easy-to-reason-about "modules". It could of course be the case, but I don't really see the evolutionary advantage of a modularised system over a "mess of wires" that works well enough.

So, if the brain is a modularised system, then there is hope of understanding it, as you can understand a computer that another commenter mentioned as well.

But, if not, then it might be the case that indeed a human brain will not be able to understand it by itself. If there is an underlying structure that's just intermixed with the other parts, then perhaps with the help of computers we can separate them logically.

There is of course the possibility that modularisation is simply not possible, because of (almost) full connectedness and interactions. Penrose, for example, suggests that quantum entanglement leads to consciousness, and if that really is the case, then there is essentially no way to "understand" why a certain decision was reached by the brain.


But the computer you're using is already a few billion transistors running at a few billion ticks per second. Just because the behavior is super complex doesn't mean we can't understand what gives rise to the complexity at some useful level of abstraction.


If you've spent enough time at the lowest level of gigahertz clocked hardware you know that you'll never be able to even map a single second of a modern computers execution in a lifetime. You will always have to deal with simplification and approximation, which means you might miss a crucial detail.

Finding a bug is usually starting from a very high degree of knowledge about the designed function of the system and verifying at ever higher resolution that it actually performs according to those designed parameters. That's different from trying to understand what a computer does starting from an electron micrograph of the chips and not having access to the software or the wiring diagram. That might lead to useful insights but it wouldn't be understanding.


Well, we could in principle build a computer capable of this level of understanding, and then use a brain-computer interface to augment our own understanding with the computer's...


Unless you can build a time machine. Sci-fi is foiled again!


I remember hearing someone answer "if the brain were simple enough for us to understand it, we would be too simple to understand it" to that question.

Can a computer compute itself?


IC design is largely a process of simulating and specifying the next generation of integrated circuits.

So in the strict sense, not only can a computer compute itself, but it can compute its (more capable) successor generation.

Practically, this now occurs in large server farms, so it takes a large number of computers to compute themselves and/or progeny. But this is very much what happens.


So we can only understand the brain because it is complex.

The brain may be the only object with this property.


Technically, considering a long enough timescale, a brain is the Universe trying to understand itself.


A quote I remember: "Given enough time, Hydrogen will begin to understand itself".


Doesn't "try" imply intending? But the universe isn't a someone who can intend to do something.

Your statement sounds like mysticism, trying to soften the emotional coldness of the materialist view of the universe.


The space dust that coalesced into our solar system and planet eventually (picture it in fast-forward) became us and we "try" to understand stuff, no?


Yes. But the space dust didn't intend to become us. It's something, not someone.


Are the neurons and chemicals and electrical impulses in our brains someone?


collectively, yes.

so the question is, where does the line between animate and inanimate lie?


What about the brains of other animals or the nervous systems of insects, or the neurons in an octopus tentacle? Are they someone?

It’s an interesting line of questioning. What makes “someone?”

Is a living but braindead/comatose human body someone?

Many may be inclined to say yes, yet a statue or other inanimate likeness of a human wouldn’t be considered someone..

How about a thing that doesn’t resemble a human at all but gives the exact same responses to inputs that a human brain would? Say, on a computer screen?

If you were told that a real person is sitting somewhere and sending these responses, you’d say it was someone, but probably not if you were told that it’s an AI...

It’s mostly all subjective and we just pick the definition that’s convenient for us to work with and comprehend.


Questions such as these are tackled in the excellent book Infinity and the Mind [0].

[0] https://www.amazon.com/Infinity-Mind-Philosophy-Infinite-Pri...


I love these types of books. I'm about to buy this.

Is there any reason to get the most recent version (says "with a new preface by the author") over the $2 paperback?


Yes. If the author is alive and you wish to reward their huge efforts and encourage them to write more of such books, buy them new.


Put it this way. Our brains can't even answer that question.


That very much depends on whether the complexity increases intelligence/comprehension or whether it is merely increased complexity.


Right.. There are brains larger than ours with less intelligence.


Such as? We humans do colossally stupid things on a daily basis with our so-called intelligence.


Just in terms of size, elephants have larger brains than humans, and we are arguably more intelligent.


It's fun to think that this is a sufficient definition of consciousness.


Looks like any MRI. I did this from scans of my brain and then tried to follow the visual cortex to my eyes. I couldn't, because the scans only show the general direction water diffuses in, and the nerves behind the eye cross; at that point you can't tell what is just noise and what are interlaced nerves.

Would be interesting to know how much better this model is.


Just to clarify, the MRI scanner at Cardiff is a Siemens Connectom A, a specialized research 3T MRI scanner whose main design came out of the Human Connectome Project.

It is not exactly "hot news", as it has already been in research use for several years (in the US, at MGH). It is still pretty unique, however, in that it was optimized for human in-vivo diffusion MRI (tracing of white matter pathways in living human subjects :-).

Diffusion MRI scans which can be obtained by this scanner are of exceptionally high quality and resolution (spatial and angular), and contain lots of data that can be fed to mathematical models to infer fiber crossings, etc.
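For flavor, here is a heavily simplified sketch of the deterministic-tractography version of "feeding the data to mathematical models": at each point, step along the principal eigenvector of the local diffusion tensor. The tensor field below is a made-up toy (uniform, aligned with the x-axis); real pipelines fit tensors or fiber-orientation distributions (to handle crossings) to the actual scan data.

```python
import numpy as np

def principal_direction(tensor):
    """Unit eigenvector for the largest eigenvalue of a 3x3 diffusion tensor."""
    vals, vecs = np.linalg.eigh(tensor)   # eigenvalues in ascending order
    return vecs[:, np.argmax(vals)]

def track(seed, tensor_at, n_steps=50, step=0.5):
    """Follow the principal diffusion direction from a seed point."""
    points = [np.asarray(seed, dtype=float)]
    direction = principal_direction(tensor_at(points[-1]))
    for _ in range(n_steps):
        d = principal_direction(tensor_at(points[-1]))
        if np.dot(d, direction) < 0:      # keep a consistent orientation
            d = -d
        direction = d
        points.append(points[-1] + step * d)
    return np.array(points)

# Toy field: diffusion strongly aligned with the x-axis everywhere.
def tensor_at(point):
    return np.diag([1.0, 0.1, 0.1])

streamline = track([0.0, 0.0, 0.0], tensor_at)
print(streamline[-1])  # 50 steps of 0.5 along the x-axis: |x| = 25
```

Real tools add stopping criteria (anisotropy thresholds, curvature limits) and, crucially, models that can represent multiple fiber directions per voxel, which is where the high angular resolution of this scanner pays off.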

However, despite the high spatial and angular resolution it is still at least an order of magnitude short of what is necessary to capture local axonal connections and this type of imaging certainly does not tell you the direction the signal is flowing.

Still important progress, IMO.


It says it's 7T in the article?


7T is the Nottingham University scanner. Cardiff is 3T Connectom A. These are different sites / scans in the article.

In fact, 3T Connectom A is actually more optimized for Diffusion MRI / white matter tract tracing compared to the available 7T scanners due to its enormous 300 mT/m gradient.

In theory, the best of both worlds would be a 7T+ scanner with an ultra-strong gradient system such as the one developed by the HCP team, but such a device does not exist yet AFAIK.


The sheer scale of connectivity of the human brain boggles. An interesting read on neuromorphic computing's attempt to build systems informed by the biological brain https://semiengineering.com/neuromorphic-computing-modeling-...


Where is the neuron cell body? Would they be at the outside where the fibers (axons?) seem to terminate?

I wouldn't have guessed that it would look so linear. I was expecting a much more tangled appearance.


The tangled part is actually mostly in the ~2mm-thick layer at the surface[0]. That's why the folds matter so much:

[0] https://28oa9i1t08037ue3m1l0i861-wpengine.netdna-ssl.com/wp-...

Check out this article for a pretty cool overview of the brain: https://waitbutwhy.com/2017/04/neuralink.html


Thank you! Great links.


It's a tree ;-)


Trees aren't brains.

Forests are brains.


Funny, it looks about as complicated as I would have imagined it.


The real challenge here is to understand the brain, and consciousness, without putting it in computer terminology.

https://aeon.co/essays/your-brain-does-not-process-informati...


Sorry, but the author of that article clearly does not understand... It doesn't matter whether humans store data the way computers do; the point is that humans can be abstracted as state machines.

Yes, humans are certainly different from our PCs today, but that does not mean the principles are so different. E.g. humans seem to use a lot of lossy "compression", while that is something we have tried to avoid in software in recent years. So we use computers for the things humans cannot do very well, and therefore it appears that we are different.

I think the whole discussion about putting the human mind into computer terminology is more about accepting that humans are just biological machines versus believing that humans are special because they have a soul and are different from any other animal.


Alternatively, it's a seductive metaphor because humans love to try to explain one thing in terms of another. Thus the universe is like a clock, and the laws of physics operate like clockwork. Until QM and the metaphor broke down.

Similarly, the brain is like a computer. Until it's not.

Interesting how nobody runs it the other way. A computer is like the brain. Why don't we? Probably because we understand computers a lot better than brains, and use that analogy to make sense of something otherwise mysterious.

None of this has anything to do with a soul, btw. It's all a matter of how we think about things.

The mistake is when we take our metaphors literally and try to make domain B fit into domain A, despite obvious differences. Then we end up with a convoluted view of B that's been distorted by being mapped onto A.


The point, as I see it, is that we are trapped in our own paradigmatic understanding of the world, and use the most advanced metaphors we have at our disposal to conceptualize something that is still complex beyond our reach. Hilarity ensues years later when we see the body understood as steam pumps or engine valves, etc. (described by Zarkadakis in the essay). The challenge for us in 2017 is to not fall into the same trap, and to understand consciousness in a deeper way that isn't through the lens of "wires, circuit boards, central processors, data, etc." It's enticing, but it detracts from understanding the consciousness of all living things in a more advanced way. I am with you on your materialist presupposition; I just think we can do better. I think the de-simplification of these terminologies will actually lead to better augmented reality and better AGI.



