Scientists 'Inject' Information into Monkeys’ Brains (nytimes.com)
99 points by flinner on Dec 8, 2017 | hide | past | favorite | 85 comments


Summary: Two monkeys were able to learn to perform specific movements in response to microelectrode stimulation at corresponding (arbitrarily chosen) loci in the premotor cortex. The stimulation was too low to directly drive muscle movements.

The brain can learn to recognize signals from cochlear implants and other neuroprosthetics, and this study shows that monkeys can also learn to recognize signals from deeper implants. Note that previous studies have suggested that the premotor cortex has a role in associating stimuli and movements: https://en.wikipedia.org/wiki/Premotor_cortex#PMDr(F7)

The researchers speculate that the stimulation "may have evoked somatosensory and/or visual percepts, desires to move particular body parts, or other internal urges or thoughts, any of which the monkeys could have used as instructions".


How long did the experiment last for? Were they able to replicate the results in the same monkeys at a later date, or was this only for a month or so?


They only say that they performed nearly 200 sessions per monkey, where each session involves multiple movement task trials. You would have to contact the author for details: mschiebe@ur.rochester.edu

They don't report testing whether the learned associations persist, although that would certainly be interesting.


I was more interested in the 'decay' of the electrodes over months, i.e. can these be chronic implants that keep working? Still, thanks for the info!


I think the title is highly misleading. They didn't inject information.

They injected stimuli for some sensation, and monkeys learned how to interpret it.

So what they did is give the monkeys spidey senses, which they could trigger. And that would make an equally catchy title.


It seems that the brain game is starting; computer/brain interaction, here we go :D


Pretty soon companies will only want to hire developers augmented with brain interfaces. And why wouldn’t they? Such developers could build entire systems at the speed of thought.


> could build entire systems at the speed of thought

That's kind of the problem: you're limited by the speed of thought when coding. For most things I can type waaaay faster than I can think (a faster keyboard wouldn't help, just as a neural output interface wouldn't help much either), and information can be displayed on a computer screen waaay faster than I can read it (so a higher-bandwidth neural input interface wouldn't help either).

What would truly help would need to be much more complex than that, like something that takes input from somewhere, processes it, then spits the output to another part of the brain, hopefully in a way synced with current visual perception. Or takes input from one part of the brain, processes it through some algorithm, then spits output to another.

My point is that there's a lot to be gained... but I don't think that trivial implementations of BCIs would bring much gain to highly thinking-limited tasks, so the first generation of them would not be of much use to philosophers, software engineers or mathematicians. The first iterations will probably help people like fighter/drone pilots or surgeons the most, who are truly limited by interface bottlenecks, not thinking bottlenecks.


I don't think at the speed of thought means you need to verbalize in your head each bit of your code. You can perhaps visualize it. I can think of a snippet of a for loop a lot quicker than I can make the individual declarations of that loop.


You still need to write code, possibly to be read by people without brain implants.


Hmm. That actually suggests a solution: remove the requirement you mention, and we might get sizeable benefits from 1st-gen implants in jobs like coding too; we'd just need to switch/translate to languages specifically designed to make the best of the implant's particular characteristics.

Though that would soon turn into the mother of all "technical" debt issues...


You type faster than you think? Then who is doing the typing...


I'd interpret it as "once I have planned what I want to type, I can type it faster than I can come up with new things to type." So you always end up pausing occasionally to gather your thoughts between typing.


Are you being ironic? This won't be available to the public, much less even invented for at least a generation or two.


And the developers will have been replaced by AIs by then.


Exactly this. The order in which technological advances will take place as we approach whatever the "singularity" will be will not match our intuition at all!

Imho the future will surprise the shit out of us, despite our ability to anticipate some of the advances... The order shit will happen in will surprise everyone and lead to really unexpected events.

Hopefully we can handle at least some of the things right and steer stuff towards a "successful merge" ( https://medium.com/wordsthatmatter/merge-now-430c6d89d1fe )


It better get sorted out before the oceans turn to acid and everything south of Greenland becomes uninhabitable.


I occasionally look at my editor and know the exact changes or code I’d like to write, and feel a small pang of annoyance that I can visualize it perfectly yet still have to type it out.


Occasionally... unless you're either a genius or you work in highly verbose language and with horrible tooling, you likely spend the other 90% of the time staring at a screen full of code you've already loaded into your brain and thinking stuff like "wtf, dude, wtf, wtf does THAT do, why is that wired to that, how tf does that work, shit, that makes no f sense" :)

So that "miracle" interface would give you, like, what, a 4x increase on 10% of your time? That's like a +4% productivity gain in its first iteration, coupled with costs and risks that would make it a really tough sell. We're going to need to go pretty far with the nano stuff to get a good (cost~risk)/(bandwidth~latency) ratio out of it. Hopefully there are people working on it and people pouring money into it...


Actually 4x 10% is 40%.


By a 4x "increase" in 10%, he means you'd become 4x faster at things that currently take 10% of your time (the other 90% corresponds to the "coming up with a solution" / "wtf" moments).

That means for a job that takes 10 + 90 units of time, the 10 section would be done in 2.5, for a total effort of 92.5 instead of 100. That's an improvement of 100/92.5 - 1 = ~8.1% in productivity.
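This is just Amdahl's law applied to developer time. A minimal sketch of the arithmetic (the function name is my own):

```python
# Amdahl's-law-style estimate: speed up only a fraction of the total work.
def productivity_gain(fraction, factor):
    """Relative productivity increase when `fraction` of the time
    becomes `factor` times faster and the rest is unchanged."""
    new_time = (1 - fraction) + fraction / factor
    return 1 / new_time - 1

# 4x faster on the 10% of time spent typing:
print(f"{productivity_gain(0.10, 4):.1%}")  # prints 8.1%
```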


THANKS! I was totally braindead when I wrote that. My 4% is actually the 8.1% you mention, and the way you phrase it makes it much clearer too :)


Suppose you spend 9 hours looking at the code and one hour actually editing it, and the latter is the only part that can be made faster (if we are to believe grandparent). Now suppose this technology makes you 4x faster at editing code. Then you'll spend 15 minutes instead of one hour editing, for a total of 9 hours and 15 minutes: 45 minutes, or 7.5%, less time than you would otherwise have spent. Productivity being the reciprocal of the time the task takes, this is an increase in productivity of roughly 8%.


I read it as a 4x speed up of 10% of the work. So what took 10% now takes 2.5%. this leaves 7.5% remaining. Assuming that time is spent doing the same thing then it's 100/(90+0.925*7.5) which is ~3% increase in useful work. I'm sure this math is wrong.


Pretty soon I'll only want politicians and business leaders with brain interfaces.


>computer/brain interaction here we go :d

Totalitarian control over thoughts, here we go...


Relax, we'll be back to banging the rocks together in the ruins of our cities before then. I think most people are grossly overestimating the rate of progress vs. the rate of environmental collapse.


True that.


For computers, we had chess and go as milestones. I wonder what the milestones for this are.


Not being able to tell the synthesized input isn't real, I think.

Which is not a good target.

It's what everyone will want though. Because a) find me a more generally scoped killer app in any class (translation: literally infinite VC money), and b) it'll be what the media focuses on because of the obvious controversy.

And there will be tons of controversy. I picked up these tidbits years ago and have no sources, but I'm confident I'm quoting these correctly at least:

- Experimental direct electrical stimulation to the pleasure center of the brain produced a response where the subject didn't want the stimulation to stop, and classed it as surpassing all other experiences

- When monkeys were provided with a button that stimulated the sexually-related centers of the brain, all they did all day was push the button; the button had to be removed so they would eat and function normally

So, unfortunately, the target the world will want to hit is several leagues beyond safe.

The future will be fun...


Larry Niven wrote about 'wire-heads' in the Ringworld trilogy decades ago. Push a button for pleasure. Instantly completely addictive. So now we have confirmation he was right!


As someone who had one of the earliest cochlear implants done back in the '90's, I would say it'd be a non-invasive surgery (with minimal recovery/rehab time, say, a week), with near infinite options of modding the software without touching the hardware again. It'd be easier to implant toddlers (and younger) given brain plasticity, but that's a whole 'nother ethical argument that medicine has been dealing with since back then.

We certainly do live in interesting times.


That sounds more like a finish line than a milestone, but if it ever becomes available, sign me up!


Oh, what won't they invent to make me watch ads!


Science should foremost follow a moral and ethical attitude instead of just trying out whatever seems possible. Who gives them the right to destroy the brain of an ape for dubious activities? More empathy, please, if science is to take us somewhere better.


> just trying out what seems to be possible

This is done a) to increase knowledge and b) because that knowledge might eventually help humans (just like you, in your life, have used multiple medical solutions which originated in animal research). Now you can argue that you don't care about this, which is your good right, but making it sound like they do it just for the sake of it is factually wrong.

> Who gives them the right to destroy the brain of an ape for dubious activities.

It's a monkey, and 'destroy' is such an exaggeration it makes me think you don't actually know how the hardware used for these experiments works. These electrode tips are on the micrometer scale, and while I don't know the exact number of neurons actually damaged, it is a fraction of the total amount in that given region of the brain.


Who gives any right for anything?

The question is how much moral value you want to grant to the life of a monkey (not an ape, in this case). Most people wouldn't give it anything close to human-life level, at which point we're potentially open to experiments like these.

Assuming they're not doing it just for the kicks, but actually trying to make helpful discoveries about brains, the moral calculus will favour this experiment.

And if you do think a monkey's life is as important as human life, then well... you'll find many other more important moral violations to solve that have nothing to do with scientific experiments.


They are doing it for dopamine. The specific dopamine pathway is probably one linked to social status.


An oversimplification, but yeah. The same reason anyone does anything: Because somehow and in some way it makes them feel good (or at least they anticipate that it will do so).


I hope, though I'm not sure, that they followed IRB (institutional review board) procedures for their experiment. You're right that we, as scientists, shouldn't simply do whatever is possible. On the other hand, processes like the IRB provide a mechanism for oversight where ethical issues such as those you raise are considered.


It was for sure governed by their Institutional Animal Care and Use Committee (IACUC). IRBs are the human equivalent. Also, pedantic, but macaques are not apes; research on apes is extremely restricted today. Last I saw the NIH standards, they involved demonstrating that the research being done would benefit that individual animal, not just humans or even the species.


Science is not supposed to make anyone better. Science is a tool that can be wielded for any task, dubious or divine. Just like a knife.

Turn to philosophy or religion for morals.


People were "moral" before those words existed. Morals are simply emergent normative behavior.

Science is a human endeavor, and thus is affected by understood norms, like any other human endeavor.


Perhaps.

If a morality fails in the forest and nobody has language to describe it… is morality the equivalent of “sound is a pressure wave”, or the equivalent of “sound is an auditory experience”?

Moral codes have changed wildly over the years (and by place in any year). Slavery was “fine”, until it wasn’t. The death penalty is a thing in some places today, but seen as repugnant in others. Shellfish were forbidden, until someone had a vision of god berating them for turning it down. Bestiality was legal in half of Europe twenty years ago, but not today. Weed and gay marriage were unbelievable twenty years ago, but are increasingly acceptable today.

I’m vegetarian (trying to be vegan) just in case animals have that hard to describe thing commonly called consciousness or self-awareness or personhood, but I don’t have enough certainty to condemn anyone who thinks I’m being silly by worrying about that. Heck, I don’t even have a concrete definition of the thing I worry animals might have in order to test against.

What is morality? Really?


Assuming no supernatural things, all thinking a human does happens in their brain. Therefore, a very reductive approach: morality is our high-level generalization of what makes us feel bad or good. Things like:

- murder is wrong, because any given human wants to live (self-preservation drive), and also usually wants people they care about to live (them dying causes feelings of hurt)

- theft is wrong, because if you take something from me that I need, or feel connection to, I feel hurt

- etc.

Our social drive ensures that if you generalize things like these into more abstract code and enforce it, less of those bad things happen.

This - both dislike for things that tend to be a part of pretty much all moral codes, and the tendency to create those codes - seems to be built into our brain's firmware. Under this reductive view, the only actual claim morality has to universality is that all of us have pretty much the same hardware/firmware, and thus share the same basic moral principles. That is, hypothetical sentient aliens from outer space would likely have a different moral code due to the different process through which their brain-equivalent appeared.


Mm.

Counter point: Different people mean different things by the word “murder”. Does abortion count? I’d say no, but plenty of people disagree with me, violently. Does the death penalty count? I’d say yes, but again, plenty disagree violently. Does it count if you kill foreigners after invading their land? Does it count if you kill a chimp? If you euthanise a dog? A hamster? If you kill a prawn for dinner? People disagree on all of these.

Or “theft”: is taxation theft? Or is avoiding taxation theft?


There are differences, but that is to be expected if one views morality as just another human abstraction. There are different human cultures, and people are raised with different influences in varying environments.

I find changes in morality somewhat analogous to changes in language. They are both messy, ambiguous, chaotic, emergent processes that don't result in a single output.


>Moral codes have changed wildly over the years (and by place in any year). Slavery was “fine”, until it wasn’t. The death penalty is a thing in some places today, but seen as repugnant in others

Cultures have had varying definitions of all kinds of things, above and beyond morality. Energy, death, malaise, sun, blood, calx, gravity, atom, motion, heat, mind, vacuum, malaria. But we tend to say it makes more sense that those are real things, and they are not just the cultural beliefs about those things, or that the varying cultural beliefs prove that there is no underlying real thing.

There's a common and unfortunate conflation between (a) whatever societies happen to believe and (b) what morality actually is (if anything), in a normative sense. Sometimes it's referred to as the normative/descriptive divide. Saying "well gee, societies had different beliefs about right and wrong" doesn't necessarily establish anything about morality itself, it's just a descriptive approach to morality, which is one thing. It may just be that morality is normative, and our investigation of morality, our understanding of the natural world, and our societal willingness to face up to certain moral challenges (such as treating races equally or caring about animals) is something that changes over time, while morality stays the same.


Poorly-defined turtles all the way down.

(This is a more serious answer than it reads.)


Generally, to discuss morals one needs some irrefutable core tenets. In science nothing is irrefutable. Science is not great for all things concerning man, because we don't have a proper, logically solid formulation for many of them. I would prefer we have some moral core tenets that don't need solid logical proof. Like, "It's bad to kill".

But starting from irrefutable core tenets generally creates only poor science, or none at all. The main requirement of science is that, in the face of evidence, one must be ready to accept facts as they are.

So, one can and should have a moral discussion on what research is to be done, but should not say "That's not science! It's immoral!".

For example, if I were a fervent believer that god had created nature heterosexual, I might fervently deny the observed results that lesbian bonobo chimpanzees or gay mallards exist https://en.wikipedia.org/wiki/Homosexual_behavior_in_animals because that would be an "immoral" discovery.


Klasiaster's point is that just because you have that knife, doesn't mean you should stab people (or monkeys) with it.


Yeah. Still, it's not stabbing people at random; it's stabbing a few monkeys so that the science gets done, for the people who are still alive.


For certain monkeys, those "few monkeys" are family. Just like your family. You wouldn't want your family stabbed to advance a species you couldn't care less about.


philosophy only


Different arguments you could make will work for different people, but I have to say - my daughter needed heart surgery when she was two, but scientists had experimented on animals to perfect a procedure that allowed the surgery to be done through a catheter, rather than open-heart. That gave my daughter a much greater chance of surviving the procedure, without complications. So I'm delighted they did the experiments.

I greatly appreciate how much scientists try to reduce suffering when they perform these experiments - you should really look into what the approval process for experimentation is here in the US, and the extent to which any given animal can be experimented on.

But when it comes down to it, my family is not vegan, we eat meat, we rescue animals (four cats so far), we donate to the Humane Society, and we're THRILLED scientists experimented on animals to learn how to save our daughter. Some day illness could rob her of sight or hearing, and some day these experiments could lead to restoring her senses. This is not a difficult equation for me.

I absolutely will agree that science needs moral and ethical oversight, but this is far from "just trying out what seems to be possible." And I personally think they should have permission to experiment like this on the brain of an ape, and I do not consider these activities dubious.

So, perhaps your arguments will work on other people. But probably not on anyone like me who has had their child's life so dramatically improved by science just like this.


Who gives you the right to destroy the brain of a cow or an octopus for eating? Humans kill animals all the time, animals kill animals all the time, if this one ape happened to die slightly differently and got us all closer to data nirvana then isn't that a nobler way to go?


I think this is a simplistic view. Extend your argument one step further. "If this one human happened to die slightly differently and got us all closer to data nirvana then isn't that a nobler way to go?" Maybe? But it's definitely morally problematic.


(I wanted to reply to this last night but I hit the rate limiter, heh.)

Whether your extension is morally problematic depends entirely on your moral framework. Modified trolley problems are a good way to examine this: Given a chance to save N people by killing 1, what would N need to be before you'd take that chance? Most people would be outraged at N=5 or maybe even N=100. I think a significant portion would accept at N=100,000.


Not everyone cares about animal well-being. I certainly don't; to me they are less than human.


If you think this way about an animal, then you have the foundation to think this way about some groups of humans.


When you use a knife to cut bread, you have the foundation to go and slaughter people. It's literally the same principle of action. It doesn't imply you'll do it.

Similarly, being able to do moral value comparisons doesn't imply you'll find human lives inferior.


Really? So when looking at the trolley problem, it's irrelevant to you if on the side track is a slug rather than a person?


You can kill the slug and still feel bad that a living organism was destroyed.


I have a hard time arguing the ethics of animal research knowing I ate a cheeseburger yesterday and didn't finish it. An animal died to be thrown away.

Without animal research I probably would have never been born, having been wiped out by some plague centuries ago.


In 1944 Adorno and Horkheimer already described that domination of nature is a central characteristic of instrumental rationality in Western civilization, and thanks to globalisation it is slowly becoming a global norm. We will see if our augmented brains, trusted and synced on the blockchain, will see the flooding of data centres coming. Makes you wonder how all this fossil fuel powering all this became fossils.


Science is about how things are, not how they should be. Does the sun revolve around the Earth?


> Science should foremost follow a moral and ethical attitude

And who should be the arbiter of what constitutes a “moral and ethical attitude”? You?


The problem with "moral and ethical" is that all too often it turns into "give me money or make me the king".

I try to behave morally and ethically as I understand them but grow suspicious when others start bringing these things up.


Fire good, explosions bad?


This isn't really responsive to the grandparent's comment, which seems more concerned with the scientific process than the actual findings.

Also, explosions can be good, too. The Nobel prize wasn't named after a terrorist :)


Sure it is; the point is you can't separate discovery into good and bad until after the fact, with an opinion. GP thinks morals should guide self-censorship of discovery before the discovery, as if discoverers somehow know the repercussions beforehand and should avoid particular avenues, because that will somehow stop others who ignored the review board.

It won't happen. Someone will try it. What does saying "don't look here" do?


That depends entirely on the ethics and morals you choose to adopt. There are certainly ethics that allow you to assess the moral worth of an action prior to knowing the result.

For instance, deontologists like Kant would not care about the outcome to determine whether an action is right or wrong. "Act only in accordance with that maxim through which you can at the same time will that it become a universal law"

Utilitarians such as Mill would seek to maximize utility and would very much factor in the end result.


Namedropping philosophers like it's an argument is only relevant in theater. Will n scientists deciding not to look into something prevent its discovery?


> Namedropping philosophers like it's an argument is only relevant in theater

I'm not namedropping for the sake of namedropping, and I don't know why you'd say philosophers are only relevant in theater.

We're discussing ethics, and failing to understand the contributions of philosophy in that context is a major mistake (your reasoning is primarily utilitarian, and I'm not saying that's wrong, just naming what it is. Utilitarianism is very widely used in modern societies).

We're also debating science. Who do you think developed ideas of scientific processes; what knowledge is and how we attain it?

> Will n scientists deciding not to look into something prevent its discovery?

If getting to a discovery requires unreasonably horrible actions I hope it'll at least be delayed until such time it can be made without such horrible actions. Many have suffered throughout history in the name of scientific progress.

Plenty of ethics are implemented in science already. That's a good and ever-evolving thing.


Point taken. I can't help but build my morals into my assumptions, so I assume we are not talking about doing things to individuals against their will. You kinda pointed at that possibility and I appreciate it.

This (what GP and this story is about) is voluntary. Like letting a primitive android into your house so you can order pizza, or carrying around a sophisticated connected computer that informs you. The next step is always tighter integration. Pretending that an ethical review board is going to prevent "closed source humans" is not going to work.

People understanding it is the only way I can see to prevent it (it's already the status quo). I'm honestly looking for suggestions.


> This (what GP and this story is about) is voluntary.

How can a monkey give consent?


I don’t believe a monkey can.

I also don’t believe a human can, because we look at that nice doctor who is offering sincere advice and think “they have my best interests at heart”. We certainly don’t make rational judgments of probability, because if we as a species could do that, lotteries would not exist.


Klasiaster's point isn't about the results and its repercussions, it's about the process itself.


[flagged]


I fail to see how this helps the discussion on the topic. Perhaps I am taking your comments too seriously.



It's a Matrix reference.

/woosh


Can this be used to make lazy and clueless politicians smarter?


Please don't post like this here. We're trying for at least a little higher quality than internet-standard.


"Now, imagine that you had a device implanted in your brain that could shortcut the pathway and “inject” information straight into your premotor cortex."

"you" just "imagine"

Like "you" made it and had top-to-bottom control of its makeup. Does it come with encrypted microcode updates? The NYT keeps writing better distillations of itself, clinging to the idea that they [frame] tomorrow.

https://i.imgur.com/VUdcIou.jpg

I should stop, it's just for stroke patients and people who need it anyway. Why worry? Litho masks are a trade secret. Only technophobes want disabled people to have outdated chips.


This is the only worthwhile comment IMO. This is where discussion that isn't a joke would begin.

We can't even hold people who start wars of aggression accountable, we'd rather destroy food to keep prices up than feed the world, but brain/machine interfaces? That's all for helping people. Yeah, right. People who believe that have lost the plot on such a fundamental level that you don't even have to criticize them for them to be offended; you just have to not fall for it to have their insecurity come gushing.



