The author should have submitted the last two paragraphs, and left it at that. The singularity does indeed seem to be more of a religious than a scientific vision - rapture for nerds. This is the only worthwhile sentence I find in this article.
John Horgan sounds like so many other critics declaiming that dramatic technological progress is impossible. History has in many examples proved critics wrong in a spectacular way. Although he doesn't say it directly, Horgan implies that "thinking" machines will not be created in at least 50 years - which to me sounds just as ridiculous as declaring heavier-than-air flight or space flight impossible.
The article doesn't really contain any decent points at all - the author cites neuroscience researchers east and west claiming that any dream of smarter-than-human machines is impossible, none of which comes close to refuting the central argument. 'We don't understand the brain, so intelligence and consciousness can never be understood in a decent time-frame' is as close to a key point as I am able to interpret out of this.
"The implication of his thought experiment is that our psyches will never be totally reducible, computable, predictable, and explainable."
Never explainable? Just like the stars, motion, nuclear physics and the rest of the human body?
Cochlear implants are also cited as having "poor sound quality" - how does this prove that implants won't get better?
There is really nothing to see here - perhaps there would be if the author made his points a bit clearer.
We need to stop measuring the intelligence of computers by how many calculations per second they do. Right now we can put together computing systems that can do the amount of calculation under discussion, but fairly slowly. So what if it takes 10x as long to get a thought out? It's all in the program, and no one has the slightest clue how to program thought for now.
I'm also skeptical that the singularity will really move all that fast -- how do we know how much time it will take for each 1% of improvement? Just because a computer is capable of self-improvement doesn't mean that it will be fast self-improvement. I'd imagine that while the computer is better at each step, each step is also harder.
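The intuition above can be put into a toy model (purely illustrative, with parameters I made up - nothing from the article): suppose each self-improvement step multiplies the system's capability, but the difficulty of the next step grows too. Whether "takeoff" is fast or slow depends entirely on which growth rate wins:

```python
def time_to_reach(target, gain, difficulty):
    """Total time for a self-improver to reach `target` capability.

    Each step multiplies capability by `gain`, but the hardness of the
    next step grows by `difficulty`. Step duration = hardness / capability
    (smarter means faster, harder means slower). All values hypothetical.
    """
    capability, hardness, elapsed = 1.0, 1.0, 0.0
    while capability < target:
        elapsed += hardness / capability
        capability *= gain
        hardness *= difficulty
    return elapsed

# Difficulty outpaces gain: each step takes longer than the last.
slow = time_to_reach(1000.0, gain=1.5, difficulty=2.0)
# Gain outpaces difficulty: steps accelerate (a "hard takeoff").
fast = time_to_reach(1000.0, gain=2.0, difficulty=1.5)
print(slow > fast)  # True: the diminishing-returns regime takes far longer
```

The point of the sketch is only that "capable of self-improvement" doesn't by itself fix the timescale - the same mechanism yields either regime depending on how fast the problems get harder.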
Good read. But if there is a conclusion to be drawn here, it's that we have a severe lack of common-sense POVs about human enhancement. On one end we have Ray Kurzweil, and on the other we have articles like this. The really useful path is Provigil-like incremental steps, which are utterly marginalized and looked upon with suspicion.
And these kinds of small steps can be seen quite often in the past few years... from a flash game that increases IQ, to electrodes that greatly enhance memory formation, to 100 times more cognitive science articles than 20 years ago.
You don't have to upload to live to 120. You just have to get past Alzheimer's, cancer, and strokes.
You don't have to be able to completely understand the brain in order to learn faster or be more productive. There are already many, many things we can do, but as far as I can tell we have a problem of perception: making medicine to cure dandruff is commendable, but any kind of enhancement is frowned upon. This too is religion at work.
Religion is based on faith--believing in the impossible, contrary to all empiric evidence. Belief in the Singularity is simply a prediction based on current technology vectors. If you understand the theories behind artificial intelligence, and believe Moore's law will continue for 10-20 years, the Singularity is a logical conclusion. To call it a religion reveals a deep misunderstanding of the theory, and of the motivations of the people who think it is a possibility.
It's not a religion in any complete sense, but it's kind of obvious that a gigantic leap of faith is involved (starting with the use of the definite article in the phrase "The Singularity", as was shrewdly pointed out by a recent commentator here), and I do think the leap can fairly be called religious. Why?
First, despite what you and others say about "empiric evidence", Moore's Law doesn't at all imply a concomitant leap in programming power (is our capacity growing exponentially?), and extrapolation from actual achievements to date points in a decidedly more mundane direction.
Second, the "singularity" predictions are the latest in a tradition of extravagant millenarian claims that people have been making about AI for decades. These claims have consistently turned out to be overblown, if not vacuous, in retrospect. Extrapolate from that historical trend and you arrive at quite a different (and rather more plausible) picture.
Third and most interestingly, the emotions and language surrounding the "singularity" are distinctly religious. This really stands out in what I've seen of the discourse. Since many advocates of "singularity" are of an intellectual cast that objects to the more popular religions, it seems to me as if some are directing their religious impulses into technology. As someone brilliantly says in the OP, it's "The Rapture For Nerds". (Notice that definite article again.)
By the way, I don't mean "religious" as a criticism per se. Personally, I think religious impulses are part of being human. What fascinates me about The Singularity discourse is the incongruence between its technological subject matter and its vividly religious emotional language.
Not all religious systems are based on faith. It seems to me that the faith aspect of some religions is a side effect of the fact that religion is motivated by a desire to explain and/or celebrate human purpose. Logically arguing that (human) purpose is fundamental is a difficult thing to do, and as a result many religions are illogical and rest on faith. But there are many varieties of religion that don't rest on miracles. Mysticism, for example, is a religion not based on dogma but purely on biology, in which understanding of purpose is derived from first-hand phenomenological experience (as most people who have taken LSD or reached some level of expertise in meditation can attest). Sure, some mystics make stupid dogmatic statements and unfalsifiable claims related to these experiences, but that's just guilt by association.

The Singularity could be another example of a religion - in this case a religion in which purpose is derived from a sense of inevitable progress toward a valorized endpoint (the "Omega Point" is what Teilhard de Chardin, the Catholic mystic and biologist, termed it, and some Singularity-ists have adopted his terminology). It is based not on faith but on reasoned logic (which could very well be wrong), but the very zeal and the way in which human purpose is derived from it are reason enough to call it a religion.
One problem with this type of discussion is that religion is sometimes subject to the same kind of moving-target definition that AI is subject to: if a philosophical system that was considered religious becomes viable/practical, is it defined out of being a religion? I myself look forward to the day when all religions are based on ideas that work, rather than belief in miracles.
So someone offers to upload your brain into a computer and you go along with it?
"In other words," said Benji, steering his curious little vehicle right over to Arthur, "there's a good chance that the structure of the question is encoded in the structure of your brain — so we want to buy it off you."
"What, the question?" said Arthur.
"Yes," said Ford and Trillian.
"For lots of money," said Zaphod.
"No, no," said Frankie, "it's the brain we want to buy."
"What!"
"I thought you said you could just read his brain electronically," protested Ford.
"Oh yes," said Frankie, "but we'd have to get it out first. It's got to be prepared."
"Treated," said Benji.
"Diced."
"Thank you," shouted Arthur, tipping up his chair and backing away from the table in horror.
"It could always be replaced," said Benji reasonably, "if you think it's important."
"Yes, an electronic brain," said Frankie, "a simple one would suffice."
"A simple one!" wailed Arthur.
"Yeah," said Zaphod with a sudden evil grin, "you'd just have to program it to say What? and I don't understand and Where's the tea? — who'd know the difference?"
"What?" cried Arthur, backing away still further.
"See what I mean?" said Zaphod and howled with pain because of something that Trillian did at that moment.
"I'd notice the difference," said Arthur.
"No you wouldn't," said Frankie mouse, "you'd be programmed not to."
Ford made for the door.
"Look, I'm sorry, mice old lads," he said. "I don't think we've got a deal."
For one thing, this would make the free/open versus proprietary software issue a lot more crucial than ever before. I can imagine accepting software updates (for software I have opted into) from a well-run open source project. Of course the idea of mind=software is itself only a very loose metaphor that fails to capture the nuances of development of an embodied mind. But I could imagine some computer extension to the brain that would add new cognitive potentialities to the existing repertoire of brain structures. Who knows, maybe it's not that far off? There are already direct brain interfaces for very simple kinds of control of machines.
If a programmable brain extension acts like other high-level brain modules then it should create a new range of abilities rather than dictate a rigid behavior. The former is something that could gain wide acceptance, and the latter would be an unethical way of enslaving people. The unethical rigid control of behavior also seems easier to implement with current knowledge since much more is known about low-level brain and spinal control networks than is known about high-level cortical abstraction.
I have a more-or-less open mind, but I can see significant problems. I mean, how many times have we been told that some big new thing is just "five years away" and yet it never arrives? I suspect a rapture-like singularity is not going to happen, but augmentation (which to some extent is already happening) will accelerate in a very interesting way.
Anyway, it certainly highlights the ethical side of software engineering, as you point out.
What to think about the singularity? I'm not sure this article tells us much either way. I'd guess interviews with top mechanical engineers in 1890 about the prospects of landing men on the moon within 80 years would have sounded much the same.
Only if those top engineers were convinced that landing on the moon would be the turning point for mankind becoming space travellers, and felt the need to prepare humanity for its coming. Actually, that sort of thing did happen, but the point is just how far those absurd notions are from reality: forty years later, no human has been further than the moon.
I'm not sure a single bit of the wild speculation that occurred in 1890 will be useful if we ever do begin large-scale space travel.
You don't notice the rapture-of-the-nerds themes in 'the singularity'? Or the 'god-like powers' themes of the singularity? That is totally a religious pattern of thought.
Yep, I see them there. But does that really have any bearing on the likelihood of it being true? Religion encompasses our greatest desires, so if technology becomes sufficiently powerful, wouldn't the fulfillment of those desires be a likely result?
Is there a danger that some people's thinking could be influenced by their desire for it to happen? Of course. But I think that just as many people are likely to dismiss it for irrational reasons as well. That is why it is important for people to keep their eye on the real science.
The claims are falsifiable. Just like any other prediction of the future. You just have to wait and see if the prediction becomes true. To falsify the proposition that 'the singularity' will occur within the next 40 years will take up to... 40 years.
In your terminology all predictions of the future are 'unscientific'. I'm not going to argue definitions of words with you. The important thing, as with any prediction, is the chances of it being true.
Specific claims with specific dates are falsifiable, but saying that there will be a singularity in the next 40 years is insufficiently specific (without carefully defining the term singularity, which very few singularitarians do), and proponents of such claims have a tendency to push back their prediction dates as the time when those predictions will need to come true approaches (AI, flying cars, the end of the world, etc.). It is the lack of specificity, both in the conditions of falsehood and in the time frame, that renders the singularitarians' claims as unscientific.
The word 'unscientific' is a very loaded one. It can be a powerful weapon, able to paint any assertion not presented like a scientific paper as the equivalent of astrology. It has been used in many instances to discredit quite reasonable (and correct) claims. Claims like 'smoking causes lung cancer'. Global warming...
What would you have said to those who in 1994 were predicting the impact the internet would have on the world over the next 10-15 years?
There are those who have attempted to provide precise definitions of elements of the singularity. The 'Turing test' is an example, and there are many others if you care to look.
The Turing Test is not an element of the singularity--it predates the idea of the singularity by a sizable margin, and was an effort to define the idea of artificial intelligence. I would certainly encourage people who wish to believe in the coming singularity to set similar concrete goals for their vision of the future. But, even then, the idea of the singularity will not be science--although the specific, falsifiable claims might be.
Claims made in 1994 that the internet would change the world were not science (although some specific claims might have been)--this does not make them useless or wrong. It isn't a scientific claim that the Mona Lisa is a good painting, or that dogs are loyal. But most people would agree with both of these statements.
Most predictions (ignoring those of incredibly limited scope) about the future are unscientific. This, by itself, does not make them false. But trying to pretend to be science is rarely a good sign.
The Turing Test was proposed a lot earlier, yes. The prediction that it would be passed before 2030 is strongly related to the singularity. If (when?) the cost of owning and operating one million real-time Turing-test-passing entities was less than $1000 per year (currency is another definitional problem), then I think we could safely say we'd reached a concrete landmark on the road to 'the singularity'. I just came up with that in a few minutes - there are actually people who devote a lot more time to this, and yes, they do have some pretty concrete predictions.
Do you accept the notion that there are some activities that are more 'scientific' than others? Or does your definition of the word only admit a boolean value for each activity? I am in the former camp, and frankly believe that anyone who holds the 'absolute boolean value' notion probably has a mental disorder (I can't describe it any other way).
The original context of the article was that the singularity is more a religious idea than a scientific one. On a scale of 'how scientific' something is, I believe the singularity and the surrounding body of work associated with it is far closer to science than religion. That was the point I was trying to make.
So I think I get it - you think the word 'scientific' should only apply to something if it is, say, 98% applicable, whereas my standards for use are a bit lower, at maybe 85%. And the common criterion is probably somewhere around 50%.
This is an emotional reaction that comes up frequently. Arguments are answered with "but that's not at all what we said, you just don't understand". (I don't mean "emotional" as a criticism, though I realize it probably sounds that way.) The problem is that it's a moving target. For example, the objection that "exponential advances in hardware are not matched by exponential advances in software, so extrapolation from the former doesn't apply to the latter," is met with something like, "You are uninformed if you think The Singularity has to do with extrapolation from Moore's Law. That's a common misunderstanding." One wonders how the misunderstanding got so common if that's not what the advocates said! Anyway, this moving-targetness is to me an indicator of religious attachment to a concept.
I stopped arguing for this stuff in online forums because it's not the most effective way to communicate it to a lot of people. Instead I'm working on writing up stuff that can be useful to a wider audience (instead of the few dozen people who would read this comment.) I'll submit it here when I eventually finish writing it up.
Secondly, there's a lot of fractured mutually incompatible thought that goes under the banner of "Singularity". There's a lot of people who do think it has to do with some sort of intersecting exponential curves (this is the Kurzweil school of thought).
"One wonders how the misunderstanding got so common if that's not what the advocates said!"
There is no "the advocates", because "Singularitarianism" isn't a single movement. The fact that it's fractured only indicates that there's three (or more) different camps -- these are different people saying different things, not the same people acting as a 'moving target'. I don't see how that makes it a religion. And you're right that there's a decent chunk of people who are dreaming of utopia and bliss and do take a Sci-Fi sort of attitude toward it -- but there are good scientific reasons to work on these issues, and good reasons to be worried about the implications.
For what it's worth, my claim basically comes down to two things: 1) AI is feasible. 2) There will be profound changes when AI appears on the scene. And we need to worry about the big issues relating to this.
I hate the term 'singularity'. It's too associated with Kurzweil's take on things, and it evokes connotations of a 'hard takeoff'. I prefer to just talk about AI's importance, feasibility, implications, and dangers.