I've heard of OpenCog before, and it, along with the Singularity crowd, gives me the same weird amateur, bullshitty, vague, generalist feeling that Noam Chomsky does. Basically: where's the beef? What has either crowd actually done, apart from taking credit from those who do things in the actual industry/real world?
My fundamental aversion to both OpenCog and the entire Singularity crowd is that a) their statements are so general as to be useless, and b) they don't do anything. Google makes search simple - go to google.com and find out. Google makes cars drive themselves - ask Nevada/California, and if you're a member of the press, request a test drive today. IBM's Watson definitively beat world champions in front of everyone, and before that IBM did it with Deep Blue.
Everyone in the other communities falls under this category: all talk - no walk.
What I've gotten out of both groups is essentially little more than what religious people get out of going to a sermon at church. The future will be grand, lots of bullshitty buzzwords, lots of hand-waving with huge claims - no hard calculations, no hard examples of what they've actually achieved.
I'll stick with Norvig/Google and his/their demonstrated achievements and knowledge over the talk, hype, and vaporware projects of groups that have yet to show any hard progress apart from a bunch of vaguely worded lectures to rich people.
The SENS movement gives me the exact same feeling.
Hi, this is Ben Goertzel, the chief founder of the OpenCog AGI-focused software project and of the AGI conference series.
Comparing Google Search and IBM Watson to OpenCog and other early-stage research efforts is silly. Google Search and IBM Watson have taken fairly mature technologies, pioneered by others over decades of research, and productized them fantastically. OpenCog is a research project and is aimed at breaking fundamentally new research ground, not at productizing and scaling-up technologies already basically described in the academic literature.
Lecturing is a very small percentage of what those of us involved with OpenCog do. We are building complex software and developing associated theory. Indeed parts of our approach are speculative, and founded in intuition alongside math and empirics. That's how early-stage research often goes.
Of course you can trash all early-stage research as not having results yet. And the majority of early-stage research will fail, probably leaving you feeling vindicated and high and mighty in your skepticism ;p .... But then, a certain percentage of early-stage research will succeed, because of researchers having the guts to follow their intuitions in spite of the ceaseless tedious sniping of folks like you ;p ...
Chomsky's expertise is in linguistics and political analysis. Stephen Pinker's The Language Instinct is a good, readable introduction to some of Chomsky's work (and the wider field to which he is pivotal). Chomsky's Manufacturing Consent is probably his classic work of political analysis.
You know in the soft sciences everyone is a quack because fundamentally they don't practice - wait for it - science. Science stops false connections by correctly attributing each effect to its cause. The social sciences do not. For all intents and purposes, the vast majority of social science is unreproducible, vague, poorly reasoned, pushed by agendas, or otherwise fundamentally flawed: it mixes correlation with causation, leans on dependent variables, and rests on statistical quirks.
> are effective and powerful ideological institutions that carry out a system-supportive propaganda function by reliance on market forces, internalized assumptions, and self-censorship, and without overt coercion
That's pretty self-evident to the point of being, well, pointless - admen of the 60s made their bread using this, and the PR pioneers of the 30s were already experts. But please let's all listen to what he has to say next. Let me guess: killing people is bad, and not killing people is good. If you call that amazing thinking, I'd hate to see the idiotic version.
Even better:
> Geoffrey Sampson maintains that universal grammar theories are not falsifiable and are therefore pseudoscientific. He argues that the grammatical "rules" linguists posit are simply post-hoc observations about existing languages, rather than predictions about what is possible in a language. Similarly, Jeffrey Elman argues that the unlearnability of languages assumed by Universal Grammar is based on a too-strict, "worst-case" model of grammar that is not in keeping with any actual grammar. In keeping with these points, James Hurford argues that the postulate of a language acquisition device (LAD) essentially amounts to the trivial claim that languages are learnt by humans, and thus that the LAD is less a theory than an explanandum looking for theories.
Sampson, Roediger, Elman and Hurford are hardly alone in suggesting that several of the basic assumptions of Universal Grammar are unfounded. Indeed, a growing number of language acquisition researchers argue that the very idea of a strict rule-based grammar in any language flies in the face of what is known about how languages are spoken and how languages evolve over time. For instance, Morten Christiansen and Nick Chater have argued that the relatively fast-changing nature of language would prevent the slower-changing genetic structures from ever catching up, undermining the possibility of a genetically hard-wired universal grammar. In addition, it has been suggested that people learn about probabilistic patterns of word distributions in their language, rather than hard and fast rules (see the distributional hypothesis). It has also been proposed that the poverty of the stimulus problem can be largely avoided, if we assume that children employ similarity-based generalization strategies in language learning, generalizing about the usage of new words from similar words that they already know how to use.
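To make the distributional idea concrete, here is a minimal Python sketch (the toy corpus, window size, and function names are my own illustrative assumptions, not anything from the cited research): each word is represented by counts of its neighbors, and similarity between those count vectors is the kind of signal similarity-based generalization could exploit.

    from collections import Counter
    from math import sqrt

    def cooccurrence_vectors(sentences, window=2):
        # Map each word to a Counter of the words seen within `window` positions of it.
        vectors = {}
        for sentence in sentences:
            words = sentence.split()
            for i, w in enumerate(words):
                context = words[max(0, i - window):i] + words[i + 1:i + 1 + window]
                vectors.setdefault(w, Counter()).update(context)
        return vectors

    def cosine(a, b):
        # Cosine similarity between two sparse count vectors.
        dot = sum(a[k] * b[k] for k in a if k in b)
        norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    corpus = ["the cat sat on the mat", "the dog sat on the rug", "a cat chased a dog"]
    vectors = cooccurrence_vectors(corpus)
    print(cosine(vectors["cat"], vectors["dog"]))  # high: "cat" and "dog" share contexts

Words that occur in similar contexts end up with similar vectors, which is all the distributional hypothesis claims; no hard and fast rules are consulted anywhere.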
Another way of defusing the poverty of the stimulus argument is to assume that if language learners notice the absence of classes of expressions in the input, they will hypothesize a restriction (a solution closely related to Bayesian reasoning). In a similar vein, language acquisition researcher Michael Ramscar has suggested that when children erroneously expect an ungrammatical form that then never occurs, the repeated failure of expectation serves as a form of implicit negative feedback that allows them to correct their errors over time. This implies that word learning is a probabilistic, error-driven process, rather than a process of fast mapping, as many nativists assume.
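As a rough illustration of that error-driven picture, here is a tiny Python sketch (the delta-rule form and all the numbers are invented for the example; this is not Ramscar's actual model): an expectation that repeatedly fails to be confirmed decays away on its own, which is the implicit negative feedback described above.

    # Toy error-driven learner: an expectation that keeps failing decays toward
    # zero, acting as implicit negative feedback -- no explicit correction needed.
    def update(expectation, observed, rate=0.2):
        # Delta rule: move the expectation toward what was actually observed.
        return expectation + rate * (observed - expectation)

    # A child over-generalizes and expects "goed" as the past tense of "go".
    p_goed = 0.9
    for _ in range(20):        # 20 exposures; "goed" never occurs in the input
        p_goed = update(p_goed, observed=0.0)
    print(round(p_goed, 3))    # ~0.01: the erroneous expectation has decayed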
Finally, in the domain of field research, the Pirahã language is claimed to be a counterexample to the basic tenets of Universal Grammar. This research has been primarily led by Daniel Everett, a former Christian missionary. Among other things, this language is alleged to lack all evidence for recursion, including embedded clauses, as well as quantifiers and color terms. Some other linguists have argued, however, that some of these properties have been misanalyzed, and that others are actually expected under current theories of Universal Grammar.
> You know in the soft sciences everyone is a quack because fundamentally they don't practice - wait for it - science.
I wonder if you know you're being ironic here. Plenty of us have never even read Chomsky's political works and have been exposed to him solely through mentions in the CS literature, like the Dragon book, or more in-depth stuff on his theory of context-free grammars. There is a startling amount of proof that he not only writes about politics but, at one time or another, actually worked for a living and helped our field produce useful stuff.
Angry much? Have you actually read Chomsky, or are you just taking snippets from Wikipedia pages and saying told-you-so? Perhaps you should try reading Manufacturing Consent; it's a very careful and thorough work of analysis, and not nearly as bleedingly obvious as you try to portray it.
One point: Sampson's criticisms about linguists producing post-hoc descriptions could just as easily have been (and were, I believe) applied to Newton's theories. Good science includes mapping and describing phenomena.
Another point: negative feedback on errors is not enough to account for the explosive speed of language acquisition in children. Not to say that this sort of feedback doesn't occur, or isn't useful, but it is only really used when children learn exceptions (i.e., irregular verb forms in English) or vocabulary (and even much of vocabulary is rule-generated). Basic language rules are encoded, and children's brains only require minimal stimulus to record the specific settings of the rules for the language they are learning.
> Everett (2005) has claimed that the grammar of Pirahã is exceptional in displaying 'inexplicable gaps', that these gaps follow from a cultural principle restricting communication to 'immediate experience', and that this principle has 'severe' consequences for work on universal grammar. We argue against each of these claims. Relying on the available documentation and descriptions of the language, especially the rich material in Everett 1986, 1987b, we argue that many of the exceptional grammatical 'gaps' supposedly characteristic of Pirahã are misanalyzed by Everett (2005) and are neither gaps nor exceptional among the world's languages. We find no evidence, for example, that Pirahã lacks embedded clauses, and in fact find strong syntactic and semantic evidence in favor of their existence in Pirahã. Likewise, we find no evidence that Pirahã lacks quantifiers, as claimed by Everett (2005). Furthermore, most of the actual properties of the Pirahã constructions discussed by Everett (for example, the ban on prenominal possessor recursion and the behavior of WH-constructions) are familiar from languages whose speakers lack the cultural restrictions attributed to the Pirahã. Finally, following mostly Gonçalves (1993, 2000, 2001), we also question some of the empirical claims about Pirahã culture advanced by Everett in primary support of the 'immediate experience' restriction. We conclude that there is no evidence from Pirahã for the particular causal relation between culture and grammatical structure suggested by Everett. -- Pirahã Exceptionality: A Reassessment, http://dash.harvard.edu/handle/1/3597237
> social science is unreproducible, vague, poorly reasoned, pushed by agendas, or otherwise fundamentally flawed: it mixes correlation with causation, leans on dependent variables, and rests on statistical quirks...
Dr. Freud would have had a good deal to say about your apparent fixation with bovine feces...
Seriously though, your comments are playing fast and loose with a range of fields that you're conflating and dismissing. Not all social sciences are "soft", and many have empirically based real-world applications that shape your (and everyone's, really) everyday life.
So I figured it out. Basically, they take the idea of AGI seriously, and actually consider and talk about the repercussions, and therefore you dismiss them and their ideas as fringe and not worth investigating. I know that, because if you had investigated at all, you would see that all of those projects had really interesting results and these people are not being vague and hand-waving.
Not all of those projects I listed identify themselves as AGI. However, they should go in the same group.
And anyway, all of those projects have demonstrated progress. If you looked into them at all, you would see that. Ben Goertzel is using some aspects of his AGI research in mainstream (narrow) AI projects. OpenCog has released a number of solid demonstrations of current features. And Goertzel isn't hand-waving or bullshitting in his numerous books and scientific papers, for example Probabilistic Logic Networks: A Comprehensive Framework for Uncertain Inference (336 pages).
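For a flavor of what "uncertain inference" means there, below is a toy Python sketch of PLN-style deduction. The independence-based strength formula is my paraphrase of the deduction rule from the PLN book, and the numbers are made up, so treat both as assumptions rather than a faithful excerpt.

    def pln_deduction(s_ab, s_bc, s_b, s_c):
        # Strength of "A implies C" from "A implies B" (s_ab) and "B implies C"
        # (s_bc), given base rates s_b = P(B) and s_c = P(C), assuming independence.
        if s_b >= 1.0:
            return s_bc
        return s_ab * s_bc + (1 - s_ab) * (s_c - s_b * s_bc) / (1 - s_b)

    # Toy numbers: P(B|A) = 0.9, P(C|B) = 0.8, P(B) = 0.1, P(C) = 0.2
    print(round(pln_deduction(0.9, 0.8, 0.1, 0.2), 3))  # 0.733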
Voss is using his system at Adaptive AI as a commercial enterprise.
Qualcomm is funding Brain Corporation (Izhikevich et al.), so obviously they are taking it seriously. A bakery in Tokyo has tested Brain Corporation's machine vision technology to power a semi-automated cashier system.
I'm sympathetic to both Chomsky's and OpenCog's aims.
I know Chomsky is a serious scientist with considerable accomplishment.
I have seen totally loony stuff in videos of AGI conferences (tachyons and stuff). OpenCog may be better than that. But it hasn't proved that it is.
The AI of the 1970s-80s involved the Chomskyan paradigm of "draw up a naive design of the mind and/or brain and implement it". That failed so badly that you need a really good argument for why you can do things differently - at least to move into mainstream science. Ben Goertzel seems nice, smart, and enthusiastic, but I can't see him bringing anything new to the table. Jeff Hawkins had interesting ideas with his temporal paradigm, but the model he chose to instantiate didn't seem all that different from the one used by the statistical-brute-force crowd. And Numenta has had remarkably few announcements for a six-year-old enterprise.
And then there are the companies paying for AI to be added to their systems. That happened from the start, but it was never enough. What's different here from the stuff of twenty years ago?
AGI is mainstream science, these days. The keynote of the 2012 AAAI conference (the major mainstream AI research conference each year), by the President of AAAI, was largely about how the time has come for the AI field to refocus on human-level AI. He didn't use the term "AGI" but that was the crux of it.
The "AI winter" is over. Maybe another will come, but I doubt it.
What's different from 20 years ago? Hardware is way better. The Internet is way richer in data, and faster. Software libraries are way better. Our understanding of cognitive and neural science is way stronger. These factors conspire to make now a much better time to approach the AGI problem.
As for my own AGI research lacking anything new: IMO you think this because you are looking for the wrong sort of new thing. You're looking for some funky new algorithm or knowledge structure or something like that. But what's most novel in OpenCog is the mode of organization and interaction of the components, and the emergent structures associated with them. I realize it's a stretch for most folks to accept that the novel ingredients needed to make AGI lie in the domain of systemic organizational principles and emergent networks rather than novel algorithms, data structures, or circuits -- but so it goes. It wouldn't be the first time that the mass of people were looking for the wrong kind of innovation, hmm?
Regarding tachyons in videos of AGI conferences, could you provide a reference? AGI conference talks are all based on refereed papers published by major scientific publishers. Some papers are stronger than others, but there's no quackery there.... (There have been "Future of AGI" workshops associated with the AGI conferences, which have had some freer-ranging speculative discussions in them; could you be referring to a comment an audience participant made in a discussion there?)
I wish you luck (well, sort of - with great power would come great responsibility and all that).
I wasn't making up the tachyon guy. If I have time, I'll dig up the video (it'd be a little hard, since the hplus website reorganized). He was a presenter, not an audience member, and had at least one paper at one of these conferences. I can easily believe the AGI conferences have gotten better.
I would stick to the point that AGI needs to make clear how it will overcome previous problems - being clear to mainstream science is useful for funding, but being clear to yourselves, so you have ways to proceed, is most important.
I don't necessarily agree exactly with Hubert Dreyfus's critique, but I think that at a minimum a counter-critique to his critique is needed to clarify how an AGI could work.
I mean, I have worked in computer vision (not even that much). There's no shortage of algorithms that solve problem X, but nothing in particular weds them together. Confronted with a new vision problem Y, you are forced to choose one of these thousand algorithms and modify it manually. You get no benefit from the other 999.
As far as open-source methodologies solving the AGI question: I've followed multiple open-source projects. While certain things might indeed work well developed in the "bazaar" style, I haven't seen something as exacting as a computer language come out of such a process - languages tend to require an individual designer working rather exactly, with helpers certainly, but in many, many situations almost alone (look at Ruby, Perl, Python, etc.). I would claim AGI is at least as exacting as a computer language, possibly more so. Further, just consider how the "software crisis" - the limitations involved in producing large software with large numbers of people - expresses the absence of AGI. Essentially, to create AGI, you would need to solve something like a bootstrapping problem, so that the intentions of the fifty or five thousand people working together add up to more than what fifty or five thousand intentions normally add up to in ordinary software engineering. I suppose I believe some progress on that very basic level is needed first.
To me, the AGI conference seems to have a much higher ratio of "speculative ideas" to "technical results" talks than, say, ICML. Also to me, this pretty much justifies the "all talk - no walk" assessment.
This is Ben Goertzel, chief founder of the AGI conference series.
You are correct that the AGI conferences have a higher ratio of "speculative ideas" to "technical results" than ICML. This is intentional, and I believe appropriate - because AGI is at an earlier stage of development than machine learning, and because it's qualitatively different in character from machine learning.
Machine learning (in the sense the term is now typically used, i.e. supervised classification, clustering, data mining, etc.) can be approached mainly via a narrowly disciplinary approach. Some cross-disciplinary ideas have proved valuable, e.g. GAs and neural nets, but those cross-disciplinary ideas have quickly been "computer-science-ized"...
OTOH, I think AGI is inherently more complex and multifarious than ML as currently conceived, and hence requires more "out of the box" and freely multi-disciplinary thinking.
I think that in 10-15 years, when the AGI field is much more mature, the conferences will seem a bit more like ML conferences in terms of the percentage of papers reporting strong technical results. BUT, they will never seem as narrowly disciplinary as ML conferences, because AGI is a different sort of pursuit...
Thanks for the kind reply. I said ICML, but NIPS would have been a better point of reference -- since it was originally conceived as a cross-disciplinary enterprise. The NIPS TOC looks like this:
which indicates it's possible to have a selection of papers both technically sharp and interdisciplinary. We should all be so lucky to attract such a set of papers.