AI100: One Hundred Year Study on Artificial Intelligence (stanford.edu)
45 points by fitzwatermellow on Jan 2, 2016 | hide | past | favorite | 13 comments


Is there a history of similar efforts (albeit in different topic areas)?


In order to have intelligence, not just pattern recognition, systems must have an analog of emotions.


Do you think emotion is necessary for all (not just human) intelligence? Or do you think emotional intelligence merely is one form of intelligence? Or perhaps it helps other forms? How do you define (what do you mean by) intelligence?

I would be more inclined to agree with your statement if it were restated as "In order to act in a human-like way, systems probably need an analog of human emotions."


Just in case anyone fails to spot the implications of AI without emotions: we already have examples of non-artificial intelligent, sentient beings without the "emotion" of empathy. We call them psychopaths and sociopaths.

I'd like non-sociopath/psychopath AI, please. That doesn't seem too much to ask, considering the possible fallout.


This is nonsense. They do have emotions. Hunger is one of them. Self-preservation is another. Pain is also very important.


It is even in the language: an emotion is what moves one. Hunger, pain, cold, fear. We learn by moving.


But in order to recognize those emotions, it must have the ability to do pattern recognition that recognizes them. Otherwise it's just an animal.


You're still anthropomorphising too much by comparing it with an animal. A large amount of an animal's behaviour is genetically determined, wrought by Mother Nature step by step in thousands of ad-hoc adaptations, and we simply don't know what many of those adaptations are. They come together in something of a cohesive whole, each part depending intricately on all previous parts, and that's why we socially understand dogs to the extent that we do: lots and lots of the machinery that makes us both is shared.

By contrast, hopefully we'll design our AIs according to, say, engineering principles. That way, we can know what their motivation is, which I'm sure you'll agree is very important when creating something that is potentially substantially smarter than us. This is so unlike any other intelligence we interact with that there's no particular reason we should be able to compare it to them.


My comparison is actually the pattern-recognizing feedback loop, not the animal part. It's just that humans can reflect on their emotions; animals cannot.

AI will be able to reflect in even greater ways, because its sensory input will be far more pervasive.


Someone knows something about how to get sustainable research funding.


No, Horvitz, who set this up and heads Microsoft Research, is paying for this. Or at least a lot of this.

Horvitz has a decent track record. If this were a Paul Allen initiative, it wouldn't end well.


It sounds like asking how to get startup funding.


Committees... bleh.



