
A human was beaten by some thousands of CPUs and GPUs. On a caloric level, the human is still more efficient.

On the time to learn these skills, though... going from zero (computer rolls off the assembly line) to mastery, the computer wins.

Actually maybe the computer wins even on the caloric level, if you consider all the energy that was required to get the human to that point (and all the humans that didn't get to that point, but tried).



But the computer certainly does not win on the number of training samples required. The human is at the same level as the computer now for Go, but the computer has had far more training samples than Lee Sedol could process in his lifetime.

The next step is to reduce the training time/samples for the computer to get the same performance.


That's silly. Why would you want to put human limitations on the computer? We don't artificially put computer limitations on the human.


Learning is the ability to generalise from examples. Learning is far easier to define than intelligence. Whether algorithms can learn better than humans (generalise better from the same training data) is actually probably a more interesting question than whether they can get better results given unlimited data.

EDIT: But come to think of it, Go is a bad example, because you don't need any external training data at all to learn to play a game well. Computer programs can play against themselves and rediscover strategies that work well. That's just an advantage computers have.
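To make the self-play point concrete, here is a minimal, hypothetical sketch on a toy game (single-pile Nim): the program generates every one of its training examples by playing against a copy of itself, starting from zero external data. The game, the weight table and the update rule are all my own toy choices; AlphaGo's real pipeline (supervised learning on human games, self-play reinforcement learning, Monte Carlo tree search) is far more involved.

    import random
    from collections import defaultdict

    # Toy single-pile Nim: players alternately remove 1-3 stones; whoever
    # takes the last stone wins. The only point is the bookkeeping: all
    # "training examples" below are generated by self-play, not supplied.

    PILE = 15
    ACTIONS = (1, 2, 3)

    # Tabular "policy": a preference weight for each (stones_left, action).
    weights = defaultdict(lambda: 1.0)

    def legal_actions(stones):
        return [a for a in ACTIONS if a <= stones]

    def choose(stones):
        acts = legal_actions(stones)
        prefs = [weights[(stones, a)] for a in acts]
        return random.choices(acts, weights=prefs)[0]

    def play_one_game():
        """Both 'players' share the same policy; returns each side's moves and the winner."""
        stones, player = PILE, 0
        history = {0: [], 1: []}
        while stones > 0:
            action = choose(stones)
            history[player].append((stones, action))
            stones -= action
            if stones == 0:
                return history, player   # this player took the last stone and wins
            player = 1 - player

    def train(games=20000, lr=0.1):
        for _ in range(games):
            history, winner = play_one_game()
            for stones, action in history[winner]:
                weights[(stones, action)] *= (1 + lr)   # reinforce winning moves
            for stones, action in history[1 - winner]:
                weights[(stones, action)] *= (1 - lr)   # discourage losing moves
                weights[(stones, action)] = max(weights[(stones, action)], 1e-6)

    if __name__ == "__main__":
        train()
        # Known optimal play in this game is to leave a multiple of 4 behind;
        # after enough self-play the learned preferences tend toward that.
        for stones in range(1, PILE + 1):
            best = max(legal_actions(stones), key=lambda a: weights[(stones, a)])
            print(f"{stones:2d} stones -> take {best}")

Every example it learned from was produced by the program itself, which is why counting "training samples" gets slippery for this kind of system.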


I don't want to put a limit on the computer, not at all. But I do think humans have an edge on computers because, at least for now, humans can learn the same skills from fewer samples (at least in this example: Go).

Of course, if there are many samples, the computer can go through them faster, but if there are no existing samples and the computer has to learn example by example, just as humans do, humans may still have an advantage.

Of course, this advantage will also diminish as AI advances.


How do you count examples? The computer can generate its own examples by playing against itself. So in theory it needs 0 examples. This is not a useful metric at all.


I would count every played game as an example.

What I mean is that I am more impressed by anyone or anything that can do a task (Go, golf, chess, learning a foreign language, even doing the dishes) well with just a single example, or, say, an hour of training.

Being able to train in solitude is an advantage indeed. You need two humans to do this, but then you also need two AlphaGo instances.


Are you going to count all the games that the human played in their head too? What about the learning done in the human brain when sleeping? Do you count that too?


I don't think it's silly at all. I think you have a good point about this particular contest, but there are plenty of other applications where training data is prohibitively expensive or time-consuming to collect. One way to proceed is to work on making it easier to collect and curate this data, and another is to work on algorithms that require much less data to obtain good performance.


That's precisely my point. This is not a traditional machine learning scenario, and treating it as such is silly.


> but the computer has had far more training samples than Lee Sedol could process in his lifetime.

That's not obvious at all. I don't think you appreciate how rigorous and demanding the training of a Go world champion is, how utterly devoted to Go they need to be: http://lesswrong.com/lw/n8b/link_alphago_mastering_the_ancie...


That's a good point, but what the computer trained on is still orders of magnitude beyond what a human, even a devoted one, could ever review.
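A rough back-of-envelope comparison (all numbers hedged: the KGS and self-play figures are the round numbers I recall from the published AlphaGo paper, and the human study rate is a pure guess on my part):

    # Rough, hedged numbers: KGS and self-play counts are as I recall them
    # from the AlphaGo paper; the human study rate is a guess.
    human_games = 10 * 365 * 30          # ~10 games studied per day, for 30 years
    kgs_games = 160_000                  # human games behind the supervised phase
    selfplay_games = 30_000_000          # distinct self-play games for the value net

    print(f"devoted human, lifetime:  ~{human_games:,} games")
    print(f"AlphaGo, supervised data: ~{kgs_games:,} games")
    print(f"AlphaGo, self-play data:  ~{selfplay_games:,} games")
    # The self-play corpus alone is a couple of orders of magnitude larger.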

Of course, there are many ways you can do the comparison:

Time to build: The AlphaGo team didn't have billions of years of evolutionary tinkering to work with in refining biological heuristic/learning systems.

Hardware limits: though more efficient at search than previous designs, AlphaGo still has a lot more storage space and inter-component bandwidth than a human brain, plus better latency. Will the algorithms improve to the point where they can perform well on an extremely restricted architecture?



