
There is a lot of progress being made in AI right now. Hard problems that were expected to take decades are being beaten regularly. Who is to say how much AI could advance in the next 20-30 years? Do you really believe there's less than a 50% chance strong AI will be invented in your lifetime?


>There is a lot of progress being made in AI right now. Hard problems that were expected to take decades are being beaten regularly. Who is to say how much AI could advance in the next 20-30 years?

"We've beaten some hard problems more quickly than expected therefore we'll likely beat other hard problems more quickly than expected" is logical induction. It's equivalent to "I just flipped a coin and got heads. I'll probably get heads again on the next flip." Don't do that. :)

>Do you really believe there's less than a 50% chance strong AI will be invented in your lifetime?

I don't know. I don't really see the value in speculating.


>I just flipped a coin and got heads. I'll probably get heads again on the next flip

Which is correct, if you don't know the true probability of flipping heads. You might find, for example, that it's a trick coin with two heads and no tails.
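
To put numbers on that (a toy Beta-Bernoulli sketch in Python, nothing more): with a uniform prior over the coin's bias, Laplace's rule of succession says a single observed head pushes your estimate for the next flip up to 2/3.

    # Beta-Bernoulli updating: a uniform Beta(1, 1) prior over the coin's
    # heads-probability, updated on the flips we've seen.
    def prob_next_heads(heads, tails, prior_a=1.0, prior_b=1.0):
        # Posterior predictive P(next flip is heads) under a Beta prior.
        return (prior_a + heads) / (prior_a + prior_b + heads + tails)

    print(prob_next_heads(1, 0))   # 0.667: rule of succession after one head
    print(prob_next_heads(10, 0))  # 0.917: ten straight heads suggest a trick coin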

You absolutely can predict future progress from past progress. Moore's law, for example, held true for decades after the observation was first made. If you see a technology advancing rapidly, there is no reason to assume it will stop in the near future!
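
As a rough illustration (well-known ballpark figures, and a deliberately naive model): assume transistor counts double every two years and extrapolate from the Intel 4004.

    # Naive exponential extrapolation in the spirit of Moore's law:
    # assume transistor counts double roughly every two years.
    def extrapolate(count_now, years, doubling_time=2.0):
        return count_now * 2 ** (years / doubling_time)

    # The Intel 4004 (1971) had ~2,300 transistors. Projecting 40 years
    # ahead gives ~2.4 billion, the right order of magnitude for 2011 CPUs.
    print(f"{extrapolate(2300, 40):.3g}")  # 2.41e+09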

>I don't really see the value in speculating.

Because everything depends on this prediction. The invention of strong AI would be the most significant event in the history of humanity. It would totally change the world. Or, quite likely, destroy it. Being prepared for it is absolutely necessary.


No, not if you have a strong prior belief that coins in general are fair.
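
Concretely (reusing the toy Beta sketch from above, with made-up prior strengths): a Beta(100, 100) prior encodes a strong belief that the coin is fair, and one observed head barely moves it.

    # Same posterior-predictive update, but with a strong prior that the
    # coin is fair: Beta(100, 100) is sharply peaked at 0.5.
    def prob_next_heads(heads, tails, prior_a, prior_b):
        return (prior_a + heads) / (prior_a + prior_b + heads + tails)

    print(prob_next_heads(1, 0, 1, 1))      # 0.667 under the weak prior
    print(prob_next_heads(1, 0, 100, 100))  # ~0.502: the strong prior barely moves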

By your logic, you can predict the failure of AGI predictions by past failures of (every) AGI prediction.


>No, not if you have a strong prior belief

Why would you have a strong prior belief about the invention of AGI? Now you are claiming to have far more certainty than I am.

>By your logic, you can predict the failure of AGI predictions by past failures of (every) AGI prediction.

This logic is extremely flawed. First, not every prediction was wrong: many people predicted it would happen in 2045, a few in 2030.

Second, there's no reason to assume past predictions reflect the accuracy of future predictions about the same thing. Predictions should get more accurate over time, and early predictions are expected to be wildly wrong.

And third, there's anthropic bias. If they had been right (and strong AI had destroyed us), we wouldn't be here to speculate about it. We can only ever observe negative outcomes, so observing a negative outcome shouldn't update your priors at all.
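
In Bayesian terms (a toy model, resting entirely on the assumption that strong AI's arrival would have removed all observers): conditional on our being here, the "no strong AI yet" observation has probability 1 under every hypothesis, so it can't shift the posterior.

    # Toy model of the selection effect, assuming strong AI's arrival would
    # have removed all observers: every survivor sees "no strong AI yet"
    # with probability 1, under either hypothesis.
    prior = {"early AI was likely": 0.5, "early AI was unlikely": 0.5}
    likelihood = {h: 1.0 for h in prior}  # P(observation | h, we survived)

    evidence = sum(prior[h] * likelihood[h] for h in prior)
    posterior = {h: prior[h] * likelihood[h] / evidence for h in prior}
    print(posterior)  # identical to the prior: the observation taught us nothing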


I was refuting your coin argument, which was pretty ridiculous, if I do say so.

The vast majority of predictions were wrong.

Yes, the logic is flawed; that's why I said it was your logic.


>I don't really see the value in speculating.

There is all kinds of value in speculating. We do it all the time in things like war games: 'If neighbor $x attacked us, what would happen? Would they win? What can we do to prevent it?'

Of course this may hold very little value in your mind right now, but I promise that if and when it happens, you will change your mind quickly.


Do you think it's better to assume we won't invent a smarter-than-human intelligence, and not start preparing for one? Or to assume we might, and prepare for it to happen?


On the face of it we should definitely have a strategy for dealing with strong AI, but with no knowledge of what strong AI will look like, how would we prepare for it? There's ostensibly nothing we can do. Until we make more progress in the field, we can't make any preparations beyond wild speculation. And that is what I see no value in.



