Hacker News

> Humanity will start doing more and more things without understanding why, but because "the computer said so", and things work out when the computer says they will. Black boxes will explode in usage, and God forgive those that may be on the side where The computer says No.

This paragraph scares me. I'm not convinced that technology getting good enough that it doesn't need to be understood in order to be used is a good thing in most cases. Especially when it's a black box.



What should scare you is that it's becoming apparent that procedures are being put in place so that when the 'computer says no', the person affected isn't given the information they need to fix their problem.

Banking computer closed your account. Why? We can't tell you why.

Google or Apple rejected your app from their app store. Why? They won't tell you why.

Me trying to get my prescription refilled at Walgreens. Walgreens' computer shows they never got it from the doctor. The doctor sent it 5 times. Turned out the computer was marking it invalid each time. The pharmacist isn't allowed to tell either me or my doctor why.


They will not tell you, not out of malice, but because they don't know. It's literally a black box, and we don't understand why it does what it does.

And it doesn't take a deep CNN to become a black box - any sufficiently complex algorithm will do. I doubt anybody in the world understands all the details of how the latest CPU actually works, for example.


If we can't ask the AI to explain how it came to its conclusion it is not really intelligent, is it?

You would not believe a human is intelligent if he just tells you what to do without being able to explain why, would you?


This is the Fifth Law of Robotics: A robot shall be able to explain how it reached any decision. And there is the Fourth Law of Robotics: No robot shall have access to means of unlimited self-reproduction.


> If we can't ask the AI to explain how it came to its conclusion it is not really intelligent, is it?

Will it matter whether the AI is "really" intelligent when companies that implement the black box to spec thrive, while the companies that insist on having a human understand everything fall behind and fail?


We do that all the time with humans! “Having a hunch”, “trusting years of experience”.

Or think of the “uncanny valley” in 3D graphics -- most people can tell when a rendered image looks slightly off somehow, but most people can’t pinpoint the exact problems. “Something about the eyes just isn’t quite right”, etc.


> most people can tell when a rendered image looks slightly off somehow, but most people can’t pinpoint the exact problems.

But experienced people can. I've worked at a 3D animation studio, and we had this 'old guy' who started his career as a classically trained artist and sculptor. He could look at a character model that we all agreed felt a bit 'off' and point out that the problem was how you'd modeled the muscles connecting the shoulder to the arm. Hell, sometimes he'd look at a model that we all felt was fine and suggest tweaking the bridge of the nose, and that made it much better.


Pick ten random people that you know. How many of them can explain to you how their Facebook post gets to grandma?


True, but they did not decide how to do it and did not implement it. I would expect the Facebook engineer who implemented it to be able to explain it to me, at least the part he worked on.


Isn't that exactly what happened with those UK Post Office workers?


That is precisely what happened with navigation skills. Just punch it all into your GPS and listen for the instructions from a nicely speaking robot.


And drive right into a lake because the GPS told you to.



