
It is a red herring because:

Firstly, it assumes Volvo is not sharing the parameters of the system. It seems unlikely that Uber is installing an automated driving system into these cars without the cooperation of Volvo, especially with the agreement to ultimately get 24,000 autonomous-system-equipped cars from them.

Secondly, if Uber could instead determine what it wants to know about the parameters by testing, then the question is irrelevant, as are the semantics.

Thirdly, it is presumably safe for humans to drive cars without knowing the exact parameters, so the same should not present any particular problem for an autonomous system. If the emergency brakes are triggered, it is likely a situation in which braking is the right thing to happen, possibly because the autonomous system itself has failed. Just as with human drivers, an autonomous system is expected to usually stay within the envelope of the emergency system simply by driving correctly, without any reference to those parameters. For example, if the emergency brakes come on to stop the car from hitting a pedestrian because the autonomous system failed to correctly identify the danger, what difference would knowing the exact parameters of the emergency braking system have made?

Lastly, the road is an environment with a lot of uncertainty and unpredictability. If the system is so fragile that the tiny amount of uncertainty of not knowing the exact parameters of the automatic braking system raises safety concerns, then it is nowhere near being a system with the flexibility to drive safely.

It is possible that a competent autonomous driving system might supplant the current emergency braking system, in which case the way to proceed is to demonstrate it in the way I outlined in the last paragraph of my previous post.



Thanks for answering in so much detail - I think the last two points make a compelling case for not disabling the system, even in the true black box case, and the first two are very compelling in the real world, even if they don't apply to the thought experiment of an actual black box. You've broadly changed my mind on this issue :)


I should have said that your concern is valid where two systems might issue conflicting commands that could create a dangerous situation; I just don't see that as likely in this particular combination of systems.



