
Uber's system famously couldn't decide if someone pushing a bicycle was a pedestrian or a cyclist. It flip-flopped for a while before resolving the dichotomy by running her over.


I'm not familiar with the instance you're referring to, but at face value I don't understand the problem. It doesn't matter if it's a pedestrian or a cyclist—in both cases, the car should avoid crashing into them.


It shouldn't matter. But the system was designed to track the trajectories of recognised objects, so every time it changed its mind about what it was looking at, it started recalculating the trajectory from scratch. By the time it settled down, it was too late to avoid the fatal accident.
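
To make the failure mode concrete, here's a minimal sketch (hypothetical names and simplified 1-D positions, not Uber's actual code) of a tracker that discards its motion history whenever the classifier changes its label. With a flip-flopping classification, the velocity estimate never converges, so a steadily crossing pedestrian looks stationary:

  from dataclasses import dataclass, field

  @dataclass
  class Track:
      label: str
      positions: list = field(default_factory=list)  # (t, x) samples

      def update(self, t, x, label):
          if label != self.label:       # reclassified: history is thrown away
              self.label = label
              self.positions = [(t, x)]
          else:
              self.positions.append((t, x))

      def velocity(self):
          # Needs two samples under a stable label to estimate motion.
          if len(self.positions) < 2:
              return 0.0
          (t0, x0), (t1, x1) = self.positions[0], self.positions[-1]
          return (x1 - x0) / (t1 - t0)

  # A pedestrian crossing at 1 m/s, but the classifier keeps changing its mind:
  track = Track(label="unknown")
  for t, label in enumerate(["vehicle", "bicycle", "other", "bicycle"]):
      track.update(t, float(t), label)
      print(t, label, track.velocity())  # velocity stays 0.0 on every frame

Every reclassification wipes the samples, so the planner never sees a crossing trajectory until it's too late.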

Don't assume that self-driving software is designed with even a minimal level of competence.

Footnote: there have been many articles about this incident posted to HN. Here's one: https://news.ycombinator.com/item?id=19333239


The AI failed to classify what it saw as an obstacle to be avoided because it hadn't been trained on that particular case.


More precisely, it wasn't programmed to recognize confusing objects as a hazard.
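
A defensive fallback, sketched below under assumed names (this isn't the real planner logic), would treat anything with an unstable or low-confidence classification as a hazard to slow for, rather than waiting for a confident label:

  def is_hazard(label_history, confidence, threshold=0.8):
      # Hypothetical rule: brake for anything we can't classify stably
      # and confidently, instead of deferring until the label settles.
      unstable = len(set(label_history[-5:])) > 1
      return unstable or confidence < threshold

  print(is_hazard(["vehicle", "bicycle", "other"], confidence=0.9))  # True
  print(is_hazard(["pedestrian"] * 5, confidence=0.95))              # False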



