The exact circumstances of the struck pedestrian likely violated many [possibly crude] Bayesian priors for what to consider a positive collision threat: a rogue pedestrian at an unflagged portion of road, at an odd time of night, on a road that doesn't usually have pedestrians, with a bike moving perpendicular across the road (instead of parallel along it). Each of these factors had presumably been learned as a signal with a high false-positive rate.
The product of these rarely encountered events (even treated independently) would have let a higher-level algorithm, via Bayesian inference, score the approaching unverified object below the threshold of a reasonable expectation of a human collision.
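To make the compounding concrete, here's a rough naive-Bayes-style sketch of how several individually weak signals could multiply a threat score down below a braking threshold. All numbers, feature names, and the 0.2 threshold are invented for illustration; this is not Uber's actual pipeline.

```python
# Hypothetical naive-Bayes-style threat scoring: each observed feature
# contributes a likelihood ratio, and the posterior odds of "this is a
# pedestrian we are about to hit" are the prior odds times their product.

def posterior_threat_probability(prior, likelihood_ratios):
    """Combine a prior probability with per-feature likelihood ratios
    (P(feature | real threat) / P(feature | no threat)) via Bayes' rule."""
    odds = prior / (1.0 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)

# Features from this incident, each one "learned" (hypothetically) to be a
# weak indicator of a genuine collision threat, i.e. likelihood ratio < 1:
likelihood_ratios = {
    "unflagged mid-block crossing": 0.4,
    "late night on a road with few pedestrians": 0.3,
    "object moving perpendicular across the road with a bike": 0.5,
}

p = posterior_threat_probability(prior=0.5, likelihood_ratios=likelihood_ratios.values())
print(f"scored threat probability: {p:.3f}")  # ~0.057, under a hypothetical 0.2 brake threshold
```

Each factor alone only dents the score, but their product pushes the object below the point where the planner treats it as a likely human collision.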
In a way, this system should never be expected to become more reckless or feckless than even top human drivers in the long run: close calls of this nature should be fed back into the system (perhaps deliberately, through staged QA?) and added to the model of collision-threat identification.
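A minimal sketch of that feedback loop, under the assumption that a "close call" can be operationalized as a minimum time-to-collision falling below some margin while the classifier still scored the object as low threat (all names and thresholds here are hypothetical):

```python
from dataclasses import dataclass

NEAR_MISS_TTC_SECONDS = 2.0  # assumed threshold for "close call"

@dataclass
class Encounter:
    track_id: str
    min_time_to_collision: float  # seconds, minimum over the life of the track
    scored_threat: float          # what the classifier believed at the time

def flag_for_review(encounters):
    """Return encounters the vehicle nearly hit but scored as low threat --
    exactly the cases worth labeling and folding back into the
    collision-threat-identification model."""
    return [e for e in encounters
            if e.min_time_to_collision < NEAR_MISS_TTC_SECONDS
            and e.scored_threat < 0.5]
```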
Should the car always respond by slowing down as the mandatory response when there is an object in front?
Are you implying that the algorithm would conclude that, because it's unlikely for there to be an object in that circumstance, it is OK to discount its own observation that there is something in front of the car? That sounds utterly nonsensical.
Exactly. I'd expect collision-prevention to have absolute priority over any other system. It should not be possible for any logic bug or combination of environmental factors to make the car run into a large object in its field of vision. There's no need for statistics here--an overriding 'don't run into stuff' rule will suffice.
If there's truly no possible route to avoid some collision and therefore the car needs to decide which collision is preferred, then the statistics can kick in, but in this Uber scenario, it shouldn't have gotten that far. From the outside, it seems like a fundamental design issue.
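For what it's worth, the override being described doesn't need to be sophisticated; something like the sketch below (speeds, margins, and the size cutoff are all assumptions, not anyone's real planner) captures the idea: the check runs before any statistical scoring, and nothing can veto it.

```python
# "Don't run into stuff": if anything large is in the planned path within
# (a margin over) braking distance, brake, regardless of what the classifier
# thinks the object is or how it was scored.

def braking_distance(speed_mps, decel_mps2=6.0, reaction_s=0.3):
    """Distance needed to stop from the current speed (simple kinematics)."""
    return speed_mps * reaction_s + speed_mps ** 2 / (2.0 * decel_mps2)

def control_decision(speed_mps, obstacles):
    """obstacles: list of (distance_m, in_path, size_m) tuples from perception."""
    for distance_m, in_path, size_m in obstacles:
        if in_path and size_m > 0.5 and distance_m < 1.5 * braking_distance(speed_mps):
            return "EMERGENCY_BRAKE"  # unconditional: no threat score consulted
    return "CONTINUE"                 # normal statistical planning handles the rest

print(control_decision(17.0, [(40.0, True, 1.2)]))  # ~38 mph, object 40 m ahead -> EMERGENCY_BRAKE
```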
> I'd expect collision-prevention to have absolute priority ... to [not] make the car run into a large object in its field of vision.
This is a good point.
But I think we've all placed our "toes over the curb" with a car hurtling towards us, in preparation to cross the street as soon as the car has passed. No system can stop for every instance of this, nor can it recover past the point of no return created each time this happens.
What constitutes "the curb" is subjective when it's not a raised sidewalk. Surely you don't have to be "on the grass" beside a multilane high-speed road; merely being somewhere on the shoulder should count as expected-pedestrian territory. But that (may!) also imply you're an 'awaiting street crosser', not an 'engaged street crosser'. That's where the high-level stats come in.
The toes-over-the-curb thing is a potential collision threat. It's fine to statistically categorize and prioritize these, as they're everywhere. But as soon as it becomes an imminent threat (i.e. is directly in the vehicle's path), avoiding the collision should immediately become the overriding objective. If the collision is truly unavoidable (which should very rarely be the case with an AV), then it should still be slamming on the brakes and doing everything possible to lessen the force of the impact.
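A tiny sketch of that two-tier distinction (thresholds are illustrative guesses): potential threats stay in the statistical ranking, but anything whose predicted path crosses ours inside the stopping envelope is promoted to imminent, and braking becomes unconditional, even if the impact can only be lessened rather than avoided.

```python
from enum import Enum

class ThreatLevel(Enum):
    NONE = 0
    POTENTIAL = 1  # e.g. toes over the curb: keep tracking and scoring
    IMMINENT = 2   # in our path inside the stopping envelope: brake now

def classify_threat(in_our_path, time_to_collision_s, lateral_margin_m):
    """Illustrative promotion rule from 'potential' to 'imminent'."""
    if in_our_path and time_to_collision_s < 3.0:
        return ThreatLevel.IMMINENT
    if lateral_margin_m < 2.0:
        return ThreatLevel.POTENTIAL
    return ThreatLevel.NONE
```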
In many of the cities where Uber would like to deploy self-driving cars, a "rogue pedestrian" crossing where there is no marked crosswalk would be at least a multiple-times-per-day event. Any system trained to treat that as an ultra-rare anomaly would be a killing machine if unleashed in such a city.
As much as "rogue pedestrian" seems like blaming the victim, if you actually did go "rogue" at any truck stop in this country, I suspect you would not survive one night. Truck stops and Times Square are two ends of a spectrum of expectations for erratic pedestrian behavior.
I want the system to account for this difference (and I expect it already does). I also suspect the model that applies this pedestrian-expectation modelling is imperfect and could use improvement. I don't know how to make it perfect, or whether that's even a reasonable goal.
That is the classic self-driving tradeoffs question, no?
I believe that rather than absolutes, the issue here is that a serious, injury-causing collision threat was missed. Regardless of whether it was identified as human, it was still someone pushing a shopping cart full of bags across a highway intersection, and that could have been detected. Even if it wasn't human, and even though the crossing was totally dangerous and improper at night on a busy highway, the car should have stopped, if only for the driver's own safety.