Uber's system famously couldn't decide if someone pushing a bicycle was a pedestrian or a cyclist. It flip-flopped for a while before resolving the dichotomy by running her over.
I'm not familiar with the instance you're referring to, but at face value I don't understand the problem. It doesn't matter if it's a pedestrian or a cyclist—in both cases, the car should avoid crashing into them.
It shouldn't matter. But the system was designed to track the trajectories of recognised objects, so every time it changed its mind about what it was looking at, it discarded the tracking history and started recalculating the trajectory from scratch. By the time the classification settled, it was too late to avoid the fatal collision.
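A minimal sketch of that failure mode, assuming (hypothetically) a tracker that keys its motion history to the object's classification label and wipes the history whenever the label changes. The `Track` class and its behaviour here are illustrative, not Uber's actual code:

```python
from dataclasses import dataclass, field

@dataclass
class Track:
    """Motion history for one detected object."""
    label: str
    positions: list = field(default_factory=list)

    def update(self, label, pos):
        # The flawed design: a reclassification discards the motion
        # history, so velocity estimation restarts from scratch.
        if label != self.label:
            self.label = label
            self.positions = []
        self.positions.append(pos)

    def velocity(self):
        # Need at least two observations since the last
        # reclassification to estimate a trajectory at all.
        if len(self.positions) < 2:
            return None
        (x0, y0), (x1, y1) = self.positions[-2], self.positions[-1]
        return (x1 - x0, y1 - y0)

track = Track(label="unknown")
# The object moves steadily, but the classifier flip-flops each frame,
# so the tracker never accumulates two points under one label.
for t, label in enumerate(["bicycle", "pedestrian", "bicycle", "pedestrian"]):
    track.update(label, (t, 2 * t))
    print(t, label, track.velocity())  # velocity stays None every frame
```

With a stable label the same tracker produces a velocity after two frames; under oscillating labels it never does, so the planner downstream never gets a crossing trajectory to react to.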
Don't assume that self-driving software is designed with even a minimal level of competence.