True, but still, the way I drive is different when there's a little child on the side of the road versus an adult, since there's a higher expectation of the child doing something dumb and running into the road. So the system not being able to differentiate between the two isn't great.
I mean, Tesla gets away with straight up not even recognizing children [1] and running them down in crosswalks in high-visibility vests, as demonstrated publicly over a year ago. Over a year later, they still have not fixed it or pulled it [2].
Why bother with safety when a criminal disregard for safety makes you the world’s largest car company and the world’s richest person?
> According to one safety memo, Cruise began operating fewer driverless cars during daytime hours to avoid encountering children, a move it deemed effective at mitigating the overall risk without fixing the underlying technical problem. In August, Cruise announced the cuts to daytime ride operations in San Francisco but made no mention of its attempt to lower risk to local children.
Edit: looks like Cruise is not long for this world. Insiders seem to have lost faith and are starting to leak. I'd bet it simply cannot withstand any serious scrutiny into its safety levels, practices, and culture.
For hackers, the money quote, IMO, is the one about competing technical cultures within software[^1]:
> Koopman, who has a long career working on AV safety, faulted the data-driven culture of machine learning that leads tech companies to contemplate hazards only after they’ve encountered them, rather than before.
I haven't yet figured out how to effectively and efficiently communicate this mindset shift to folks educated primarily in ML culture. I am not sure I ever will. The closest I've come to an elevator pitch of the mindset shift is something like: "when human lives are on the line and you're taking an absolutely massive number of samples IRL, doesn't it make sense to stop thinking in terms of analysis/probability and start thinking in terms of topology/nondeterminism? I.e., when you sample A LOT, unlikely shit happens. Manageable risks if you're selling ads, I suppose. But not so acceptable if you're deciding whether a giant piece of unforgiving steel saw a bollard or your daughter."
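To make the "sample A LOT" point concrete, here's a minimal Python sketch of the arithmetic; the per-mile hazard probability is an assumed, illustrative number, not anyone's measured rate:

    # Probability of seeing at least one "unlikely" event over many samples.
    p_per_mile = 1e-7  # assumed one-in-ten-million hazard per mile (illustrative)
    for miles in (1_000, 1_000_000, 100_000_000):
        p_at_least_once = 1 - (1 - p_per_mile) ** miles
        print(f"{miles:>11,} miles: P(>=1 event) = {p_at_least_once:.5f}")
    # ->       1,000 miles: 0.00010
    # ->   1,000,000 miles: 0.09516
    # -> 100,000,000 miles: 0.99995

At fleet scale, "negligible" becomes "near-certain", which is why worst-case (topological) reasoning has to replace average-case (probabilistic) reasoning.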
[^1]: I read the quote-block in your post and immediately thought "Koopman". Then read the article and, sure enough, they're quoting Koopman. What a tragedy. The message was not only out there, but out there so loudly that the vague shape of the warning has a particular person's name attached to it in my mind. Yet, here we are.
Autonomous car devs seem to lack the mindset that "things that never happened before happen all the time".
No matter how many miles your car drives and how much data it collects, it will encounter a novel situation on the road. Unless it has higher levels of context, overriding safeguards, etc., a data-driven-only ML approach is going to fail dangerously.
One favorite example is the year-old video of Tesla FSD attempting an unprotected left turn through an oncoming trolley car while the center display 3D-rendered the trolley car in motion. Clearly there is no overriding safety guardrail model above the path-finding model. If the car can 3D-render the object, it is aware the object exists.
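A minimal sketch of what such a guardrail could look like: an independent check above the planner that vetoes any trajectory intersecting an object the perception stack is already tracking. All names, types, and thresholds here are hypothetical illustrations, not any vendor's actual architecture:

    from dataclasses import dataclass

    @dataclass
    class TrackedObject:
        x: float   # position in the vehicle frame, meters (assumed convention)
        y: float
        vx: float  # velocity, m/s
        vy: float

    def guardrail_veto(planned_path, objects, horizon_s=3.0, margin_m=2.0):
        """Veto (return True) if the planned path comes within margin_m of
        any tracked object's constant-velocity prediction. Runs independently
        of the planner: if perception can render it, the plan must not hit it."""
        for obj in objects:
            for t, (px, py) in planned_path:  # (time_s, (x_m, y_m)) waypoints
                if t > horizon_s:
                    break
                ox = obj.x + obj.vx * t       # naive straight-line prediction
                oy = obj.y + obj.vy * t
                if (ox - px) ** 2 + (oy - py) ** 2 < margin_m ** 2:
                    return True               # hand off to emergency braking
        return False

    # A trolley 15 m ahead closing at 10 m/s, vs. a left turn that crosses
    # its lane about a second from now:
    trolley = TrackedObject(x=15.0, y=0.0, vx=-10.0, vy=0.0)
    path = [(0.5, (2.0, 0.5)), (1.0, (4.0, 1.5)), (1.5, (6.0, 3.0))]
    print(guardrail_veto(path, [trolley]))  # True: abort the turn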
And so we go on being perpetually "five years away" from self driving.
This is a common counter-argument, and it sounds reasonable on its face, and is even true in the limit. But we're not anywhere close to the limit. Reminding the dear reader of the actual context clears this point up quickly:
>> One favorite example is the year-old video of Tesla FSD attempting an unprotected left turn through an oncoming trolley car while the center display 3D-rendered the trolley car in motion.
We're not asking for Nirvana. We're asking for not throwing up one's hands and declaring that perfect CV is impossible as one mows down pedestrians and trolley cars that one's CV system is clearly capable of identifying. For example.
No. The "effectively and efficiently" is important. It's not like you can't get the point across. It's just hard to do without lots of communication effort.
(In particular: self-driving car companies are already highly motivated to not kill children.)
There were clear signs that Cruise was rushing things with their expansion, but I didn’t know the performance was this bad. Based on developments from the past few days [1], their safety culture seems to be closer to Tesla’s than it is to Waymo’s.
I saw numerous Reddit posts of self-driving cars causing mayhem. Every single one of them was a Cruise car. Waymo is doing something right. If the Cruise fiasco forces more regulation, that's only a good thing. Hopefully Waymo passes them all.
Your comment makes it seem like a bad thing. Waymo has the right approach: only applying their technology where they are confident it will not cause issues.
If a (usually lead) construction engineer notices (or is told about) a fault/design issue/problem, knows that it is dangerous, and does nothing about it, then something bad happens because of that, in many cases and many countries, the engineer ends up personally responsible for the outcome.
This should be extended to other industries as well.
It’s all engineering except “software engineering”. You sign it, you own it. The senior technical leads who knew about this should face legal repercussions.
It happens routinely in my industry (rail), which is primarily civil, electrical and systems engineering.
In all engineering degrees it is impressed upon you that you are responsible for your work and if you behave negligently or outside your competency you can be held accountable for the consequences of your actions.
Same with pure electrical... both at technician level (which is relatively simple: just a few standards to keep up with, choosing the right wires and devices) and at pure engineering level, where you have a lot of formulas instead of tables. If you miscalculate something when connecting something huge, you can bring the whole grid down, stuff can burn or explode, people can get hurt, and you can be held personally responsible... and that goes doubly so if you've been warned in advance and knew that there was an issue.
After hearing about how often their cars need a human to intervene, I started wondering about that incident where a Cruise taxi ran over a person that had been thrown into its path, stopped, and then started up again to move to the curb.[1]
Did the car stop, notify the mothership, and then have a person direct it to pull out of the flow of traffic? How would we know? If the car moved on its own, that's bad. If the car moved after contacting a "remote assistant agent", that's bad, too.
It sounds more like a path prediction problem than one of not seeing them at all. I'm guessing they expected it to still handle them as well as a human driver would, even if it relied on the adult pedestrian logic.
I have doubts as to this being the right choice, but it seems less unreasonable than what I thought from the title.
They were not able to recognize them as kids rather than generic humans. Kids behave differently and more erratically.
The details are not that important. The main point is that they were aware they couldn't match human performance on kid safety, yet still put their cars on the street, all while talking about the terrible toll of car crashes on children.
> Two months ago, Kyle Vogt, the chief executive of Cruise, choked up as he recounted how a driver had killed a 4-year-old girl in a stroller at a San Francisco intersection. “It barely made the news,” he said, pausing to collect himself. “Sorry. I get emotional.”
It's much more likely that he believes he's doing the right thing. It seems from this thread that they did make adjustments to avoid kid-related accidents, so it does check out as a "temporarily displaced savior" rather than just a bland, run-of-the-mill faker or self-deluded psycho.
"Our cars could see the kids; they just chose to run into them anyways" is still pretty unreasonable. Relying on tests for one scenario and assuming it works for another is a pretty good way to...well, make cars that hit people, I guess.
For all the hopes that people wouldn't need to drive in a few years and that kids wouldn't need to get driver's licenses, seems pretty unlikely at this point unless you never leave a city with good public transit.
This will sound patronizing, but I ask this earnestly - how many ML pushers in this space actually drive?
I say this because in my personal tech circle, the guys I know who are most convinced autonomous cars are around the corner are literally the guys who do not drive. One is unlicensed and within a decade of retirement, convinced he'll be able to use self-driving cars by (early) retirement so he doesn't have to learn.
There are a ton of (or at least some) people who live in cities post-undergrad and are comfortable continuing some combination of a city-centric life, leaning on friends/partners a bit, and using Uber.
I literally could not have taken my first engineering job had I not had a car and been able to drive--and, indeed, would not have been able to go on many interviews. Had to go to remote job sites and, in any case, would have been that weird guy if I couldn't just rent a car and drive.
I could live in a city today and mostly remote work but I'd be very restricted in who I could reasonably visit and the activities I could partake in both locally and on vacation.
On the coasts it's a weird reverse class signifier in a way.
Only someone of enough means to live in a HCOL city and afford to pay for Ubers could survive into their 40s with no ability to drive.
Knowing how to drive even in a city, because you might occasionally need to, is a far more normie middle-class thing.
I'm on a work trip in a HCOL city running $80/day in Ubers. If I were at a more cheapskate company, they'd probably have made me rent a Hertz and drive myself for the week rather than blow $500 on Ubers. Knowing I'd never be put in that situation by an employer is another class signifier.
Of course, in the scheme of things, you're maybe spending a couple hundred dollars more on the Ubers? That's nothing for many (as you say higher class) positions. I spend around $400 round trip on a car to the airport on my home end. Parking is expensive and I've also had issues.
Some combination of prices and convenience have in my experience shifted the norms though. I'd have reflexively rented a car in past times for a routine suburban business trip by air. Today, I'll spend at least a few minutes considering whether I sort of need a car for convenience/personal activities or whether Ubers would just be prohibitively expensive.
I'd add that the inability to drive closes off a lot of options. I regularly do a lot of personal activities that wouldn't really be practical without being able to drive. It's not so much urban but rural activities.
> “Our driverless operations have always performed higher than a human benchmark, and we constantly evaluate and mitigate new risks to continuously improve,” said Erik Moser, Cruise’s director of communications. “We have the lowest risk tolerance for contact with children and treat them with the highest safety priority. No vehicle — human operated or autonomous — will have zero risk of collision.”
Notice that this contains nothing of substance: no "human benchmark" is actually defined; nobody was asking if they constantly evaluate risks (do they also remember to breathe in and out every few seconds?); "the lowest risk tolerance" is meaningless, as is "the highest safety priority"; and obviously no vehicle will have "zero risk of collision", just as no vehicle has zero risk of being hit by a meteorite.
I was kind of shocked in 2019 to see those cars driving around SF. Most jurisdictions would never allow this in a million years.
Why would you allow some tech company to develop its technology at the risk of pedestrian safety? What benefit does this have to the people living in these neighborhoods?
They just do extremely long commutes from parts very far east. They're already in the car and already used to driving for a long time, may as well add Visalia to SF.
(Visalia is especially far, but I had a driver in SF tell me once that's where he drove in from.)
The vehicles seem to be incredibly safe when compared to human driven vehicles, so what’s the problem? Cruise has caused zero deaths. In that same time, 144 people have been killed by vehicles in San Francisco.
Low sample sizes lead to funny math. There are 400 Cruise vehicles in SF and 400,000 non-Cruise vehicles in SF.
LA Times re: SF: "On average, 30 people die and more than 500 are severely injured in the city's streets each year"
Cruise severely injured 1 pedestrian this year, so it sounds like it's at least 2x worse than human drivers: there are 1000x more human-driven vehicles, yet they've only caused 500x the severe injuries. When Cruise causes its first fatality it will immediately be 33x worse.
A person playing Russian Roulette for 1 round successfully may extrapolate that there is zero risk. A person who has played 1000 rounds doesn't exist.
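For what it's worth, the arithmetic is easy to check. A back-of-the-envelope sketch using only the figures quoted in this thread (and noting that per-vehicle counts are a crude stand-in for per-mile exposure):

    cruise_fleet, human_fleet = 400, 400_000      # vehicles in SF (quoted above)
    cruise_injuries, human_injuries = 1, 500      # severe injuries/year (quoted)
    cruise_rate = cruise_injuries / cruise_fleet  # per vehicle-year
    human_rate = human_injuries / human_fleet
    print(cruise_rate / human_rate)               # 2.0, i.e. ~2x the human rate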
It’s incorrect to say that Cruise severely injured a pedestrian. The pedestrian was hit by a human driver (who then fled the scene). The first collision knocked the pedestrian into the path of the Cruise vehicle. The Cruise vehicle tried to swerve to avoid the pedestrian, then slammed on the brakes and stopped within 20ft. A human driver in the same situation would have done the same or worse.
This is completely false. The Cruise vehicle stopped, then started again and dragged the woman under it. The majority of her injuries resulted from the dragging, which is entirely Cruise's fault.
Incorrect. The pedestrian was dragged for 20 ft because that’s how long the car took to stop. The slower the car had been to react, the farther the pedestrian would have been dragged.
And humans get confused and have people pinned under their cars all the time. Nobody wants to continue running over a person (even if it’s the best course of action in hindsight), and they often get out of the vehicle to check before even noticing that someone is underneath.
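As a rough sanity check on the 20 ft figure, a hard stop from typical city speed covers about that distance. The numbers below are assumed for illustration, not taken from the incident report:

    v = 8.9               # m/s, ~20 mph (assumed speed)
    a = 6.5               # m/s^2, strong braking on dry pavement (assumed)
    d = v ** 2 / (2 * a)  # kinematic stopping distance
    print(d, d * 3.28)    # ~6.1 m, ~20 ft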
Not what some sources have said:
"State officials claimed that Cruise showed the DMV and California Highway Patrol onboard video of its car braking hard and stopping. But, Cruise failed to share footage showing the pedestrian being dragged as the car maneuvered to the side of the road, according to the state."
https://www.ktvu.com/news/cruise-failed-to-share-footage-of-...
Curious why they didn't share all the video if they think they did such an excellent job?
Cruise said they shared all the video from the crash with government agencies including the DMV, and the DMV says Cruise didn't because they managed to get an additional video from another government agency. So how could Cruise be hiding the video if another government agency has it? It seems much more likely that Cruise shared everything and there was some mix-up with the California DMV. I've encountered that situation many times. The most egregious example was them fining me for not paying registration fees on a vehicle I had sold over five years earlier.
I know there's nothing that will ever convince you that autonomous vehicles are safe, but please think about what safe autonomous vehicles would look like if they were rolled out. These sorts of stories get tons of eyeballs, so journalists are trying to find any example of an autonomous system harming people. Even better if you can blame tech companies for being negligent or irresponsible. So every single incident, even if it's caused by a psychopath hit-and-run driver, is blamed on the machine. And every detail is exaggerated and misconstrued to make the machine (and the manufacturer) seem as terrible as possible. So the stories simply don't matter. The figure of merit is fatalities per passenger mile. Currently, Cruise's is zero.
Cruise employees are riding in these vehicles, as are their family members. They walk near these vehicles every day. If they actually believed they were unsafe, they'd deploy the vehicles far away from their homes.
> The vehicles seem to be incredibly safe when compared to human driven vehicles
Well, according to the article they are often human-driven.
Anyhow, Caltrans estimates 8.8 million miles traveled by vehicles in SF daily. That's 6.4 billion miles during the 2-year period that Cruise did less than 700,000 miles.
In California there are 1.38 deaths per 100,000,000 miles traveled. Cruise has done ~1% of the mileage in which one would expect a death to occur, which doesn't seem like enough of a sample to declare a comparative safety advantage.
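A sketch of that expectation, using the figures above:

    deaths_per_100m_miles = 1.38   # California average (quoted above)
    cruise_miles = 700_000         # Cruise's driverless mileage (quoted above)
    expected_deaths = deaths_per_100m_miles * cruise_miles / 100_000_000
    print(expected_deaths)         # ~0.0097

Even an average human fleet would be expected to cause only ~0.01 deaths over this mileage, so "zero deaths so far" carries almost no statistical information.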
We can't reasonably measure the safety of individuals until they do something concrete, like get caught driving recklessly/drunk/distracted. If a company endangers people because they wanted to hit their timeline or save face or whatever, that's pure evil. They're simply incomparable.
We can address both things. The "but people are statistically worse" argument is a horrible, thoughtless "gotcha" in the form of whataboutism that makes light of people's lives.
A small percentage of people drive like shit in concrete ways all the time. Not signaling, driving too fast for the conditions, aggressive merges, cutting people off on turns.
It's just that nobody cares about enforcing low-level violations, and we just let them keep trying, keep rolling the dice, until their lack of consideration for other people kills someone.
Which makes it doubly curious that tech companies seem to have so much influence on the SF municipality, yet do not use it to fix the city's other issues, like homelessness and just shocking levels of petty crime. Their employees live there, after all. They live there.
But maybe it aligns with their vision of a free-for-all libertarian paradise, I am not sure.
This is the wider problem of AI in general. It is right some percent of the time. You can only use AI in situations where being wrong randomly, without anyone to sue or blame, is OK.
Why people keep trying to use AI to drive cars, I have no idea. It’s like starting with the hardest of hard problems immediately. Fool’s errand; if you work in self-driving cars, get out ASAP while the rest of AI is still hot and you can get paid. Because the other stuff is on the cusp of the trough of despair, given the increasingly desperate emails I get from salesmen at XYZ.ai startups.
If it follows the trajectory chess AI took, it will seem pathetic for quite some time, right up until the moment it outperforms every human on the planet by a wide margin.
Isn't it also like Go, though, where AI changes the game, and once humans adapt there are adversarial methods that AIs trained on human players fail to understand?
So AI entering a space changes a space, and humans respond faster to that change.
I think this is an underconsidered part of autonomous cars. As soon as every vehicle in Manhattan is autonomous, people will bully them into sitting idle at green lights while walkers, bikers, scooters and everyone else flagrantly jaywalk in front of them.
With human-on-human interaction there is a chance the driver will try to kill you, or you feel guilt at inconveniencing them. With an AI trained to drive safely, the safest thing to do will be to accept being bullied.
A similar thing will happen in a mixed human & autonomous environment where human drivers will abuse the AI drivers.
>As soon as every vehicle in Manhattan is autonomous, people will bully them into sitting idle at green lights while walkers, bikers, scooters and everyone else flagrantly jaywalk in front of them.
Sounds like pretty much how things work now so long as anything has paused the road traffic--which is all the time.
Yes, but the risk of being killed or not wanting to get honked at eventually makes pedestrians abide by the crosswalk signs on average, even if they jump into the crosswalk early and linger in it too late.
Knowing there is no risk of a deranged or distracted driver hitting pedestrians will make them even more aggressive. People are underestimating how impatient a bunch of midtown commuters will be when they know there's nothing preventing them from continuing their multi-block walk completely uninhibited by traffic signals.