
I could see something like this being their logic - maybe not with neural networks/machine learning specifically, but certainly "the only way to get to where we want to go is to do this".

My counter-counter-point would be that there are plenty of other companies doing this more safely, and also that the ends don't justify the means when those means involve killing pedestrians.



Those other companies are rapidly going bankrupt because the economics of doing it the non-Tesla way seem impossible.

Zoox was bought by Amazon for $1 billion, which sounds like a lot, but that was roughly the amount of money invested into the company, so it was effectively sold at cost to Amazon.

argo.ai just shut down. VW and Ford spent several billion dollars on it.

drive.ai shut down and was acqui-hired by Apple for its car project, which was reportedly just pushed to 2026.

Aurora is publicly traded and on the ropes, reportedly trying to find a buyer before it runs out of cash.

We'll see how long GM and Google will be willing to keep putting ~$2 billion a year into Cruise / Waymo. I don't see either generating significant revenue any time soon.

Tesla and comma.ai have a model where they make money while making progress. Everyone else just burns unholy amounts of capital and that can last only so long.


So we're arguing it's better to offer an FSD that crashes than to go bankrupt, because maybe one day it won't crash?


No, I'm arguing that Waymo, Cruise, and others following a similar strategy will go bankrupt before delivering a working product, and that Tesla / comma.ai won't.

As to crashes: the disengagement part of your rebuttal implies that Waymo / Cruise are perfectly safe.

Which they are not.

FSD has been deployed on 160 thousand cars. No fatalities so far. No major crashes.

Cruise has 30 cars in San Francisco and you get this:

> Driverless Cruise robotaxis stop working simultaneously, blocking San Francisco street

> Cruise robotaxis blocked traffic for hours on this San Francisco street

Another Cruise robotaxi stopped in the muni lane.

A Waymo car also stopped in the middle of the road.

Neither FSD, Cruise, nor Waymo has had fatalities.

They all had cases of bad driving.

This is not Safe-but-will-go-bankrupt vs. not-safe-but-won't-go-bankrupt.

It's: both approaches are unsafe today, but one has a path to becoming safe eventually and the other doesn't, if only because of the economic reality of spending $2 billion a year with no line of sight to breaking even.

https://www.theverge.com/2022/7/1/23191045/cruise-robotaxis-...

https://techcrunch.com/2022/06/30/cruise-robotaxis-blocked-t...


> As to crashes: the disengagement part of your rebuttal implies that Waymo / Cruise are perfectly safe.

I didn't mean to suggest that. I was responding to your words here:

> Tesla and comma.ai have a model where they make money while making progress.

I'm saying that it's not OK for a car company to keep going with dangerous self-driving just because it can afford to.

> FSD has been deployed on 160 thousand cars. No fatalities so far. No major crashes.

That doesn't seem to be the case[1], though now we're going to squabble about definitions of "major" and about how this reporting happens.

[1] https://www.latimes.com/business/story/2022-07-14/elon-musk-...


It's worse than that in my reading: the argument is entirely neutral on crashes; the only metric of success presented is not going out of business!


That's how we got cars, planes, medicine, bridges, and ... almost everything.

We can't wait for perfection. The question is how much risk are we willing to absorb.


If someone told you that they were going to revolutionize bridge building, but it was going to take a bunch of catastrophes to get there, how would you feel about it?


The fact is they did not tell you, but it happened and still happens. Bridge design and construction use safety factors, yet bridges still fall down; Italy and Mexico are recent examples: https://m.youtube.com/watch?v=hNLP5shZciU https://m.youtube.com/watch?v=YXmbkbr0L18

A few years ago I built realtime seismic impact monitoring and analysis technology, and the standard answer was along the lines of "we've got insurance if people die, so why bother".


In terms of the grandparent's question:

If FSD crashes, but much less often than human drivers, I'm all for it.

But how much less is enough? Very interesting question!

1000:1 sure!

10:1 strong maybe!

11:10 probably not, given the moral and legal can of worms, though it'd be nice to save ~10% of injuries...
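
To put rough numbers on that (a back-of-the-envelope sketch, reading each ratio as human crash rate : FSD crash rate):

    # Fraction of crashes avoided if everyone switched, assuming the
    # ratio is (human crash rate) : (FSD crash rate). Illustrative only.
    ratios = [(1000, 1), (10, 1), (11, 10)]
    for human, fsd in ratios:
        avoided = 1 - fsd / human  # e.g. 11:10 -> 1 - 10/11 ~= 9%
        print(f"{human}:{fsd} -> ~{avoided:.0%} of crashes avoided")

So the 11:10 case only avoids roughly 9% of crashes, which is where the ~10% figure above comes from.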



