Hacker News

I guess, to re-hash the same conversations people have been having for years -- if the expectation is that drivers must be alert and prepared to intervene, that places them in the gray area of not knowing whether or when to trust the system. Even if the self-driving system is relatively good, is the human+self-driving system really better?

In some sense, the self-driving system gets to be trained with experiences of failure, probably under the expectation that no intervention will occur (i.e. the "policy" cannot include human intervention). But the human driver doesn't get to learn from experience when to not trust the self-driving system (because getting it wrong can be disastrous).

Perhaps a missing piece that human drivers _should_ participate in, but almost certainly won't, is training in a simulator. There's a methodological challenge here -- ideally the simulator training should focus on scenarios where the self-driving system is weakest (i.e. where it's plausible that the driver will need to intervene), but those are likely also the cases where we're worst at simulating the behavior of other cars/pedestrians/cyclists/loose objects.
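To make the idea concrete, here's a toy sketch of how such a simulator might bias its scenario selection toward the system's weak spots. Everything here is hypothetical: the scenario names and their failure rates are invented placeholders, not real disengagement data.

```python
import random

# Hypothetical catalogue: each simulated situation paired with an
# assumed rate at which the self-driving stack fails there.
scenarios = {
    "faded_lane_markings": 0.12,
    "pedestrian_jaywalking": 0.08,
    "clear_highway_cruise": 0.001,
    "construction_zone": 0.20,
}

def sample_training_scenario(catalogue, rng=random):
    """Sample scenarios proportionally to assumed failure rate, so
    driver practice concentrates where intervention is most likely."""
    names = list(catalogue)
    weights = [catalogue[n] for n in names]
    return rng.choices(names, weights=weights, k=1)[0]

# Over many draws, "construction_zone" dominates and
# "clear_highway_cruise" almost never comes up.
```

The catch noted above still applies: the scenarios this sampler favors are exactly the ones whose other-agent behavior is hardest to simulate faithfully.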



> is the human+self-driving system really better?

Definitely. My last couple of vehicles have had Lane Keep Assist and Adaptive Cruise Control. I believe those two together equate to roughly "Level 2" of autonomous driving. The precise technical definition doesn't matter, though; the important point is that those two features alone make driving long distances so much less mentally taxing.

The Volvo system in particular is great... keeping your vehicle firmly in the middle of the lane like it's on rails. Frankly, it's better at that task than I am and only requires slight and occasional steering wheel input to stay engaged -- just enough to know you're paying attention.

Before these systems, I'd be mentally exhausted after driving 2.5 hours to my parents' house, especially if there was stop-and-go traffic and it took even longer. Since these systems, I arrive feeling mentally refreshed instead of drained. There's no way I'd ever again buy a general-purpose vehicle without them, and I look forward to any other ways that automated systems can augment my ability to drive more safely, more precisely, and with less effort.


ACC and Lane Keep were incredibly awesome when I was driving at night through northern Ontario. I was able to focus 80% of my attention on scanning the ditches for moose and other large critters.


I crossed into BC from Idaho at night a few years back and shortly after noticed a LARGE animal off the side of the road. Pretty sure it was a moose, but it may have been an elk. Either way, I spent the next few hours paying extreme attention to the sides of the road. It made the drive much more stressful and having a little backup would have made a huge difference.


I relax more on longer drives with ACC.

I tend to drive faster, so slow drivers in the left lane drive me a bit batty.

With ACC on, that tendency is blunted heavily, and I just roll along, making occasional adjustments. But my temperament is much improved.


Agreed. I did a 1,500 mile (2,400 km) round-trip drive from Los Angeles to Utah a couple weeks ago using these same features in my Tesla Model 3 and it's a very different experience from previous trips in other cars.


LKA and ACC are each level 1 features on their own: LKA provides lateral control only, ACC longitudinal only. Under SAE J3016, sustained control of both axes at once (for example, LKA and ACC engaged together, or a lane change assist that speeds up and steers into the other lane) is level 2.
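The level assignment can be sketched as a toy rule. This is a simplification of SAE J3016 (which also covers levels 3-5 and many conditions this ignores); `sae_level` and its arguments are illustrative, not any real API.

```python
def sae_level(lateral_control, longitudinal_control):
    """Simplified SAE J3016 mapping for driver-assist features:
    sustained control of both axes is level 2, one axis is level 1."""
    if lateral_control and longitudinal_control:
        return 2  # driver still supervises at all times
    if lateral_control or longitudinal_control:
        return 1  # e.g. LKA alone, or ACC alone
    return 0

print(sae_level(lateral_control=True, longitudinal_control=False))  # 1 (LKA alone)
print(sae_level(True, True))  # 2 (LKA + ACC together)
```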


Great points.

If we had enough self-driving crash data (like a black box) identifying crashes where the autonomous system was at fault and human intervention could have prevented them, that data could feed the simulator. The player would then be experiencing simulations of actual crashes.
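The filtering step might look something like this. The record format is entirely hypothetical (no real black-box schema exists for this), but it shows the two conditions a crash log would need to satisfy before being turned into a simulator scenario.

```python
# Hypothetical black-box records: each crash log notes who was at fault
# and whether a timely human takeover could plausibly have helped.
crash_logs = [
    {"id": 1, "fault": "autonomy", "human_preventable": True},
    {"id": 2, "fault": "other_driver", "human_preventable": False},
    {"id": 3, "fault": "autonomy", "human_preventable": False},
    {"id": 4, "fault": "autonomy", "human_preventable": True},
]

def simulator_candidates(logs):
    """Keep only crashes caused by the autonomous system where human
    intervention could have prevented the outcome -- the cases worth
    replaying to drivers in a simulator."""
    return [log["id"] for log in logs
            if log["fault"] == "autonomy" and log["human_preventable"]]

print(simulator_candidates(crash_logs))  # [1, 4]
```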


On the other hand, every crash we could simulate would be one that the AI should be able to learn to avoid, so it's not very clear if there are scenarios where it makes sense to 'train the driver' instead of the AI directly.


I imagine a good chunk of crash scenarios are along the lines of "easy for a human to handle, difficult for an AI to learn", such as broken or misleading road aids (traffic lights), like the example the OP gave. Once the AI's limitations are more clearly understood, I imagine these would be documented in a user manual and folded into licensing, much like a standard driver's license.


>Even if the self-driving system is relatively good, is the human+self-driving system really better?

Say for example, the designers never trained the car's system to recognize someone on a skateboard as an obstacle.

Obviously a lidar-based system would spot the skater, but a camera+AI system depends on being trained to identify each type of obstacle as an obstacle and avoid it.
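The contrast being drawn here can be sketched as two toy braking checks. Both functions, the point threshold, and the class set are invented for illustration; real perception stacks are vastly more involved.

```python
def lidar_brakes(points_in_path):
    """Geometric check: any sizeable cluster of returns inside the
    planned path counts as an obstacle, regardless of what it is."""
    return len(points_in_path) > 20  # threshold is an arbitrary placeholder

def camera_ai_brakes(detected_classes, known_obstacle_classes):
    """Classifier-gated check: only classes the network was trained
    to flag count as obstacles."""
    return any(c in known_obstacle_classes for c in detected_classes)

trained = {"pedestrian", "cyclist", "vehicle"}

# A skateboarder the classifier was never trained on may register as
# nothing at all, while the geometric check still brakes.
print(lidar_brakes(points_in_path=[0] * 150))       # True
print(camera_ai_brakes({"skateboarder"}, trained))  # False
```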


A person on a skateboard is still a person, would any AI trained to spot people not recognise that (at the same level as it spots people in general)?

Do you really need to train your AI to spot people on skateboards vs rollerskates vs rollerblades vs heelies vs sliding on ice vs traveling on a travelator vs standing on a moving vehicle vs ...?

OK, being able to recognise the differences and act accordingly might be useful but the principle of "person getting closer to vehicle" should hold sway for most situations?!?


> "person getting closer to vehicle" should hold sway for most situations?!?

I think the big thing is that not all of these 'persons' move in the same way. A person in the exact same location could both be a 'hazard' and 'not a hazard' based solely on their velocity, and their ability to change direction quickly (and are they chasing after something which isn't a hazard, but which is intercepting your driving path). They don't just have to identify the person, they have to be capable of forecasting that person's movement.

IOW, someone walking and someone on a skateboard are different risks, because the skateboarder could move into your path of travel before you pass them, whereas someone walking would not.
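That "could they reach my path in time" test can be written down as a one-line reachability check, under a crude constant-velocity assumption. The function and its numbers are illustrative only.

```python
def could_intersect_path(lateral_offset_m, lateral_speed_mps, time_to_pass_s):
    """Hazard if the agent can cover its lateral offset to the car's
    path before the car passes it (constant-velocity assumption)."""
    return lateral_speed_mps * time_to_pass_s >= lateral_offset_m

# Same position, 2 m from the travel path, car passes in 1.5 s:
print(could_intersect_path(2.0, 1.0, 1.5))  # False: walker at ~1 m/s
print(could_intersect_path(2.0, 4.0, 1.5))  # True: skateboarder at ~4 m/s
```

This is why class matters even with an identical position: the class proxies for plausible speed and agility, which is what the forecast actually consumes.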


Yes, but the skateboard itself is irrelevant; it's the predicted trajectory that matters. I can see how it's useful to analyse the mode of movement to try to improve predictions, but the human moving towards you at X speed should be the principal analysis, it seems to me, naively; rather than worrying about whether they're on a skateboard or a snakeboard, etc.


The skateboard is relevant because it could theoretically make a neural network fail to identify a person that needs to be avoided.

>human moving towards you at X speed

Lidar systems will work this way. Camera+AI systems won't necessarily; they still need a way to sense relative speed, otherwise you're banking on an AI to identify an obstacle in an image.


Uber's system famously couldn't decide if someone pushing a bicycle was a pedestrian or a cyclist. It flip-flopped for a while before resolving the dichotomy by running her over.


I'm not familiar with the instance you're referring to, but at face value I don't understand the problem. It doesn't matter if it's a pedestrian or a cyclist—in both cases, the car should avoid crashing into them.


It shouldn't matter. But the system was designed to track trajectories of recognised objects. So every time it changed its mind about what it was looking at it started recalculating the trajectory. By the time it settled down it was too late to avoid the fatal accident.

Don't assume that self-driving software is designed with even a minimal level of competence.

Footnote: there have been many articles about this incident posted to HN. Here's one: https://news.ycombinator.com/item?id=19333239
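The failure mode described here, as reported in press accounts, can be caricatured with a toy tracker that throws away its motion history whenever the class label flips. This is a sketch of the described behavior, not the actual Uber code.

```python
class NaiveTracker:
    """Toy tracker that discards motion history whenever the object's
    class label changes, restarting trajectory estimation from scratch."""

    def __init__(self):
        self.cls = None
        self.history = []

    def update(self, cls, position):
        if cls != self.cls:  # reclassified: start over
            self.cls = cls
            self.history = []
        self.history.append(position)

    def has_trajectory(self):
        # Need at least two points to estimate velocity at all.
        return len(self.history) >= 2

tracker = NaiveTracker()
frames = [("bicycle", 10.0), ("pedestrian", 9.0),
          ("bicycle", 8.0), ("pedestrian", 7.0)]
for cls, pos in frames:
    tracker.update(cls, pos)

# Four frames of observations, yet still no usable trajectory:
print(tracker.has_trajectory())  # False
```

With a stable classification the same four frames would have yielded a velocity estimate after the second one; flip-flopping labels means the tracker never accumulates enough history to predict where the object is going.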


The AI failed to classify what it saw as an obstacle to be avoided because it hadn't been trained for that particular instance.


More precisely, it wasn't programmed to recognize confusing objects as hazards.


> Even if the self-driving system is relatively good, is the human+self-driving system really better?

Probably yes, but they are not relatively good yet, and certainly haven't been in the past. Most of the 3D video-processing algorithms good enough to be used for self-driving were created this year, so until they are productionized, we don't have any data on it. Still, we're probably just 1 or 2 papers down the line (to quote the Two Minute Papers YouTube channel :) ).



