
you don't have to do object recognition on the point cloud to know what you shouldn't be bumping into - since the data comes back "already 3D"

This is true of any point-cloud system, though. No system returns density, so you still have to make assumptions about what is worth braking for, and at what resolution. If a leaf falls in front of your car, you don't want to immediately slam on the brakes.
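The "don't brake for a leaf" decision can be made without any object recognition, just by thresholding how many returns land close together. Here's a toy sketch of that idea using a voxel grid; the voxel size and point-count thresholds are illustrative assumptions, not values from any real stack:

```python
import numpy as np

def find_obstacles(points, voxel=0.2, min_pts=15):
    """Flag voxels dense enough to treat as solid obstacles.

    points: (N, 3) array of x, y, z returns from one sweep.
    voxel and min_pts are made-up thresholds for illustration.
    A falling leaf produces only a few sparse returns, so its
    voxels never reach min_pts and are ignored.
    """
    idx = np.floor(points / voxel).astype(np.int64)
    keys, counts = np.unique(idx, axis=0, return_counts=True)
    # Return the world-space corners of the dense voxels.
    return keys[counts >= min_pts] * voxel

# A dense wall-like cluster vs. a few sparse "leaf" returns:
rng = np.random.default_rng(0)
wall = rng.random((200, 3)) * 0.15 + np.array([5.0, 0.0, 1.0])
leaf = rng.random((4, 3)) * 0.05 + np.array([2.0, 0.0, 1.0])
obstacles = find_obstacles(np.vstack([wall, leaf]))
# The wall region is flagged; the four leaf points are not.
```

Real pipelines do something much more sophisticated (ground removal, clustering, tracking over time), but the point stands: the geometry alone tells you what's solid, no classifier required.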

I don't know of any vision systems that do object recognition immediately; in fact, I'm not even sure that's possible, or desirable. I suppose if you baked segmentation into the firmware it would serve as a fast classifier, but AFAIK nobody is even talking about doing that, since you'd need a very well-trained vision library and classifier to deploy to an FPGA.

I make the distinction because we build fairly dense point clouds with monocular RGB cameras and VSLAM; LIDAR isn't required anymore. LIDAR is still faster, with lower error tolerances, but nowhere near as cheap or portable. I suspect monocular SLAM will overtake LIDAR in the next couple of years, as it's getting faster and more accurate every day.
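For anyone unfamiliar with how a single camera produces 3D points: once VSLAM has estimated the relative pose between two frames, each tracked 2D feature can be triangulated into a point-cloud point. Below is a toy sketch of that triangulation step (linear DLT) with a made-up camera setup; it's a simplification, not any particular SLAM system's code:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover a 3D point from its
    pixel coordinates x1, x2 in two frames with 3x4 projection
    matrices P1, P2. This is the step that turns tracked 2D
    features into point-cloud points once camera poses are known."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                  # null vector of A, homogeneous
    return X[:3] / X[3]

# Toy setup: identity intrinsics, second camera shifted 1 unit in x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0, 1.0])   # point 4 units ahead
x1 = P1 @ X_true; x1 = x1[:2] / x1[2]      # projections into each frame
x2 = P2 @ X_true; x2 = x2[:2] / x2[2]
X_est = triangulate(P1, P2, x1, x2)        # recovers [0.5, 0.2, 4.0]
```

The hard parts of VSLAM are everything around this (feature tracking, pose estimation, loop closure, scale), but the depth itself comes from exactly this kind of two-view geometry.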



I agree VSLAM is good and getting better every day, but I don't necessarily agree it'll displace LiDAR for autonomous vehicles. The first generations will probably have both. Sure, LiDAR is more expensive, but we're talking about avoiding collisions here -- I believe the extra safety is worth it. I don't think portability is a big issue for cars (though it is a bigger concern for smaller aerial vehicles). LiDAR still works in the dark, and potentially with much more range than cameras. Having worked with vehicles using both technologies, I'd feel far safer in the LiDAR-equipped one.


100% agree, for now. I was totally overlooking your point about the dark, so I think that's the big difference that makes LiDAR better long term, until something better comes along.


You make really good points all around. The part we're not so sure about is whether there's still a place for LIDAR in autonomous driving.

As you mentioned, we'll have to see where the economics land, since cameras are much cheaper but less accurate.

I'm curious whether your camera-based solution is for outdoor or automotive use. If it's meant for outdoor use, how does it perform with oncoming headlights, a low-horizon sun, or in snow?


Ours is all-environment, but for mobile AR, not autonomous driving. So our draw distances don't need to be as long, and we get more persistence and more frames per meter than a driving system would have at 60 km/h. That said, if we weren't restricted to mobile hardware, we could probably come close to matching LIDAR's speed and resolution.

The snow thing is interesting, actually, but in a different way than you might think. Because we do loop closure, matching scenes that are temporally separated (e.g. no snow vs. snowy, or occluded by leaves, etc.) becomes very hard.


I'm not sure I understand you correctly, but Mobileye, probably the most prominent company in the field, does do object recognition, and they design their own ASICs.

https://www.youtube.com/watch?v=jKfwHsHUdVc


Right, but you don't have to in order to get 3D depth data from a point cloud.


If LIDAR becomes cheaper, though, then what wins? It should take less processing and mean lower latency, I would think.



