by coderunner on 4/19/19, 6:57 PM with 6 comments
1. Are the object detectors trained in the simulation also applied to real-world data, or is only the decision-making part transferred to the real vehicle (e.g. "it's safe to turn left here") while detectors trained on real-world images of cars, people, etc. are used?
2. Tangentially, I thought that in general detectors trained on computer-generated images were not very applicable to real-world images, e.g. training on a bunch of images of 3D-modeled humans won't work well when testing on pictures of real humans. Is this not true?
by hacoo on 4/20/19, 2:51 AM
There are some situations where 3D simulation is useful, though. First, it allows you to run your AV software in its entirety (i.e., not spoofing perception), making for a very complete integration test. A 3D sim can capture complex, interesting occlusions that other sims cannot. Another fairly common use case is experimenting with new sensor setups before they're added to the car.
As for training, it's mostly research at this point. I think there's promise in using synthetic data to supplement real-world training data for perception systems.
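A rough sketch of what I mean by "supplement" (purely illustrative; the dataset paths and torchvision setup here are assumptions, not anyone's actual pipeline):

    # Mix synthetic and real examples into one training set.
    from torch.utils.data import ConcatDataset, DataLoader
    from torchvision import datasets, transforms

    tf = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

    real = datasets.ImageFolder("real/", transform=tf)            # real-world photos
    synthetic = datasets.ImageFolder("synthetic/", transform=tf)  # rendered frames

    # Train on the union; how much synthetic data to add, and how to
    # weight it against the real data, is where the open questions are.
    combined = ConcatDataset([real, synthetic])
    loader = DataLoader(combined, batch_size=32, shuffle=True)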
There are a number of companies trying to market simulation 'platforms' to AV makers. I think there's the potential for one of these products to gain traction -- but it's a difficult sell. AVs are enormously complicated; a 3rd-party product would need to both beat in-house sims and support a lot of very specific (and likely proprietary) AV features.
by Datenstrom on 4/19/19, 8:54 PM
by natch on 4/19/19, 9:43 PM
It doesn’t have to be computer-generated images. It can also be computer-altered images (think small n-degree rotations, blurring, cropping, etc.), which should work pretty well in part because real-world images are sometimes rotated, blurred, cropped, etc.
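A rough sketch of that kind of computer-altered (augmented) image, using Pillow (filename and parameters are arbitrary examples):

    from PIL import Image, ImageFilter

    img = Image.open("car.jpg")  # any real-world photo

    rotated = img.rotate(15, expand=True)              # small n-degree rotation
    blurred = img.filter(ImageFilter.GaussianBlur(2))  # mild blur
    w, h = img.size
    cropped = img.crop((w // 10, h // 10, 9 * w // 10, 9 * h // 10))  # trim the borders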