Positional relationship between sensors
Hi! Thanks for the very interesting data.
This dataset has multiple cameras and GPS, but how far apart (in meters) is each sensor?
I am having trouble finding this information.
Hey @hakuturu583, thanks for your interest in the dataset. Since the dataset is aimed at end-to-end learning, we'd assumed the relative positions of the sensors would not be required. Could you share how you plan to use this information? And would approximate relative positions be enough?
@sandhawalia
Thank you for your reply!
As you say, some end-to-end self-driving models do not require detailed sensor placement information, but I think there are cases where it would be useful, such as when training a BEV model. Above all, if this information is available, L2D dataset users can try out their own models in self-driving simulators such as AWSIM / CARLA.
The method I would like to develop using the L2D dataset learns while driving in a closed-loop environment, so I want to match the sensor placement in the simulator to that of the L2D dataset as closely as possible.
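For example, in CARLA the camera placement is just a mounting transform relative to the vehicle, so matching the dataset would mean plugging the real offsets in there. A minimal sketch with the CARLA Python API; the offsets below are hypothetical placeholders, not L2D values:

```python
import carla

# Connect to a locally running CARLA server.
client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.get_world()
blueprint_library = world.get_blueprint_library()

# Spawn any vehicle at the first predefined spawn point.
vehicle_bp = blueprint_library.filter("vehicle.*")[0]
spawn_point = world.get_map().get_spawn_points()[0]
vehicle = world.spawn_actor(vehicle_bp, spawn_point)

# RGB camera blueprint for the front camera.
camera_bp = blueprint_library.find("sensor.camera.rgb")

# HYPOTHETICAL mounting offset relative to the vehicle origin
# (CARLA convention: x forward, y right, z up; meters / degrees).
# This is where the actual L2D sensor placement would go.
mount = carla.Transform(
    carla.Location(x=1.5, y=0.0, z=1.4),
    carla.Rotation(pitch=0.0, yaw=0.0, roll=0.0),
)
camera = world.spawn_actor(camera_bp, mount, attach_to=vehicle)

# Save frames to disk as they arrive.
camera.listen(lambda image: image.save_to_disk("out/%06d.png" % image.frame))
```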
Thanks for the explanation @hakuturu583. I will get back to you shortly. Since the dataset is collected with 60 different vehicles across 30 German cities, precise (mm-level) sensor placement for each vehicle won't be available. We can possibly provide you with the approximate (factory) sensor placement. Would that work for re-creating the sensor setup in simulation?
Also, do you have any thoughts on training a differentiable simulator like GAIA (https://wayve.ai/science/gaia/) instead of using a procedural one?
Ideally, a rigorously calibrated vehicle would certainly be preferable, but I believe a huge dataset is valuable on its own, and a certain amount of error is something the model should absorb.
In some cases simulator settings are calculated from the catalog values of the factory installation, and I think that is fine!
> Also, do you have any thoughts on training a differentiable simulator like GAIA (https://wayve.ai/science/gaia/) instead of using a procedural one?
Sorry for my late reply.
I had missed the last sentence.
I am also considering it.
I think OpenDWM is a good open-source neural simulator:
https://github.com/SenseTime-FVG/OpenDWM
Developing algorithms for robots is my hobby, and I want to develop this algorithm as a hobby project. (I work at the Japanese autonomous-driving company TIER IV, Inc., but I am also a hobby roboticist.)
Since it is a hobby, I do not have access to many GPUs such as A100s, and I find it difficult to use simulators such as GAIA.
That said, I believe my algorithm works without these neural simulators.
Hey @hakuturu583, we've added the camera calibration file here. All measurements are w.r.t. an origin located at `cam_front_left`, which corresponds to `observation.images.front_left`. Let me know if you have any questions. Please note that since the dataset is collected over a fleet, the camera locations are expected to be approximate.
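As a rough sketch of how to consume it, here is one way to express each camera as an offset from `cam_front_left`. The filename and field names below are illustrative assumptions, not the actual schema, so check them against the published file:

```python
import json

import numpy as np

# HYPOTHETICAL filename and schema: a mapping from camera name to its
# translation (in meters) w.r.t. the origin at cam_front_left.
with open("camera_calibration.json") as f:
    calib = json.load(f)

def translation(entry: dict) -> np.ndarray:
    """Camera translation relative to cam_front_left, in meters."""
    return np.array([entry["x"], entry["y"], entry["z"]], dtype=float)

# By definition, cam_front_left itself should sit at (or very near) the origin.
for name, entry in calib.items():
    print(f"{name}: offset from cam_front_left = {translation(entry)} m")
```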
> (I work at the Japanese autonomous-driving company TIER IV, Inc., but I am also a hobby roboticist.)
Amazing! I didn't know about TIER IV. Something that might interest you and/or your colleagues at TIER IV is our natural-language search tool, Nutron, for searching episodes by driving task/instruction within large-scale multimodal datasets. You can try it out yourself, e.g. "it's daytime, at the traffic lights make a multilane left turn".
Feel free to DM me for more details.