LiDAR-as-Camera for End-to-End Driving
- PMID: 36905051
- PMCID: PMC10007091
- DOI: 10.3390/s23052845
Abstract
The core task of any autonomous driving system is to transform sensory inputs into driving commands. In end-to-end driving, this is achieved via a neural network, with one or more cameras as the most commonly used input and low-level driving commands, e.g., the steering angle, as output. However, simulation studies have shown that depth sensing can make the end-to-end driving task easier. On a real car, combining depth and visual information is challenging because it is difficult to obtain good spatial and temporal alignment between the sensors. To alleviate these alignment problems, Ouster LiDARs can output surround-view LiDAR images with depth, intensity, and ambient radiation channels. These measurements originate from the same sensor and are therefore perfectly aligned in time and space. The main goal of our study is to investigate how useful such images are as inputs to a self-driving neural network. We demonstrate that such LiDAR images are sufficient for the real-car road-following task. Models using these images as input perform at least as well as camera-based models in the tested conditions. Moreover, LiDAR images are less sensitive to weather conditions and lead to better generalization. In a secondary research direction, we show that the temporal smoothness of off-policy prediction sequences correlates with actual on-policy driving ability as well as the commonly used mean absolute error does.
Keywords: LiDAR in autonomous driving; autonomous driving; end-to-end driving; evaluation; generalization.
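
The pipeline sketched in the abstract can be made concrete. Below is a minimal, hypothetical sketch of how a surround-view LiDAR image could be assembled from the three per-pixel channels named above (depth/range, signal intensity, and ambient radiation) and fed to an image-to-steering network. The sensor resolution, normalization constants, and the toy convolutional model are illustrative assumptions, not the architecture used in the paper.

```python
# Minimal sketch (assumptions: 128x1024 sensor resolution, ad hoc
# normalization, toy network). The key point: all three channels come
# from the same sensor sweep, so no extrinsic calibration or time
# synchronization across sensors is needed.
import numpy as np
import torch
import torch.nn as nn

H, W = 128, 1024  # e.g., a 128-beam LiDAR with 1024 columns per revolution

def lidar_image(rng: np.ndarray, sig: np.ndarray, amb: np.ndarray) -> torch.Tensor:
    """Stack range, signal (intensity), and ambient channels into a 3xHxW tensor."""
    rng_n = np.clip(rng / 50.0, 0.0, 1.0)  # assume ~50 m of useful range
    sig_n = np.tanh(sig / 1000.0)          # squash heavy-tailed intensity counts
    amb_n = np.tanh(amb / 1000.0)          # same for ambient near-infrared
    return torch.from_numpy(np.stack([rng_n, sig_n, amb_n]).astype(np.float32))

# Stand-in driving network: any image-to-steering regressor fits here.
model = nn.Sequential(
    nn.Conv2d(3, 16, 5, stride=4), nn.ReLU(),
    nn.Conv2d(16, 32, 5, stride=4), nn.ReLU(),
    nn.Flatten(), nn.LazyLinear(1),  # predicts a single steering angle
)

frame = lidar_image(np.random.rand(H, W) * 50,    # placeholder range (m)
                    np.random.rand(H, W) * 2000,  # placeholder signal counts
                    np.random.rand(H, W) * 2000)  # placeholder ambient counts
steering = model(frame.unsqueeze(0))              # batch of one frame
```

The secondary result also admits a short illustration: the two off-policy metrics contrasted above are the conventional mean absolute error against recorded human commands and the temporal smoothness of the prediction sequence. In the sketch below, smoothness is measured as the mean absolute change between consecutive predictions; the paper's exact formulation may differ.

```python
# Off-policy evaluation sketch: MAE vs. a simple temporal-smoothness score.
import numpy as np

def mae(pred: np.ndarray, target: np.ndarray) -> float:
    """Mean absolute error against the recorded human commands."""
    return float(np.mean(np.abs(pred - target)))

def temporal_roughness(pred: np.ndarray) -> float:
    """Lower is smoother: mean |step-to-step change| of the predictions."""
    return float(np.mean(np.abs(np.diff(pred))))

preds = np.array([0.10, 0.12, 0.35, 0.11])  # toy steering predictions (rad)
human = np.array([0.11, 0.12, 0.13, 0.12])  # recorded human commands (rad)
print(mae(preds, human), temporal_roughness(preds))
```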
Conflict of interest statement
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.