Sensors (Basel). 2023 Feb 27;23(5):2637. doi: 10.3390/s23052637.

Road User Position and Speed Estimation via Deep Learning from Calibrated Fisheye Videos

Yves Berviller et al. Sensors (Basel).

Abstract

In this paper, we present a deep learning processing flow aimed at Advanced Driving Assistance Systems (ADASs) for urban road users. Based on a fine analysis of the optical setup of a fisheye camera, we present a detailed procedure to obtain the Global Navigation Satellite System (GNSS) coordinates and speeds of moving objects. The camera-to-world transform incorporates the lens distortion function. YOLOv4, retrained on ortho-photographic fisheye images, provides road user detection. All the information our system extracts from the image represents a small payload and can easily be broadcast to the road users. The results show that our system properly classifies and localizes the detected objects in real time, even in low-light conditions. For an effective observation area of 20 m × 50 m, the localization error is on the order of one meter. Although the velocities of the detected objects are estimated by offline processing with the FlowNet2 algorithm, the accuracy is good, with an error below one meter per second over the urban speed range (0 to 15 m/s). Moreover, the almost ortho-photographic configuration of the imaging system guarantees the anonymity of all street users.
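The camera-to-world mapping the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes an equidistant fisheye projection (r = f·θ) in place of the actual Samyang CS II radial mapping, a camera looking straight down from height H, and made-up values for the focal length, pixel pitch, and principal point. The function names and parameters are hypothetical.

```python
import math

# Assumed parameters (not from the paper, except H = 7 m as in Figure 6).
H = 7.0                 # camera height above the ground, metres
F = 1.8e-3              # focal length, metres (assumed)
PIXEL = 3.45e-6         # pixel pitch, metres (assumed)
CX, CY = 960.0, 960.0   # principal point, pixels (assumed)

def pixel_to_ground(u, v):
    """Map an image pixel to metric ground-plane coordinates.

    Assumes an equidistant fisheye model (r = f * theta) and a
    downward-looking camera: a ray at zenith angle theta reaches the
    ground at radius H * tan(theta) from the point below the camera.
    """
    dx, dy = (u - CX) * PIXEL, (v - CY) * PIXEL
    r = math.hypot(dx, dy)       # radial distance on the sensor
    theta = r / F                # equidistant model: theta = r / f
    rho = H * math.tan(theta)    # ground radius from the nadir point
    if r == 0:
        return (0.0, 0.0)
    return (rho * dx / r, rho * dy / r)

def speed_from_track(p0, p1, dt):
    """Ground speed (m/s) from two ground positions dt seconds apart."""
    return math.hypot(p1[0] - p0[0], p1[1] - p0[1]) / dt
```

In the paper, the per-object displacement between frames comes from the average FlowNet2 optical flow inside each YOLOv4 bounding box; here any two ground positions and a frame interval stand in for that step.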

Keywords: ADAS; I2V; camera to world transform; deep learning; real time.


Conflict of interest statement

The authors declare no conflict of interest.

Figures

Figure 1. I2V communication between the supervisor system and vehicles.
Figure 2. Configuration and shooting coordinate systems.
Figure 3. Compared field of view of two lenses.
Figure 4. Radial mapping of the Samyang CS II lens.
Figure 5. Image recording system.
Figure 6. Fisheye view of the area with H = 7 m.
Figure 7. Aerial photograph of the same area.
Figure 8. Calculation procedure of the average optical flow in a bounding box.
Figure 9. Optical flow determination.
Figure 10. Actual speed estimation by optical flow determination.
Figure 11. Estimation of speeds on a private site under controlled conditions.
Figure 12. Estimation of the speed of anonymous users.
Figure 13. Metric coordinates in the WENS coordinate system.
Figure 14. Displacement on a spheroid.
Figure 15. Detection of pedestrians and vehicles in night conditions.
Figure 16. YOLOv7 trained on MS COCO only, applied to the same image.
Figure 17. Positioning of users on a cadastral map (yellow: camera position; blue: cars; red: pedestrians).
Figure 18. Effective horizontal area in the dewarped image.
