Sensors (Basel). 2023 Jun 20;23(12):5767. doi: 10.3390/s23125767.

High-Dynamic-Range Tone Mapping in Intelligent Automotive Systems

Ivana Shopovska et al. Sensors (Basel).

Abstract

Intelligent driver assistance systems are becoming increasingly popular in modern passenger vehicles. A crucial component of intelligent vehicles is the ability to detect vulnerable road users (VRUs) for an early and safe response. However, standard imaging sensors perform poorly in conditions of strong illumination contrast, such as approaching a tunnel or at night, due to their dynamic range limitations. In this paper, we focus on the use of high-dynamic-range (HDR) imaging sensors in vehicle perception systems and the subsequent need for tone mapping of the acquired data into a standard 8-bit representation. To our knowledge, no previous studies have evaluated the impact of tone mapping on object detection performance. We investigate the potential for optimizing HDR tone mapping to achieve a natural image appearance while facilitating object detection of state-of-the-art detectors designed for standard dynamic range (SDR) images. Our proposed approach relies on a lightweight convolutional neural network (CNN) that tone maps HDR video frames into a standard 8-bit representation. We introduce a novel training approach called detection-informed tone mapping (DI-TM) and evaluate its performance with respect to its effectiveness and robustness in various scene conditions, as well as its performance relative to an existing state-of-the-art tone mapping method. The results show that the proposed DI-TM method achieves the best results in terms of detection performance metrics in challenging dynamic range conditions, while both methods perform well in typical, non-challenging conditions. In challenging conditions, our method improves the detection F2 score by 13%. Compared to SDR images, the increase in F2 score is 49%.
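The reported improvements are measured with the F2 score, a recall-weighted variant of the F-measure that suits VRU detection, where a missed pedestrian is costlier than a false alarm. A minimal sketch of the standard F-beta computation from detection counts (function name and counts are illustrative, not from the paper):

```python
def f_beta(tp, fp, fn, beta=2.0):
    """F-beta score from true positives, false positives, and false negatives.

    beta=2 (the F2 score) weights recall twice as heavily as precision.
    """
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Example: 80 true positives, 20 false positives, 20 missed objects.
f2 = f_beta(80, 20, 20)  # ≈ 0.80
```

With beta=1 the same formula reduces to the familiar F1 score, so the helper also serves as a check against standard precision/recall reporting.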

Keywords: autonomous driving; deep learning; high dynamic range; object detection; tone mapping.

Conflict of interest statement

The authors declare no conflict of interest.

Figures

Figure 1
The proposed tone mapping architecture was inspired by ExpandNet [21] and simplified by discarding a branch of layers called the “dilation branch”, which operates on the full resolution with a wide receptive field and is therefore computationally expensive. The network consists of convolutional layers followed by ReLU activations. The global branch spatially down-samples the feature maps in each subsequent layer through strided convolutions, while the local branch operates at the original resolution. The fusion layers combine the local and global features into an output tone-mapped image.
Figure 2
Flowchart of the process of creating synthetic HDR training data from SDR images. The blue blocks represent the data pre-processing steps that make up the proposed detection-informed training procedure, which focuses on creating realistic and challenging training conditions. The training inputs are an HDR image and a crop from the same image centered at a known object location, coupled with a ground-truth (target) SDR image.
Figure 3
Example of simulating different amounts of Poisson noise to augment the training set and create a model robust to noise.
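Shot noise in real imaging sensors follows a Poisson distribution, so an augmentation of this kind can be sketched as below. This is a generic illustration, not the authors' exact pipeline; the `peak` parameter (the photon count of the brightest pixel, which controls the noise strength) and the fixed seed are assumptions:

```python
import numpy as np

def add_poisson_noise(img, peak, seed=0):
    """Simulate sensor shot noise on an image with values in [0, 1].

    The image is scaled to photon counts (up to `peak`), sampled from a
    Poisson distribution, and rescaled; smaller `peak` -> stronger noise.
    """
    rng = np.random.default_rng(seed)
    noisy = rng.poisson(img * peak).astype(np.float64) / peak
    return np.clip(noisy, 0.0, 1.0)

# Augment one frame at several noise levels, as in the figure.
frame = np.full((64, 64), 0.5)
variants = [add_poisson_noise(frame, peak) for peak in (10, 100, 1000)]
```

Sampling several `peak` values per training image exposes the network to a range of signal-to-noise ratios, which is what makes the resulting model robust to noise.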
Figure 4
An illustration of the proposed training approach using crops at object-centered locations to focus on reconstruction of details. The size of the convolution kernels is indicated by the numbers at the corresponding feature maps of each layer.
Figure 5
Example of the performance of the reference state-of-the-art tone mapping method of Farbman et al. [9] and the proposed DI-TM in variable scene dynamic range conditions. The green bounding boxes represent correct detection outputs (true positives) for “person”, “car”, and “traffic light” combined. The proposed DI-TM method is more robust in variable and extreme contrast conditions.
Figure 6
An example of a challenging scene in our dataset of SDR and true HDR images. In the SDR representation, much of the contrast at object edges is lost, and objects in the darkness are invisible to the detector. HDR images preserve fine intensity differences, and our tone mapping method enhances the details so that they become visible to the detector as well as to a human driver.
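To make the HDR-to-8-bit step concrete, a classical global operator such as Reinhard's photographic tone mapping can be sketched as follows. This is an illustrative baseline only; the paper's DI-TM replaces such hand-crafted operators with a learned CNN. The key value `a = 0.18` is the conventional default from the Reinhard operator:

```python
import numpy as np

def reinhard_tonemap(hdr, a=0.18, eps=1e-6):
    """Global Reinhard operator: compress HDR radiance into an 8-bit image.

    hdr: float array of shape (H, W, 3) with positive linear RGB values.
    """
    # Luminance (Rec. 709 weights) and its log-average over the frame.
    lum = 0.2126 * hdr[..., 0] + 0.7152 * hdr[..., 1] + 0.0722 * hdr[..., 2]
    log_avg = np.exp(np.mean(np.log(lum + eps)))
    # Scale to the key value, then apply the compressive curve L / (1 + L).
    scaled = a * lum / log_avg
    mapped = scaled / (1.0 + scaled)
    # Apply the per-pixel luminance ratio to all channels and quantize.
    ratio = (mapped / (lum + eps))[..., None]
    return (np.clip(hdr * ratio, 0.0, 1.0) * 255.0).astype(np.uint8)
```

Because the curve saturates for large luminance values, bright regions (e.g., a tunnel exit) are compressed while dark regions keep relative detail, which is the behavior a learned tone mapper must match or exceed.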
Figure 7
Qualitative evaluation of the robustness of Farbman et al. [9] vs. the proposed DI-TM1 model in extremely challenging high-contrast night-time scenes.

References

    1. Macek K. Pedestrian Traffic Fatalities by State: 2021 Preliminary Data. Governors Highway Safety Association (GHSA); Washington, DC, USA: 2022. Technical Report.
    2. NHTSA’s National Center for Statistics and Analysis. Pedestrians: 2017 Data. Traffic Safety Facts Report No. DOT HS 812 681. U.S. Department of Transportation; Washington, DC, USA: 2019.
    3. Teoh E.R., Kidd D.G. Rage against the machine? Google’s self-driving cars versus human drivers. J. Saf. Res. 2017;63:57–60. doi: 10.1016/j.jsr.2017.08.008.
    4. Kalra N., Paddock S.M. Driving to safety: How many miles of driving would it take to demonstrate autonomous vehicle reliability? Transp. Res. Part A Policy Pract. 2016;94:182–193. doi: 10.1016/j.tra.2016.09.010.
    5. Di X., Shi R. A survey on autonomous vehicle control in the era of mixed-autonomy: From physics-based to AI-guided driving policy learning. Transp. Res. Part C Emerg. Technol. 2021;125:103008. doi: 10.1016/j.trc.2021.103008.