Sensors (Basel). 2023 Oct 14;23(20):8471. doi: 10.3390/s23208471.

Object Detection in Adverse Weather for Autonomous Driving through Data Merging and YOLOv8


Debasis Kumar et al. Sensors (Basel).

Abstract

For autonomous driving, perception is a primary and essential element: it builds insight into the ego vehicle's environment through sensors. Perception is challenging because it must cope with dynamic objects and continuous environmental changes, and it degrades further under adverse weather such as snow, rain, fog, night-time light, sandstorms, and strong daylight. In this work, we aimed to improve the accuracy of camera-based perception, specifically object detection for autonomous driving, in adverse weather. We proposed improving YOLOv8-based object detection in adverse weather through transfer learning on data merged from several harsh-weather datasets. Two established open-source datasets (ACDC and DAWN) and their merged version were used to detect the primary objects on the road in harsh weather. A set of training weights was obtained by training on the individual datasets, their merged versions, and several subsets of those datasets grouped by their characteristics. The training weights were then compared by evaluating their detection performance on the datasets mentioned above and their subsets. The evaluation revealed that training on the custom datasets significantly improved detection performance compared with the YOLOv8 base weights. Furthermore, adding more images through the feature-related data merging technique steadily increased object detection performance.

Keywords: YOLOv8; autonomous driving; data merging; deep neural networks; harsh weather; object detection.


Conflict of interest statement

The authors declare no conflict of interest.

Figures

Figure A1
Performance outcomes of YOLOv8’s base weight (‘yolov8x.pt’, left column) and the weight trained on the MERGED dataset (V5, right column) on the test images (ACDC, DAWN). (a) Detection by ‘yolov8x.pt’. (b) Detection by ‘MERGEDv5.pt’. Different colors are used for different types of objects (0—person (red), 2—car (orange), 5—bus (light green), 7—truck (green)).
Figure A2
Performance outcomes of YOLOv8’s base weight (‘yolov8x.pt’, left column) and the weight trained on the MERGED dataset (V5, right column) on the test images (ACDC, DAWN). Row-wise comparison (on pairs of images) between the detection outcomes of the two weights. (a) Detection by ‘yolov8x.pt’. (b) Detection by ‘MERGEDv5.pt’. (c) Detection by ‘yolov8x.pt’. (d) Detection by ‘MERGEDv5.pt’. (e) Detection by ‘yolov8x.pt’. (f) Detection by ‘MERGEDv5.pt’.
Figure A3
Performance outcomes of YOLOv8’s base weights and the weight trained on the MERGED dataset (V5) on the test images (ACDC, DAWN). (a) Detection by ‘yolov8n.pt’. (b) Detection by ‘yolov8s.pt’. (c) Detection by ‘yolov8m.pt’. (d) Detection by ‘yolov8l.pt’. (e) Detection by ‘yolov8x.pt’. (f) Detection by ‘MERGEDv5.pt’.
Figure A4
Performance outcomes of YOLOv8’s base weights and the weight trained on the MERGED dataset (V5) on the test images (ACDC, DAWN). Different colors are used for different types of objects (2—car (orange), 7—truck (green)). (a) Detection by ‘yolov8n.pt’. (b) Detection by ‘yolov8s.pt’. (c) Detection by ‘yolov8m.pt’. (d) Detection by ‘yolov8l.pt’. (e) Detection by ‘yolov8x.pt’. (f) Detection by ‘MERGEDv5.pt’.
Figure A5
Performance outcomes of YOLOv8’s base weights and weight trained on MERGED dataset (V5) on the test images (ACDC, DAWN). (a) Detection by ‘yolov8n.pt’. (b) Detection by ‘yolov8s.pt’. (c) Detection by ‘yolov8m.pt’. (d) Detection by ‘yolov8l.pt’. (e) Detection by ‘yolov8x.pt’. (f) Detection by ‘MERGEDv5.pt’.
Figure A6
Some missed detections (test images from the ACDC and DAWN datasets) by our trained weight (‘MERGEDv5.pt’), except (a), which was detected by ‘yolov8x.pt’. Different colors are used for different types of objects (0—person (red), 2—car (orange), 6—train (olive), 7—truck (green), 9—traffic light (ocean blue)).
Figure 1
Example images from the ACDC and DAWN datasets. (a) Fog, night, rain, and snow images from the ACDC dataset (columnwise, respectively). (b) Fog, rain, sand, and snow images from the DAWN dataset (columnwise, respectively).
Figure 2
Object detection model using the YOLOv8 algorithm.
Figure 3
Performance of YOLOv8’s default weights on the validation and test images of the ‘MERGED’ dataset. (a) Performance on the validation images. (b) Performance on the test images.
Figure 4
Detection performance outcomes of different versions of image augmentation.
Figure 5
Performance of training weights using the DAWN, ACDC, and MERGED datasets on their corresponding test images. (a) DAWN. (b) ACDC. (c) MERGED. (d) All together.
Figure 6
Performance of the weight trained on the MERGED dataset compared with (a) the DAWN-trained weight on DAWN test images and (b) the ACDC-trained weight on ACDC test images.
Figure 7
Precision–recall (PR) curves of various training weights on the fog test data. (a) “DAWN fog” on “DAWN fog” test data. (b) “ACDC fog” on “ACDC fog” test data. (c) “merged fog” on “DAWN fog” test data. (d) “merged fog” on “ACDC fog” test data. (e) “DAWN” on “DAWN fog” test data. (f) “ACDC” on “ACDC fog” test data. (g) “MERGED” on “DAWN fog” test data. (h) “MERGED” on “ACDC fog” test data.
Figure 8
Gradual improvement of the object detection results by incorporating more images through the feature-related data merging technique. (a) DAWN. (b) ACDC.
Figure 9
Limitations of accuracy: after adding two images, accuracy improved between (a) and (c) but dropped between (b) and (d).

