A Lightweight Human Fall Detection Network
- PMID: 38005456
- PMCID: PMC10674212
- DOI: 10.3390/s23229069
Abstract
The aging of the population has intensified concern for the health of the elderly, among whom falls have become a predominant health threat. The YOLOv5 family represents the state of the art for human fall detection, but these models remain computationally demanding, difficult to deploy on embedded hardware, and vulnerable to occlusion of the target. To address these limitations, we introduce a lightweight fall detection method named CGNS-YOLO. Our method reconfigures the neck of YOLOv5s with the GSConv and GDCN modules to reduce model size, cut the floating-point computation required for feature-channel fusion, and strengthen feature extraction, thereby improving hardware adaptability. We also integrate a normalization-based attention module (NAM) that emphasizes salient fall-related features and suppresses less relevant information, which improves detection accuracy. Adopting the SCYLLA Intersection over Union (SIoU) loss function gives the model faster convergence and higher detection precision. We evaluated the model on the Multicam dataset and the Le2i Fall Detection dataset. Detection accuracy improves by 1.2% over the standard YOLOv5s framework, while the parameter count falls by 20.3% and floating-point operations by 29.6%. Case analysis and comparative experiments confirm the effectiveness and advantages of the method.
Keywords: GDCN module; GSConv module; NAM; SIoU; YOLOv5; fall detection.
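
To make the attention mechanism concrete, the sketch below shows a channel-attention block in the spirit of the normalization-based attention module (NAM) named in the abstract, where BatchNorm scale factors are reused as channel-importance weights. This is a minimal PyTorch illustration of that common formulation, not the authors' CGNS-YOLO code; the class name NAMChannelAttention, the channel count, and the test tensor are illustrative.

import torch
import torch.nn as nn

class NAMChannelAttention(nn.Module):
    """Channel attention that reuses BatchNorm scale factors as importance weights.

    A minimal sketch of the normalization-based attention idea; the authors'
    exact integration into the YOLOv5s neck may differ.
    """

    def __init__(self, channels: int):
        super().__init__()
        self.bn = nn.BatchNorm2d(channels, affine=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        residual = x
        x = self.bn(x)
        # Normalize the learned BN scale factors so that channels the network
        # already scales up receive proportionally larger attention weights.
        weight = self.bn.weight.abs() / self.bn.weight.abs().sum()
        x = x * weight.view(1, -1, 1, 1)
        # Gate the original features with a sigmoid of the re-weighted response.
        return residual * torch.sigmoid(x)

if __name__ == "__main__":
    # Example: apply the module to a dummy feature map from a neck stage.
    feats = torch.randn(2, 256, 20, 20)
    att = NAMChannelAttention(channels=256)
    print(att(feats).shape)  # torch.Size([2, 256, 20, 20])

In a slim-neck design of this kind, such a block would typically be placed after a feature-fusion stage so that channels carrying fall-relevant cues are amplified before detection heads consume them.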
Conflict of interest statement
The authors declare no conflict of interest.
