Context-Aware Multi-Scale Aggregation Network for Congested Crowd Counting

Liangjun Huang et al.

Sensors (Basel). 2022 Apr 22;22(9):3233. doi: 10.3390/s22093233

Abstract

In this paper, we propose a context-aware multi-scale aggregation network named CMSNet for dense crowd counting, which effectively uses contextual and multi-scale information to perform crowd density estimation. To achieve this, a context-aware multi-scale aggregation module (CMSM) is designed. Specifically, the CMSM consists of a multi-scale aggregation module (MSAM) and a context-aware module (CAM). The MSAM is used to obtain multi-scale crowd features. The CAM is used to enrich the extracted multi-scale crowd features with more context information so that crowds are recognized efficiently. We conduct extensive experiments on three challenging datasets, i.e., ShanghaiTech, UCF_CC_50, and UCF-QNRF, and the results show that our model yields compelling performance against other state-of-the-art methods, which demonstrates the effectiveness of our method for congested crowd counting.
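The abstract describes CMSNet only at the block level. The following is a minimal sketch of the decoder it describes, assuming PyTorch, placeholder 3x3-convolution branches standing in for the CAM and MSAM, element-wise addition as the fusion rule, and the three CMSMs stacked sequentially; none of these internals are specified in this excerpt.

```python
import torch
import torch.nn as nn

class CMSM(nn.Module):
    """Context-aware multi-scale aggregation module (sketch).

    The CAM and MSAM branches are stand-in 3x3 convolutions here;
    only the parallel-branch layout is taken from the paper, and the
    sum fusion is an assumption.
    """
    def __init__(self, channels):
        super().__init__()
        self.cam = nn.Conv2d(channels, channels, 3, padding=1)   # placeholder CAM
        self.msam = nn.Conv2d(channels, channels, 3, padding=1)  # placeholder MSAM
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # run both branches on the same input and fuse by addition (assumed)
        return self.relu(self.cam(x) + self.msam(x))

class CMSNetDecoder(nn.Module):
    """Decoder: three CMSMs (assumed stacked) followed by a 1x1 density head."""
    def __init__(self, channels=512):
        super().__init__()
        self.blocks = nn.Sequential(*[CMSM(channels) for _ in range(3)])
        self.head = nn.Conv2d(channels, 1, kernel_size=1)  # density map

    def forward(self, encoder_feats):
        return self.head(self.blocks(encoder_feats))

# usage: encoder features -> density map; summing the map gives the count
feats = torch.randn(1, 512, 48, 64)            # hypothetical encoder output
density = CMSNetDecoder(512)(feats)
print(density.shape, density.sum().item())     # torch.Size([1, 1, 48, 64]), estimated count
```

Summing the predicted density map is the standard readout for density-based crowd counting, which is why the head reduces the features to a single channel.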

Keywords: convolutional neural network; dense crowd counting; multi-scale feature learning.

Conflict of interest statement

The authors declare no conflict of interest.

Figures

Figure 1
Visual results of different crowd scenes. Image (a) shows the scale variation caused by distance from the camera. Image (b) shows the head-occlusion phenomenon caused by crowd congestion. Images (c,d) show the effect of background objects that resemble human heads.
Figure 2
Overview of CMSNet. An image is received by the encoder and converted into deep features; the decoder then converts these deep features into a density map for crowd counting. The decoder is composed of three context-aware multi-scale aggregation modules (CMSMs), and within each CMSM a CAM and an MSAM are connected in parallel to generate feature maps at each channel level.
Figure 3
Visual results of the MSAM with different dilation rates on SHHA. From left to right: input image, ground truth, {1,2,3,4}, {1,2,3,5}, {1,2,3,8}, {1,4,7,9}, {1,2,3,6}.
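Figure 3 compares candidate dilation-rate sets for the MSAM. Below is a minimal sketch of such a module, assuming one 3x3 dilated-convolution branch per rate whose outputs are concatenated and projected back by a 1x1 convolution; only the rate sets come from the figure, the branch layout is an assumption.

```python
import torch
import torch.nn as nn

class MSAM(nn.Module):
    """Multi-scale aggregation module (sketch).

    Assumes one 3x3 dilated-convolution branch per rate, with the branch
    outputs concatenated and projected back by a 1x1 convolution. The rate
    set {1, 2, 3, 6} is one of the configurations compared in Figure 3.
    """
    def __init__(self, channels, rates=(1, 2, 3, 6)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=3,
                      padding=r, dilation=r)   # padding = rate keeps spatial size
            for r in rates
        )
        self.project = nn.Conv2d(channels * len(rates), channels, kernel_size=1)

    def forward(self, x):
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

# usage: the spatial size is preserved for every dilation rate
x = torch.randn(1, 256, 48, 64)
print(MSAM(256)(x).shape)   # torch.Size([1, 256, 48, 64])
```

Padding each branch by its dilation rate keeps the spatial size constant, so the branch outputs can be concatenated channel-wise without resampling.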
Figure 4
Visual results of the effect of the GRF. From left to right: input image, ground truth, without GRF, and with GRF.
Figure 5
Ablation study of the CAM on SHHA. From left to right: input image, ground truth, result without CAM, and result with CAM.
Figure 6
Ablation study of the MSAM on SHHA. From left to right: input image, ground truth, result without MSAM, and result with MSAM.


