2022 Nov 10;22(22):8691. doi: 10.3390/s22228691.

TSD-Truncated Structurally Aware Distance for Small Pest Object Detection

Xiaowen Huang et al. Sensors (Basel).

Abstract

Deep learning has been successfully applied across many domains and has received considerable research attention, making it possible to detect crop pests efficiently and intelligently. Nevertheless, detecting pest objects remains challenging due to the lack of discriminative features and pests' aggregation behavior. Intersection over union (IoU)-based object detection has attracted much attention, and IoU has become the most widely used metric. However, IoU is sensitive to localization bias for small objects; furthermore, IoU-based losses only work when the ground-truth and predicted bounding boxes intersect, and they lack awareness of different geometrical structures. We therefore propose a simple and effective metric, truncated structurally aware distance (TSD), together with a loss function based on it. First, the distance between two bounding boxes is defined as the standardized Chebyshev distance. We then propose a new regression loss function, the truncated structurally aware distance loss, which considers the different geometrical structure relationships between two bounding boxes and whose truncated function is designed to impose different penalties accordingly. To test the effectiveness of our method, we apply it to the Pest24 small-object pest dataset; the results show that the mAP is 5.0% higher than that of other detection methods.
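The abstract defines the core distance as a standardized Chebyshev distance between two bounding boxes. The sketch below is illustrative only: normalizing the Chebyshev center distance by the diagonal of the smallest enclosing box is an assumption for demonstration, as the abstract does not specify the paper's exact standardization.

```python
import math

def tsd_sketch(box_a, box_b):
    """Illustrative standardized Chebyshev distance between two boxes
    given as (x1, y1, x2, y2).

    Normalizing by the enclosing box's diagonal is an assumption made
    here for illustration; the paper's standardization may differ.
    """
    # Centers of the two boxes.
    ax, ay = (box_a[0] + box_a[2]) / 2.0, (box_a[1] + box_a[3]) / 2.0
    bx, by = (box_b[0] + box_b[2]) / 2.0, (box_b[1] + box_b[3]) / 2.0
    # Chebyshev (L-infinity) distance between the centers.
    cheb = max(abs(ax - bx), abs(ay - by))
    # Smallest box enclosing both A and B, used to standardize the scale.
    ex1, ey1 = min(box_a[0], box_b[0]), min(box_a[1], box_b[1])
    ex2, ey2 = max(box_a[2], box_b[2]), max(box_a[3], box_b[3])
    diag = math.hypot(ex2 - ex1, ey2 - ey1)
    return cheb / diag if diag > 0 else 0.0
```

Unlike IoU, this distance is nonzero (and informative) even when the two boxes do not intersect, which is one of the motivations the abstract gives for moving away from purely IoU-based losses.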

Keywords: faster R-CNN; pest detection; small object detection; truncated structurally aware distance; truncated structurally aware loss.


Conflict of interest statement

The authors declare no conflict of interest.

Figures

Figure 1
The graphical description of different metrics. Box A denotes the ground truth; box B denotes the predicted bounding box. $v=\frac{4}{\pi^{2}}\left(\arctan\frac{w_A}{h_A}-\arctan\frac{w_B}{h_B}\right)^{2}$, $\alpha=\frac{v}{(1-\mathrm{IoU})+v}$.
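The caption's formulas are the aspect-ratio consistency term of CIoU: $v$ measures the mismatch between the two boxes' width-height ratios, and $\alpha$ weights it against the IoU term. A minimal computation directly from those formulas:

```python
import math

def aspect_ratio_terms(w_a, h_a, w_b, h_b, iou):
    """Compute v and alpha as given in the Figure 1 caption
    (the CIoU aspect-ratio consistency term)."""
    # v in [0, 1]: squared difference of the boxes' arctan aspect ratios.
    v = (4.0 / math.pi ** 2) * (math.atan(w_a / h_a) - math.atan(w_b / h_b)) ** 2
    # alpha trades off v against the IoU term.
    alpha = v / ((1.0 - iou) + v)
    return v, alpha
```

When the two boxes have the same aspect ratio, both $v$ and $\alpha$ vanish, so the term only penalizes shape mismatch, not scale.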
Figure 2
A comparison between IoU metrics and TSD metrics on different positional relationships. (a,b) represent the containment relationship, and (c,d) represent the separation relationship.
Figure 3
A comparison between IoU-deviation curve and TSD-deviation curve in four different scenarios. The abscissa value denotes the number of pixels of deviation between the center points of A and B, and the ordinate value denotes the corresponding metric value. On the left is the IoU-deviation curve, and on the right is the TSD-deviation curve.
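The IoU-deviation curves reflect a property that can be reproduced numerically: the same pixel deviation costs a small box far more IoU than a large one, which is the small-object sensitivity the paper targets. A minimal illustration (the box sizes below are chosen arbitrarily):

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def shifted(box, dx):
    """Translate a box horizontally by dx pixels."""
    return (box[0] + dx, box[1], box[2] + dx, box[3])

small = (0, 0, 8, 8)     # an 8x8 box, roughly small-pest scale
large = (0, 0, 64, 64)   # a 64x64 box for comparison
# The same 4-pixel center deviation costs the small box far more IoU.
iou_small = iou(small, shifted(small, 4))   # 0.333...
iou_large = iou(large, shifted(large, 4))   # ~0.882
```

This asymmetry is why an IoU-based loss gives small objects a much steeper penalty for the same localization error, motivating a distance-based alternative such as TSD.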
Figure 4
The graphical description of each stage loss in the TSD Loss function.
Figure 5
Network architecture. We use ResNet-50-FPN as the backbone of the two-stage network; the proposed TSD and TSD loss have been used in the RPN.
Figure 6
Samples of ground-truth boxes labeled in the dataset. Not all instances in the data are labeled; there are many omissions in each image, resulting in a low mAP value.
Figure 7
AP (IoU = 0.5) for each category on Pest24 test set.
Figure 8
The AP curve (left) and training loss curve (right) on the Pest24 dataset, showing the results of different detectors (the proposed method is marked with *).
Figure 9
The visualization results based on IoU metrics and TSD metrics. The first row represents the ground truth of the dataset, and the second row represents the results under IoU, while the third row represents the results under the proposed approach.
