A Composite Model of Wound Segmentation Based on Traditional Methods and Deep Neural Networks

Fangzhao Li et al. Comput Intell Neurosci. 2018 May 31;2018:4149103. doi: 10.1155/2018/4149103. eCollection 2018.

Erratum in

Abstract

Wound segmentation plays an important supporting role in wound observation and wound healing. Current image segmentation methods fall into two categories: those based on traditional image processing and those based on deep neural networks. Traditional methods rely on hand-designed image features and therefore do not need large amounts of labeled data. Methods based on deep neural networks, in contrast, extract image features effectively without manual feature design, but they require large amounts of training data. Combining the advantages of both, this paper presents a composite model for wound segmentation. The model first applies the skin-with-wound detection algorithm designed in this paper to highlight image features; the preprocessed images are then segmented by a deep neural network, and semantic corrections are finally applied to the segmentation results. The model shows good performance in our experiments.


Figures

Figure 1
One of our images. Compared with the images shown in paper [10], the backgrounds of our images are more complex.
Figure 2
The architecture of our composite model. Raw images are preprocessed by the skin-with-wound detection algorithm to remove the environmental background. The training data, composed of the preprocessed images and the raw images, are then normalized, cropped, and deformed. The DNN is trained and used to segment the test data; finally, the segmented results are corrected semantically.
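The three stages described in the Figure 2 caption can be sketched as a simple composition. The stage functions below are placeholders for the paper's components (skin-with-wound detection, DNN segmentation, semantic correction), not their actual implementations:

```python
def segment_wound(raw_image, detect_skin, dnn_segment, semantic_correct):
    """Composite pipeline of Figure 2. The three callables are
    placeholders for the paper's stages; any callables with
    compatible input/output can be plugged in."""
    preprocessed = detect_skin(raw_image)   # remove environmental background
    mask = dnn_segment(preprocessed)        # segment with the trained DNN
    return semantic_correct(mask)           # apply semantic corrections
```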
Figure 3
The green line marks the wound, and the red line marks a visually similar background region that can easily be misjudged as wound. Removing such regions before segmentation simplifies the task.
Figure 4
RGB color space of an image and its Cr channel in the YCbCr color space.
Figure 5
Skin detection with fixed threshold of Cr channel in YCbCr color space.
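Fixed-threshold skin detection on the Cr channel, as in Figure 5, can be sketched as follows. The BT.601 conversion formula is standard; the Cr range [133, 173] is a commonly used default for skin detection, not necessarily the threshold used in the paper:

```python
import numpy as np

def rgb_to_cr(image):
    """Cr channel (BT.601) of an 8-bit RGB image, H x W x 3."""
    r = image[..., 0].astype(np.float64)
    g = image[..., 1].astype(np.float64)
    b = image[..., 2].astype(np.float64)
    return 0.5 * r - 0.419 * g - 0.081 * b + 128.0

def skin_mask_fixed(image, lo=133, hi=173):
    """Binary skin mask from a fixed Cr range (assumed default [133, 173])."""
    cr = rgb_to_cr(image)
    return (cr >= lo) & (cr <= hi)
```

A skin-toned pixel such as RGB (200, 120, 100) falls inside the range, while saturated green does not, which is the behavior the fixed-threshold detector relies on.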
Figure 6
Flow of the skin-with-wound detection algorithm.
Figure 7
Cr histogram of an image without a complex background; the image itself is shown in the top right corner.
Figure 8
Skin detection with dynamic threshold of Cr channel in YCbCr color space.
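A dynamic threshold, as in Figure 8 and Algorithm 1, can be derived from the Cr histogram. The exact rule of the paper's Algorithm 1 is not reproduced here; this sketch assumes one plausible approach: find the dominant histogram peak in a rough skin range and widen a window around it until bin counts fall below a fraction of the peak count (the parameters `peak_lo`, `peak_hi`, and `frac` are illustrative assumptions):

```python
import numpy as np

def dynamic_cr_thresholds(cr, peak_lo=120, peak_hi=180, frac=0.1):
    """Pick [lo, hi] Cr thresholds around the dominant histogram peak.
    Sketch only; the paper's Algorithm 1 may use a different rule."""
    hist, _ = np.histogram(cr, bins=256, range=(0, 256))
    # restrict the peak search to a plausible skin range
    peak = peak_lo + int(np.argmax(hist[peak_lo:peak_hi]))
    cutoff = frac * hist[peak]
    lo = peak
    while lo > 0 and hist[lo - 1] >= cutoff:
        lo -= 1
    hi = peak
    while hi < 255 and hist[hi + 1] >= cutoff:
        hi += 1
    return lo, hi
```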
Figure 9
Result of the skin-with-wound detection.
Figure 10
The relationship between the skin and the wound.
Figure 11
Schematic of the relabeled regions in the skin with wound detection.
Figure 12
Image deformation and cropping. Left: the raw image; middle: the deformed images; right: the cropping results of the deformed images.
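The cropping half of the Figure 12 augmentation can be sketched as random square crops; the deformation step and the paper's exact crop sizes and counts are not specified here, so all parameters below are illustrative:

```python
import numpy as np

def random_crops(image, size, n, rng=None):
    """Sample n random square crops of side `size` from an H x W x C image.
    Crop-based augmentation sketch; deformation is omitted."""
    if rng is None:
        rng = np.random.default_rng(0)
    h, w = image.shape[:2]
    crops = []
    for _ in range(n):
        y = rng.integers(0, h - size + 1)  # top-left corner, inclusive range
        x = rng.integers(0, w - size + 1)
        crops.append(image[y:y + size, x:x + size])
    return crops
```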
Figure 13
Schematic of our deep neural networks framework. In the figure, the white box represents the convolutional layers, the green arrow represents the upsampling, and the blue arrow represents the fusion of the data. The red number below each layer represents the number of output feature channels.
Figure 14
Wound images and labeled images in our data.
Figure 15
(a) Schematic of the foreground and background marking: the red curve is the foreground marker, and the green curve is the background marker. (a) also shows the interactive interface of the software; the labeling staff performs rough marking on this interface, and the marked curves can be corrected repeatedly. (b) Results of applying the watershed algorithm based on the marked gradient map of the image; the red region is the wound.
Figure 16
Test results for different models. The abscissa is the number of steps in training and the ordinate is the IoU obtained by testing the test data. One step is the process of dealing with a minibatch. RawIoU-100 represents the IoU of the original networks with DM = 100%. PreIoU-100 represents the IoU of the pretreatment model with DM = 100%. PreAndPostIoU-100 represents the IoU of the complete model with DM = 100%.
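The IoU metric used in Figure 16 is the standard intersection over union of a predicted binary mask and the ground truth, which can be computed as:

```python
import numpy as np

def iou(pred, gt):
    """Intersection over union of two binary masks (wound = True).
    Returns 1.0 when both masks are empty."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0
    return np.logical_and(pred, gt).sum() / union
```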
Figure 17
Segmentation results. Left: ground truth; right: the results of our model.
Algorithm 1
Determine the dynamic thresholds.


References

    1. Bhandari A. K., Kumar A., Chaudhary S., Singh G. K. A novel color image multilevel thresholding based segmentation using nature inspired optimization algorithms. Expert Systems with Applications. 2016;63:112–133. doi: 10.1016/j.eswa.2016.06.044.
    2. Veredas F., Mesa H., Morente L. Binary tissue classification on wound images with neural networks and Bayesian classifiers. IEEE Transactions on Medical Imaging. 2010;29(2):410–427. doi: 10.1109/TMI.2009.2033595.
    3. Yadav M. K., Manohar D. D., Mukherjee G., Chakraborty C. Segmentation of chronic wound areas by clustering techniques using selected color space. Journal of Medical Imaging and Health Informatics. 2013;3(1):22–29. doi: 10.1166/jmihi.2013.1124.
    4. Long J., Shelhamer E., Darrell T. Fully convolutional networks for semantic segmentation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '15); June 2015; Boston, Mass, USA. IEEE; pp. 3431–3440.
    5. Wang C., Yan X., Smith X., et al. A unified framework for automatic wound segmentation and analysis with deep convolutional neural networks. Proceedings of the 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC '15); August 2015; Milan, Italy. pp. 2415–2418.
