Med Image Anal. 2014 Jul;18(5):752-71. doi: 10.1016/j.media.2014.04.003. Epub 2014 Apr 24.

Body-wide hierarchical fuzzy modeling, recognition, and delineation of anatomy in medical images

Jayaram K Udupa et al. Med Image Anal. 2014 Jul.

Abstract

To make Quantitative Radiology (QR) a reality in radiological practice, computerized body-wide Automatic Anatomy Recognition (AAR) becomes essential. With the goal of building a general AAR system that is not tied to any specific organ system, body region, or image modality, this paper presents an AAR methodology for localizing and delineating all major organs in different body regions based on fuzzy modeling ideas and a tight integration of fuzzy models with an Iterative Relative Fuzzy Connectedness (IRFC) delineation algorithm. The methodology consists of five main steps: (a) gathering image data for both building models and testing the AAR algorithms from patient image sets existing in our health system; (b) formulating precise definitions of each body region and organ and delineating them following these definitions; (c) building hierarchical fuzzy anatomy models of organs for each body region; (d) recognizing and locating organs in given images by employing the hierarchical models; and (e) delineating the organs following the hierarchy. In Step (c), we explicitly encode object size and positional relationships into the hierarchy and subsequently exploit this information in object recognition in Step (d) and delineation in Step (e). Modality-independent and dependent aspects are carefully separated in model encoding. At the model building stage, a learning process is carried out for rehearsing an optimal threshold-based object recognition method. The recognition process in Step (d) starts from large, well-defined objects and proceeds down the hierarchy in a global-to-local manner. A fuzzy model-based version of the IRFC algorithm is created by naturally integrating the fuzzy model constraints into the delineation algorithm. The AAR system is tested on three body regions - thorax (on CT), abdomen (on CT and MRI), and neck (on MRI and CT) - involving a total of over 35 organs and 130 data sets (the total used for model building and testing). The training and testing data sets are of equal size in all cases except for the neck. Overall, the AAR method achieves a mean accuracy of about 2 voxels in localizing non-sparse blob-like objects and most sparse tubular objects. The delineation accuracy in terms of mean false positive and negative volume fractions is 2% and 8%, respectively, for non-sparse objects, and 5% and 15%, respectively, for sparse objects. The two object groups achieve mean boundary distance relative to ground truth of 0.9 and 1.5 voxels, respectively. Some sparse objects - venous system (in the thorax on CT), inferior vena cava (in the abdomen on CT), and mandible and nasopharynx (in the neck on MRI, but not on CT) - pose challenges at all levels, leading to poor recognition and/or delineation results. The AAR method fares quite favorably when compared with methods from the recent literature for liver, kidneys, and spleen on CT images. We conclude that separation of modality-independent from dependent aspects, organization of objects in a hierarchy, encoding of object relationship information explicitly into the hierarchy, optimal threshold-based recognition learning, and fuzzy model-based IRFC are effective concepts that allowed us to demonstrate the feasibility of a general AAR system that works in different body regions on a variety of organs and on different modalities.
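To make the global-to-local flow of Steps (d) and (e) concrete, the following is a minimal sketch of a top-down traversal of a hierarchical anatomy model, in which each object is first recognized relative to its already-located parent and then delineated. The names (AnatomyNode, recognize_object, delineate_irfc, process_region) are hypothetical placeholders and do not correspond to the paper's implementation; the recognition and IRFC steps are stubbed out.

```python
# Illustrative sketch only: hierarchical, global-to-local recognition followed
# by delineation, as described in the abstract. All names are hypothetical.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class AnatomyNode:
    name: str                                  # e.g. "TSkn", "RS", "LPS"
    fuzzy_model: object = None                 # placeholder for the object's fuzzy model
    children: List["AnatomyNode"] = field(default_factory=list)

def recognize_object(node: AnatomyNode, image, parent_pose: Optional[dict]) -> dict:
    """Locate the object's fuzzy model in the image, starting from the parent's
    pose and the learned parent-child size/position relationship (stand-in for
    the paper's optimal threshold-based recognition)."""
    return {"translation": (0, 0, 0), "scale": 1.0}  # stub pose

def delineate_irfc(node: AnatomyNode, image, pose: dict):
    """Stand-in for fuzzy-model-constrained Iterative Relative Fuzzy
    Connectedness delineation of the recognized object."""
    return None  # stub segmentation

def process_region(root: AnatomyNode, image, parent_pose: Optional[dict] = None) -> dict:
    """Walk the hierarchy from large, well-defined objects down to the leaves:
    recognize each object, delineate it, then recurse into its children."""
    pose = recognize_object(root, image, parent_pose)
    segmentation = delineate_irfc(root, image, pose)
    results = {root.name: (pose, segmentation)}
    for child in root.children:
        results.update(process_region(child, image, parent_pose=pose))
    return results
```

The key design point carried over from the abstract is that the parent's pose (not a global search) seeds each child's recognition, which is what makes the procedure global-to-local.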

Keywords: Anatomy modeling; Fuzzy connectedness; Fuzzy models; Image segmentation; Object recognition.


Figures

Figure 1
A schematic representation of the AAR schema. The three main steps of model building, object recognition, and object delineation are explained in Sections 2, 3, and 4.
Figure 2
(a) Hierarchy for whole body WB. (b) Hierarchy for Thorax. TSkn: Outer boundary of thoracic skin as an object; RS: Respiratory System; TSk: Thoracic Skeleton; IMS: Internal Mediastinum; RPS, LPS: Right & Left Pleural Spaces; TB: Trachea & Bronchi; E: Esophagus; PC: Pericardium; AS, VS: Arterial & Venous Systems. (c) Hierarchy for Abdomen. ASkn: Outer boundary of abdominal skin; ASk: Abdominal Skeleton; Lvr: Liver; ASTs: Abdominal Soft Tissues; SAT & VAT: Subcutaneous and Visceral Adipose Tissues; Kd: Kidneys; Spl: Spleen; Msl: Muscle; AIA: Aorta and Iliac arteries; IVC: Inferior Vena Cava; RKd & LKd: Right and Left Kidneys. (d) Hierarchy for Neck. NSkn: Outer boundary of skin in neck; A&B: Air & Bone; FP: Fat Pad; NSTs: Soft Tissues in neck; Mnd: Mandible; Phrx: Pharynx; Tnsl: Tonsils; Tng: Tongue; SP: Soft Palate; Ad: Adenoid; NP & OP: Nasopharynx and Oropharynx; RT & LT: Right and Left Tonsils.
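As a reading aid for the Thorax hierarchy of Figure 2(b), the caption's abbreviations can be arranged as a small tree. The grouping of children under RS and IMS below is one plausible reading of the figure, not a verbatim copy of it.

```python
# Nested-dict rendering of the Thorax hierarchy of Figure 2(b), using the
# abbreviations defined in the caption. The exact parent-child edges under
# RS and IMS are an assumption made for illustration.
thorax_hierarchy = {
    "TSkn": {                      # outer boundary of thoracic skin (root)
        "TSk": {},                 # thoracic skeleton
        "RS": {                    # respiratory system
            "RPS": {}, "LPS": {},  # right and left pleural spaces
            "TB": {},              # trachea and bronchi
        },
        "IMS": {                   # internal mediastinum
            "E": {},               # esophagus
            "PC": {},              # pericardium
            "AS": {}, "VS": {},    # arterial and venous systems
        },
    },
}
```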
Figure 3
Organs from one training set for each body region are displayed via surface rendering. For each row, objects in one picture are listed as {..}. Top row: Thorax. 3rd picture: {RPS, TB, E, AS, VS, PC}. Middle row: Abdomen. 3rd picture: {ASk, Lvr, LKd, IVC, AIA, Spl, SAT, Msl}. Bottom row: Neck. 5th picture: {Mnd, Tng, NP, OP, Ad, FP, Tnsl}.
Figure 4
Volume renditions of fuzzy models of objects in different combinations for the three body regions. For each row, objects in one picture are listed as {..}. Top row: Thorax. 5th picture: {LPS, AS, TB}. Middle row: Abdomen. 3rd picture: {ASk, Lvr, LKd, RKd, AIA, IVC, Spl}. Bottom row: Neck: 5th picture: {Mnd, Tng, NP, OP, Ad, FP}.
Figure 5
Volume renditions of fuzzy models created without (Rows 1 and 3) and with (Rows 2 and 4) orientation alignment for several non-sparse (Rows 1 and 2) and sparse (Rows 3 and 4) objects. Row 1: PC, RPS, LKd, Lvr. Row 3: AS, E, AIA, IVC, TB.
Figure 6
Sample recognition results for Thorax for the alignment strategy shown in (10). Cross sections of the model are shown overlaid on test image slices. Left to right: TSkn, TSk, LPS, TB, RPS, E, PC, AS, VS.
Figure 7
Sample recognition results for Abdomen for the alignment strategy shown in (10). Cross sections of the model are shown overlaid on test image slices. Left to right: ASkn, ASk, SAT, Lvr, RKd, LKd, Spl, Msl, AIA, IVC.
Figure 8
Sample recognition results for Neck for the alignment strategy shown in (10). Cross sections of the model are shown overlaid on test image slices. Left to right: NSkn, FP, Mnd, NP (note that NP is a combination of nasal cavity and nasopharynx), Ad, OP, RT, LT, Tng, SP.
Figure 9
The hierarchy used (left) and sample recognition results for DS4 (right) with model cross section overlaid on test image slices for ASkn and SAT.
Figure 10
Sample delineation results for Thorax. Left to Right: TSkn, IMS, LPS, AS, RPS, PC, TB, E.
Figure 11
Sample delineation results for Abdomen. Left to Right: ASkn, SAT, Lvr, SAT, RKd, LKd, Spl, Msl, AIA.
Figure 12
Sample delineation results for Neck. Left to Right: NSkn, FP, NP, OP, RT, LT, Tng, SP, Ad.
Figure 13
Sample delineation results for DS4. ASkn (left) and SAT (right).
