J Digit Imaging. 2020 Dec;33(6):1514-1526. doi: 10.1007/s10278-020-00370-w.

DicomAnnotator: a Configurable Open-Source Software Program for Efficient DICOM Image Annotation

Qifei Dong et al. J Digit Imaging. 2020 Dec.

Abstract

Modern, supervised machine learning approaches to medical image classification, image segmentation, and object detection usually require many annotated images. As manual annotation is usually labor-intensive and time-consuming, a well-designed software program can aid and expedite the annotation process. Ideally, this program should be configurable for various annotation tasks, enable efficient placement of several types of annotations on an image or a region of an image, attribute annotations to individual annotators, and be able to display Digital Imaging and Communications in Medicine (DICOM)-formatted images. No current open-source software program fulfills these requirements. To fill this gap, we developed DicomAnnotator, a configurable open-source software program for DICOM image annotation. This program fulfills the above requirements and provides user-friendly features to aid the annotation process. In this paper, we present the design and implementation of DicomAnnotator. Using spine image annotation as a test case, our evaluation showed that annotators with various backgrounds can use DicomAnnotator to annotate DICOM images efficiently. DicomAnnotator is freely available at https://github.com/UW-CLEAR-Center/DICOM-Annotator under the GPLv3 license.
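
As an illustrative sketch (not from the paper), the core requirement of displaying DICOM-formatted images for annotation can be shown in a few lines of Python. The example below assumes the pydicom and NumPy packages, which are not confirmed by this abstract, and the file name is hypothetical; it reads a DICOM file and converts its pixel data to the kind of 8-bit grayscale array an annotation canvas would display.

    import numpy as np
    import pydicom

    def load_dicom_pixels(path: str) -> np.ndarray:
        """Read a DICOM file and return its pixel data as an 8-bit grayscale array."""
        ds = pydicom.dcmread(path)
        pixels = ds.pixel_array.astype(np.float32)

        # Apply the DICOM rescale slope/intercept when present (common for CT).
        slope = float(getattr(ds, "RescaleSlope", 1.0))
        intercept = float(getattr(ds, "RescaleIntercept", 0.0))
        pixels = pixels * slope + intercept

        # Normalize to the 0-255 range for display on an annotation canvas.
        pixels -= pixels.min()
        if pixels.max() > 0:
            pixels = pixels / pixels.max() * 255.0
        return pixels.astype(np.uint8)

    # Example (hypothetical file name):
    # image = load_dicom_pixels("example_spine.dcm")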

Keywords: DICOM; Image annotation; Machine learning; Open source; Software design.

Conflict of interest statement

Qifei Dong reports grants from NIH/NIAMS, during the conduct of the study.

Dr. Luo reports grants from NIH/NIAMS, during the conduct of the study.

Dr. Haynor reports grants from NIH/NIAMS, during the conduct of the study.

Dr. Linnau reports grants from Siemens Healthineers, personal fees from Siemens Healthineers, and other from Cambridge Press, outside the submitted work.

Dr. Jarvik reports grants from NIH/NIAMS during the conduct of the study; royalties from Springer Publishing as a book co-editor; travel reimbursement from the GE-Association of University Radiologists Radiology Research Academic Fellowship (GERRAF) for the Faculty Board of Review; and royalties from Wolters Kluwer/UpToDate as a chapter author.

Dr. Cross reports grants from NIH/NIAMS, during the conduct of the study, personal fees from Philips Medical, and other from GE Medical, outside the submitted work.

All other authors report no conflict of interest.

Figures

Fig. 1
Our approach to annotating multiple regions of interest in an image. The subfigures are as follows: (a) the “L5” region identifier is selected, (b) the annotator provides a bounding polygon for L5 and then inputs the labels for L5 (no hardware, fractured), (c) the “L4” identifier is automatically selected after completing the last labels for L5, and (d) the annotator provides a bounding polygon for L4 and then inputs the labels for L4 (no hardware, normal). This process continues until image annotation is complete
Fig. 2
The steps to using DicomAnnotator to annotate an image dataset
Fig. 3
The login page demonstrating the username field, a dropdown to select a classification system, and radio buttons to determine the order of region identifiers shown on DicomAnnotator’s main page
Fig. 4
The main page of DicomAnnotator, which is divided into 12 panels. Panel 1: radio buttons used to select an operation mode; panel 2: a group of buttons that allow the user to move between images, remove annotations, reset the image display, manually save annotations, and display help text; panel 3: text box showing details about the currently displayed image and the annotation process; panel 4: buttons used to mark an image as unreadable when it is of low or non-diagnostic quality and to horizontally flip the image; panel 5: buttons used to flag/unflag an image for later review and to navigate through the flagged images; panel 6: commenting system where comments from any user are displayed and new comments can be added; panel 7: indicator of whether new annotations have been stored in the result file; panel 8: canvas displaying an image; panel 9: radio buttons for assigning an image label; panel 10: annotation table that has been configured to apply multiple annotations to each region of interest; panel 11: buttons used to toggle off the annotated points in the image and to invert the image’s grayscale; and panel 12: text boxes showing the identifiers of the regions that are not assigned the default region label (e.g., “Normal”)
Fig. 5
The user’s interaction with the image in the annotation process. (a) The target vertebral body for annotation is identified by the user using the default display parameters. (b) The display window and level, zooming, and panning are employed to optimize visualization of the target vertebral body. (c) The boundary of the target vertebral body is marked by placing points at its four corners
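
The window and level adjustment referenced in subfigure (b) corresponds to the standard linear DICOM display window. The sketch below illustrates that mapping in Python; it is not DicomAnnotator's implementation, and the function name and example window values are assumptions.

    import numpy as np

    def apply_window(pixels: np.ndarray, center: float, width: float) -> np.ndarray:
        """Linearly map raw pixel values to 8-bit display values using a window center/width."""
        lower = center - width / 2.0
        upper = center + width / 2.0
        windowed = np.clip(pixels.astype(np.float32), lower, upper)
        # Guard against a zero-width window before rescaling to 0-255.
        return ((windowed - lower) / max(upper - lower, 1e-6) * 255.0).astype(np.uint8)

    # Example call with illustrative bone-window values for spine CT:
    # display = apply_window(ct_pixels, center=400, width=1800)
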
Fig. 6
The final annotations of an example spine image
Fig. 7
The main page in the “View Only” mode, displaying the annotations with the manipulation tools grayed out to prevent accidental alteration of the annotations. The user can return to the “Edit” mode by clicking the radio button in the upper left
Fig. 8
The confirmation dialog that is displayed when switching from the “View Only” mode to the “Edit” mode
Fig. 9
An example of comments displayed in the comment panel for an image
Fig. 10
The usability evaluation process
Fig. 11
The configuration file for the spine CT image annotation task in JSON format
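
The exact schema of the configuration file shown in Fig. 11 is not reproduced in this text. As a hypothetical sketch of how such a JSON file might be read, the file name, keys, and default values below are assumptions based on the region identifiers and labels mentioned in Fig. 1, not the program's actual format.

    import json

    # Hypothetical configuration file name; the real file is shown in Fig. 11.
    with open("spine_ct_config.json") as f:
        config = json.load(f)

    # Hypothetical fields: the region identifiers (e.g., vertebral levels) and
    # the label choices offered for each region of interest.
    region_ids = config.get("regionIdentifiers", ["L1", "L2", "L3", "L4", "L5"])
    region_labels = config.get("regionLabels", ["normal", "fractured", "no hardware"])
    print(region_ids, region_labels)
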
Fig. 12
DicomAnnotator displaying an example sagittal lumbar spine CT image and its annotations
