MarkIt: A Collaborative Artificial Intelligence Annotation Platform Leveraging Blockchain For Medical Imaging Research

Jan Witowski et al. Blockchain Healthc Today. 2021 Jun 22;4. doi: 10.30953/bhty.v4.176. eCollection 2021.

Abstract

Current research on medical image processing relies heavily on the amount and quality of input data. Specifically, supervised machine learning methods require well-annotated datasets. A lack of annotation tools limits high-volume processing and the construction of scaled systems with a proper reward mechanism. We developed MarkIt, a web-based tool for collaborative annotation of medical imaging data using artificial intelligence and blockchain technologies. Our platform handles both Digital Imaging and Communications in Medicine (DICOM) and non-DICOM images, and allows users to annotate them efficiently for classification and object detection tasks. MarkIt can accelerate the annotation process and keep track of user activities to calculate a fair reward. A proof-of-concept experiment was conducted with three fellowship-trained radiologists, each of whom annotated 1,000 chest X-ray studies for multi-label classification. We calculated the inter-rater agreement and estimated the value of the dataset to distribute the reward to annotators using a cryptocurrency. We hypothesize that MarkIt makes the typically arduous annotation task more efficient. In addition, MarkIt can serve as a platform to evaluate the value of data and trade the annotation results in a more scalable manner in the future. The platform is publicly available for testing on https://markit.mgh.harvard.edu.
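
The abstract does not name the agreement statistic used. Purely as an illustration, the sketch below computes Fleiss' kappa for one binary label rated by three annotators, which matches the three-reader setup described; the data and names are hypothetical, not taken from the study.

import numpy as np

def fleiss_kappa(counts: np.ndarray) -> float:
    """Fleiss' kappa for an (items x categories) matrix of rating counts.
    counts[i, j] = number of raters assigning item i to category j;
    every row must sum to the same number of raters n."""
    n = counts.sum(axis=1)[0]                    # raters per item (constant)
    p_j = counts.sum(axis=0) / counts.sum()      # overall category proportions
    p_i = ((counts ** 2).sum(axis=1) - n) / (n * (n - 1))  # per-item agreement
    p_bar, p_e = p_i.mean(), (p_j ** 2).sum()    # observed vs. chance agreement
    return float((p_bar - p_e) / (1 - p_e))

# Hypothetical data: 3 radiologists rate 4 studies for one binary label;
# columns are [label absent, label present].
ratings = np.array([[3, 0], [2, 1], [0, 3], [1, 2]])
print(f"Fleiss' kappa: {fleiss_kappa(ratings):.3f}")

In a multi-label study such as the one described, the same calculation would simply be repeated once per label.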

Keywords: artificial intelligence; blockchain; data annotation; learning from crowds; rewarding system.


Conflict of interest statement

The authors declare no potential conflicts of interest.

Figures

Fig. 1
High-level data flow of MarkIt. Blockchain ledger storage and access are kept separate from the regular database. The artificial intelligence interface allows training new models on the gathered annotations and making annotation suggestions to speed up the workflow.
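
Fig. 1 keeps the blockchain ledger separate from the regular database. As a rough sketch of how user activity could be tracked on such an append-only, hash-chained ledger (this is not MarkIt's actual implementation; all class and field names are hypothetical):

import hashlib, json, time

def _hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

class ActivityLedger:
    """Minimal hash-chained log of annotation events (illustration only)."""

    def __init__(self):
        self.chain = [{"index": 0, "event": "genesis",
                       "prev": "0" * 64, "ts": time.time()}]

    def record(self, annotator: str, study_id: str, label: str) -> dict:
        # Each new block commits to the hash of its predecessor,
        # making the recorded activity history tamper-evident.
        block = {"index": len(self.chain),
                 "event": {"annotator": annotator, "study": study_id,
                           "label": label},
                 "prev": _hash(self.chain[-1]),
                 "ts": time.time()}
        self.chain.append(block)
        return block

    def verify(self) -> bool:
        """Check that every block references the hash of its predecessor."""
        return all(b["prev"] == _hash(p)
                   for p, b in zip(self.chain, self.chain[1:]))

ledger = ActivityLedger()
ledger.record("annotator_a", "cxr_0001", "cardiomegaly")
assert ledger.verify()
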
Fig. 2
Main module for image annotation, combining basic DICOM viewer features (e.g., brightness/contrast adjustment and zooming), display of radiological reports, and annotation tools (above the X-ray image). Annotators can indicate their confidence in each label (blue bars on the right) and preview annotations by other team members (blue and red rectangles).
Fig. 3
Various stakeholders and their roles in managing large projects for scalable medical image datasets. (a) The platform described in this study facilitates the workflow for all parties, allowing each to focus on a single part of the process: project managers on defining the project and managing access levels, data owners on image upload, and annotators on labeling. (b) Project managers can coordinate projects by specifying labels in accordance with planned AI tasks, controlling visibility for all users, and granting and revoking permissions for annotators. (c) Data owners can upload images, with additional options for choosing the desired data storage system and file naming convention. (d) Project managers can also export project-related data, including annotations by all team members and the time each user spent on labeling.
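
Purely as an illustration of the access model this caption describes, a minimal role-to-permission mapping might look as follows; the role and action names are assumptions drawn from the caption, not from MarkIt's code.

from enum import Enum, auto

class Role(Enum):
    PROJECT_MANAGER = auto()
    DATA_OWNER = auto()
    ANNOTATOR = auto()

# Hypothetical mapping of the roles in Fig. 3 to the actions they perform.
PERMISSIONS = {
    Role.PROJECT_MANAGER: {"define_labels", "grant_access",
                           "revoke_access", "export_data"},
    Role.DATA_OWNER: {"upload_images", "choose_storage"},
    Role.ANNOTATOR: {"annotate"},
}

def can(role: Role, action: str) -> bool:
    """Return True if the given role is allowed to perform the action."""
    return action in PERMISSIONS.get(role, set())

assert can(Role.PROJECT_MANAGER, "export_data")
assert not can(Role.ANNOTATOR, "upload_images")
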
Fig. 4
Time distribution by label and annotator. Annotator C spent the least time of the three annotators on all labels except pneumothorax. Pneumothorax labeling took the longest overall, likely because it requires ancillary tools such as zooming to inspect the pleural line, whereas cardiomegaly cases took the shortest time.
Fig. 5
Annotation evaluation sheet.
Algorithm 1
An algorithm for calculating reward factors.
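
Algorithm 1 itself is not reproduced in this excerpt. As a hypothetical sketch of how a reward factor could combine the tracked activity (time spent, per the abstract) with annotation quality (agreement with the group), assuming factors are normalized to split a cryptocurrency payout:

def reward_factors(time_spent: dict, agreement: dict) -> dict:
    """Hypothetical reward-factor calculation (not the paper's Algorithm 1):
    weight each annotator's share of total labeling time by their agreement
    with the group consensus, then normalize so the factors sum to 1."""
    raw = {a: time_spent[a] * agreement[a] for a in time_spent}
    total = sum(raw.values())
    return {a: v / total for a, v in raw.items()}

# Hypothetical inputs: hours spent and consensus-agreement scores in [0, 1].
factors = reward_factors(
    time_spent={"annotator_a": 12.0, "annotator_b": 10.5, "annotator_c": 8.0},
    agreement={"annotator_a": 0.92, "annotator_b": 0.88, "annotator_c": 0.95},
)
print(factors)  # each annotator's share of the total reward
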
