JAMIA Open. 2022 Nov 11;5(4):ooac094.
doi: 10.1093/jamiaopen/ooac094. eCollection 2022 Dec.

ACR's Connect and AI-LAB technical framework


Laura Brink et al. JAMIA Open.

Abstract

Objective: To develop a free, vendor-neutral software suite, the American College of Radiology (ACR) Connect, which serves as a platform for democratizing artificial intelligence (AI) for all individuals and institutions.

Materials and methods: Among its core capabilities, ACR Connect provides educational resources; tools for dataset annotation; model building and evaluation; and an interface for collaboration and federated learning across institutions without the need to move data off hospital premises.

Results: The AI-LAB application within ACR Connect allows users to investigate AI models using their own local data while maintaining data security. The software enables non-technical users to participate in the evaluation and training of AI models as part of a larger, collaborative network.

Discussion: Advancements in AI have transformed automated quantitative analysis for medical imaging. Despite significant research progress, AI remains underutilized in clinical workflows. The success of AI model development depends critically on the synergy between physicians who can drive clinical direction, data scientists who can design effective algorithms, and the availability of high-quality datasets. ACR Connect and AI-LAB provide a way to perform external validation as well as collaborative, distributed training.
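The collaborative, distributed training mentioned here typically follows a federated-averaging pattern: each site trains on its own data and only model parameters, weighted by local sample counts, are combined centrally. A minimal sketch of that aggregation step, assuming NumPy arrays for layer weights (this is a generic illustration, not ACR's actual implementation; the function name is hypothetical):

```python
import numpy as np

def federated_average(site_weights, site_counts):
    """Combine per-site model parameters into one global model (FedAvg-style).

    site_weights: one list of np.ndarray layer weights per participating site.
    site_counts:  number of local training samples at each site, used so that
                  sites with more data contribute proportionally more.
    """
    total = sum(site_counts)
    n_layers = len(site_weights[0])
    averaged = []
    for layer in range(n_layers):
        acc = np.zeros_like(site_weights[0][layer], dtype=float)
        for weights, count in zip(site_weights, site_counts):
            # Weight each site's contribution by its share of the total data.
            acc += weights[layer] * (count / total)
        averaged.append(acc)
    return averaged
```

Because only parameters cross the site boundary, the patient data itself never leaves the hospital premises, which is the property the abstract emphasizes.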

Conclusion: In order to create a collaborative AI ecosystem across clinical and technical domains, the ACR developed a platform that enables non-technical users to participate in education and model development.

Keywords: artificial intelligence; data science; machine learning; radiology; software design.


Figures

Figure 1.
The ACR’s AI research data ecosystem. Connect resides on-premises, providing connectivity to local clinical systems and tools for advanced local processing. A containerized “app model” allows for workflow apps that perform tasks such as preparation of data for upload (eg, de-identification) and advanced local processing (eg, AI-LAB). The local Connect node communicates in a secure manner with ACR cloud services to facilitate the exchange of data, metadata, AI models, and the results of AI experiments as appropriate. Central systems, including those powered by ACR’s Data Analysis and Research Toolkit (DART), interact with local nodes and process information to enable registries, clinical trials, and distributed AI activities such as validation and federated learning. Among the many post-processing activities is the preparation of data for safe publication in public archives.
Figure 2.
Technical framework of ACR Connect on-premises. Connect communicates with on-premises health IT systems such as the Picture Archiving and Communication System (PACS) and Electronic Health Record (EHR) via standard protocols such as Digital Imaging and Communications in Medicine (DICOM) and HL7 Fast Healthcare Interoperability Resources (FHIR). The framework provides utilities to manage users and data. Applications running on the Connect platform, such as the Dose Index Registry (DIR), the ACR National Clinical Imaging Research Registry (ANCIRR), and AI-LAB, make use of these utilities to allow users and facilities to participate in AI testing and training, clinical registries, and federated learning.
Figure 3.
The workflow within AI-LAB for a user to train, test, and run AI models using their local data.
Figure 4.
An example of a completed training of a breast density model on the “Create” page. The user can examine the accuracy and loss on the training and validation datasets at each epoch. In this particular training session, the model is most likely overfitting, or memorizing, the training data. The user can see evidence of the overfitting by noting that the training accuracy approaches 1 while the validation accuracy plateaus around 0.6. A similar trend is seen in the loss overview, where the training loss approaches 0 while the validation loss actually increases. The performance testing shows the results of the new model on the held-out test dataset. Studies from the Digital Mammographic Imaging Screening Trial are used for this example.
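The overfitting signature described in this caption, training accuracy climbing toward 1 while validation accuracy stalls and validation loss rises off its minimum, can be checked programmatically from per-epoch curves. A minimal heuristic sketch (the function name, threshold, and inputs are illustrative assumptions, not part of AI-LAB):

```python
def looks_overfit(train_acc, val_acc, val_loss, gap=0.15):
    """Heuristic overfitting check from per-epoch training curves.

    Flags the pattern shown in the figure: final training accuracy far
    above final validation accuracy, while validation loss has risen
    above its best (minimum) value seen during training.
    """
    acc_gap = (train_acc[-1] - val_acc[-1]) > gap
    loss_rising = val_loss[-1] > min(val_loss)
    return acc_gap and loss_rising
```

With curves like those in the figure (training accuracy near 1, validation accuracy near 0.6, validation loss increasing), this flags overfitting; when the two accuracy curves track each other and validation loss keeps falling, it does not.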
Figure 5.
Pneumonia Classification data element’s validation metrics on the “Evaluate” page. For binary classification elements, like pneumonia classification, users can adjust the binary threshold separating the positive and negative classes by using the Threshold slider at the top. Panel A shows the metrics for a binary threshold of 0.3; panel B shows the metrics for a binary threshold of 0.5. On the left is the interactive confusion matrix; the user can click on a cell in the confusion matrix to view those specific studies. In the middle is the receiver operating characteristic curve. On the right are the classification metrics calculated for the entire dataset. For this particular model, a binary threshold of 0.3 may be preferable to 0.5, depending on the target sensitivity/specificity. Studies from the 2018 RSNA Pneumonia Detection Challenge are used for this example.
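The effect of moving the binary threshold, as this caption describes, is that the confusion matrix and the derived sensitivity/specificity change while the model's raw scores stay fixed. A minimal sketch of that computation from predicted probabilities (a generic illustration with hypothetical names, not AI-LAB's code):

```python
def threshold_metrics(probs, labels, threshold):
    """Confusion-matrix counts and sensitivity/specificity at a binary threshold.

    probs:  predicted probability of the positive class per study.
    labels: ground-truth labels (1 = positive, 0 = negative).
    """
    tp = fp = tn = fn = 0
    for p, y in zip(probs, labels):
        pred = 1 if p >= threshold else 0  # slider position decides the class
        if pred == 1 and y == 1:
            tp += 1
        elif pred == 1 and y == 0:
            fp += 1
        elif pred == 0 and y == 0:
            tn += 1
        else:
            fn += 1
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    return {"tp": tp, "fp": fp, "tn": tn, "fn": fn,
            "sensitivity": sensitivity, "specificity": specificity}
```

Lowering the threshold (eg, 0.5 to 0.3) converts some false negatives into true positives at the cost of new false positives, which is exactly the sensitivity/specificity trade-off the slider exposes.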
Figure 6.
Pneumonia Bounding Box data element’s validation metrics on the “Evaluate” page. On the left is the scatterplot of volumes; this particular model tends to predict larger volumes than the ground truth. In the middle is the Bland–Altman plot, where the same trend of larger predicted volumes can be seen. On the right are the bounding box metrics calculated for the entire dataset. Studies from the 2018 RSNA Pneumonia Detection Challenge are used for this example.
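The Bland–Altman comparison in this figure summarizes agreement between predicted and ground-truth volumes by the mean difference (bias) and its 95% limits of agreement. A minimal sketch of that calculation (generic illustration with hypothetical names, not AI-LAB's code; a positive bias indicates the over-prediction trend the caption describes):

```python
import statistics

def bland_altman(predicted, ground_truth):
    """Bland-Altman summary for paired measurements.

    Returns the mean difference (bias) and the 95% limits of agreement
    (bias +/- 1.96 standard deviations of the differences).
    """
    diffs = [p - t for p, t in zip(predicted, ground_truth)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)  # sample standard deviation of differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

Plotting each difference against the pair's mean, with horizontal lines at the bias and the two limits, reproduces the middle panel of the figure.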

