Diagnostics (Basel). 2026 Jan 11;16(2):232. doi: 10.3390/diagnostics16020232.

Diagnostic Performance of ChatGPT-5 for Detecting Pediatric Pneumothorax on Chest Radiographs: A Multi-Prompt Evaluation


Chih-Hao Wang et al. Diagnostics (Basel). 2026.

Abstract

Background/Objectives: Chest radiography is the first-line imaging tool for diagnosing pneumothorax in pediatric emergency settings. However, interpretation under clinical pressures such as high patient volume may lead to delayed or missed diagnoses, particularly in subtle cases. This study aimed to evaluate the diagnostic performance of ChatGPT-5, a multimodal large language model, in detecting and localizing pneumothorax on pediatric chest radiographs using multiple prompting strategies.

Methods: In this retrospective study, 380 pediatric chest radiographs (190 pneumothorax cases and 190 matched controls) from a tertiary hospital were interpreted with ChatGPT-5 using three prompting strategies: instructional, role-based, and clinical-context. Performance metrics, including accuracy, sensitivity, specificity, and conditional side accuracy, were evaluated against an expert-adjudicated reference standard.

Results: ChatGPT-5 achieved an overall accuracy of 0.77-0.79 and consistently high specificity (0.96-0.98) across all prompts, with stable reproducibility. However, sensitivity was limited (0.57-0.61) and substantially lower for small pneumothoraces (American College of Chest Physicians [ACCP]: 0.18-0.22; British Thoracic Society [BTS]: 0.41-0.46) than for large pneumothoraces (ACCP: 0.75-0.79; BTS: 0.85-0.88). Conditional side accuracy exceeded 0.96 when pneumothorax was correctly detected. No significant differences were observed among prompting strategies.

Conclusions: ChatGPT-5 showed consistent but limited diagnostic performance for pediatric pneumothorax. Although its high specificity and reproducible detection of larger pneumothoraces are favorable, the unacceptably low sensitivity for subtle pneumothoraces precludes its use for independent clinical interpretation and underscores the need for oversight by emergency clinicians.
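For readers unfamiliar with the metrics named above, the following is a minimal sketch of how they can be computed from per-case labels. It is not the authors' analysis code; the data structure, field names, and function are hypothetical, and "conditional side accuracy" is computed here as described in the abstract (correct laterality among correctly detected pneumothoraces).

```python
# Illustrative sketch only; not the study's analysis code.
# Computes accuracy, sensitivity, specificity, and conditional side accuracy
# from hypothetical per-case reference and model labels.

def evaluate(cases):
    """cases: list of dicts with keys
       'truth'      : 'pneumothorax' or 'normal' (expert-adjudicated reference)
       'truth_side' : 'left' / 'right', or None for normal radiographs
       'pred'       : model detection ('pneumothorax' or 'normal')
       'pred_side'  : model laterality call, or None
    """
    tp = sum(c['truth'] == 'pneumothorax' and c['pred'] == 'pneumothorax' for c in cases)
    tn = sum(c['truth'] == 'normal' and c['pred'] == 'normal' for c in cases)
    fp = sum(c['truth'] == 'normal' and c['pred'] == 'pneumothorax' for c in cases)
    fn = sum(c['truth'] == 'pneumothorax' and c['pred'] == 'normal' for c in cases)

    sensitivity = tp / (tp + fn)      # detections among true pneumothoraces
    specificity = tn / (tn + fp)      # correct rejections among normal radiographs
    accuracy = (tp + tn) / len(cases)

    # Conditional side accuracy: correct laterality among true-positive detections only.
    detected = [c for c in cases
                if c['truth'] == 'pneumothorax' and c['pred'] == 'pneumothorax']
    side_acc = (sum(c['pred_side'] == c['truth_side'] for c in detected) / len(detected)
                if detected else float('nan'))

    return {'accuracy': accuracy, 'sensitivity': sensitivity,
            'specificity': specificity, 'conditional_side_accuracy': side_acc}
```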

Keywords: ChatGPT; chest radiograph; large language model; pediatric emergency medicine; pneumothorax; prompt engineering.


Conflict of interest statement

The authors declare no conflicts of interest.

Figures

Figure 1
Representative example of a ChatGPT-5 question–answer interaction using Prompt A. A pediatric posteroanterior chest radiograph from a 15-year-old male patient diagnosed with pneumothorax, classified as a large pneumothorax according to expert-adjudicated reference standards based on the criteria of the American College of Chest Physicians (ACCP) and the British Thoracic Society (BTS), was uploaded to the ChatGPT interface. The model’s response exemplifies the typical output format. For the purpose of quantitative evaluation, only the presence and laterality of the pneumothorax were extracted from the model responses for analysis.
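The caption notes that only the presence and laterality of pneumothorax were extracted from the free-text model responses. As a purely hypothetical illustration (the study's actual extraction procedure is not described here), a simple keyword-based extraction could look like the sketch below.

```python
import re

def extract_presence_and_side(response_text: str):
    """Hypothetical keyword-based extraction of pneumothorax presence and laterality
    from a free-text model response; the study's real extraction method may differ."""
    text = response_text.lower()
    # Treat explicit negations such as "no pneumothorax" / "no evidence of pneumothorax" as absent.
    negated = re.search(r"\bno (evidence of )?pneumothorax\b", text) is not None
    present = ("pneumothorax" in text) and not negated

    side = None
    if present:
        if "left" in text and "right" not in text:
            side = "left"
        elif "right" in text and "left" not in text:
            side = "right"
    return present, side

# Example: extract_presence_and_side("There is a large right-sided pneumothorax.")
# returns (True, "right").
```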
Figure 2
Confusion matrices illustrating ChatGPT-5’s performance in pneumothorax classification across three prompting strategies (A–C). Each cell displays the classification proportion (%) and the corresponding case count (n). While ChatGPT-5 accurately identified most normal radiographs, it misclassified 30–50% of pneumothorax cases as normal, irrespective of laterality. Among true-positive pneumothorax detections, fewer than 2.5% were assigned to the incorrect side. The clinical-context prompt (C) demonstrated a marginal improvement in detection performance compared with the instructional (A) and role-based (B) prompts.
Figure 3
Sensitivity of ChatGPT-5 for pneumothorax detection, stratified by pneumothorax size according to the American College of Chest Physicians (ACCP) (a) and British Thoracic Society (BTS) (b) classification systems. Across all prompting strategies (A–C), sensitivity was significantly lower for small pneumothoraces (ACCP: 0.18–0.22; BTS: 0.41–0.46) than for large pneumothoraces (ACCP: 0.75–0.79; BTS: 0.85–0.88; all comparisons p < 0.01). Error bars represent 95% confidence intervals.
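The error bars in Figure 3 represent 95% confidence intervals for sensitivity. One standard way to compute such an interval for a proportion is the Wilson score interval; whether the authors used this exact method is not stated, so the sketch below is illustrative only and the counts in the example are hypothetical.

```python
from math import sqrt

def wilson_ci(successes: int, n: int, z: float = 1.96):
    """Wilson score 95% CI for a proportion such as sensitivity.
    (Whether the study used this exact interval method is not stated.)"""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# Hypothetical example: 10 detections among 50 small pneumothoraces (sensitivity 0.20)
print(wilson_ci(10, 50))  # roughly (0.11, 0.33)
```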

