Radiology. 2013 May;267(2):359-67. doi: 10.1148/radiol.12121216. Epub 2013 Jan 7.

Diagnostic mammography: identifying minimally acceptable interpretive performance criteria


Patricia A Carney et al. Radiology. 2013 May.

Abstract

Purpose: To develop criteria to identify thresholds for the minimally acceptable performance of physicians interpreting diagnostic mammography studies.

Materials and methods: In an institutional review board-approved, HIPAA-compliant study, an Angoff approach was used to set criteria for identifying minimally acceptable interpretive performance, both for workup after abnormal screening examinations and for workup of a breast lump. Normative data from the Breast Cancer Surveillance Consortium (BCSC) were used to help the expert radiologists identify the impact of cut points. Simulations, also using BCSC data, were used to estimate the expected clinical impact of the recommended performance thresholds.

Results: Final cut points for workup of abnormal screening examinations were as follows: sensitivity, less than 80%; specificity, less than 80% or greater than 95%; abnormal interpretation rate, less than 8% or greater than 25%; positive predictive value (PPV) of biopsy recommendation (PPV2), less than 15% or greater than 40%; PPV of biopsy performed (PPV3), less than 20% or greater than 45%; and cancer diagnosis rate, less than 20 per 1000 interpretations. Final cut points for workup of a breast lump were as follows: sensitivity, less than 85%; specificity, less than 83% or greater than 95%; abnormal interpretation rate, less than 10% or greater than 25%; PPV2, less than 25% or greater than 50%; PPV3, less than 30% or greater than 55%; and cancer diagnosis rate, less than 40 per 1000 interpretations. If underperforming physicians moved into the acceptable range after remedial training, the expected result would be (a) diagnosis of an additional 86 cancers per 100,000 women undergoing workup after screening examinations, with a reduction in the number of false-positive examinations by 1067 per 100,000 women undergoing this workup, and (b) diagnosis of an additional 335 cancers per 100,000 women undergoing workup of a breast lump, with a reduction in the number of false-positive examinations by 634 per 100,000 women undergoing this workup.
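The cut points above can be read as acceptable ranges for each performance measure. As an illustrative sketch (not part of the study), the following Python snippet encodes the reported thresholds and flags any metrics that fall outside the acceptable range; the function name and data layout are assumptions, and the numeric values are taken directly from the Results.

```python
# Acceptable ranges as (low, high) bounds derived from the reported cut
# points; None means no bound on that side. Rates are fractions except
# cancer_rate_per_1000, which is per 1000 interpretations.
CUT_POINTS = {
    "abnormal_screening_workup": {
        "sensitivity": (0.80, None),
        "specificity": (0.80, 0.95),
        "abnormal_interpretation_rate": (0.08, 0.25),
        "ppv2": (0.15, 0.40),   # PPV of biopsy recommendation
        "ppv3": (0.20, 0.45),   # PPV of biopsy performed
        "cancer_rate_per_1000": (20, None),
    },
    "breast_lump_workup": {
        "sensitivity": (0.85, None),
        "specificity": (0.83, 0.95),
        "abnormal_interpretation_rate": (0.10, 0.25),
        "ppv2": (0.25, 0.50),
        "ppv3": (0.30, 0.55),
        "cancer_rate_per_1000": (40, None),
    },
}

def flag_metrics(workup_type, metrics):
    """Return the names of metrics that fall outside the acceptable range."""
    flagged = []
    for name, value in metrics.items():
        low, high = CUT_POINTS[workup_type][name]
        if (low is not None and value < low) or (high is not None and value > high):
            flagged.append(name)
    return flagged
```

For example, a physician with 75% sensitivity in workup after abnormal screening would be flagged on sensitivity alone, consistent with the less-than-80% cut point above.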

Conclusion: Interpreting physicians who fall outside one or more of the identified cut points should be reviewed in the context of an overall assessment of all their performance measures and their specific practice setting, to determine whether remedial training is indicated.


