Screening mammograms by community radiologists: variability in false-positive rates

Joann G Elmore et al. J Natl Cancer Inst. 2002 Sep 18;94(18):1373-80. doi: 10.1093/jnci/94.18.1373.

Abstract

Background: Previous studies have shown that the agreement among radiologists interpreting a test set of mammograms is relatively low. However, data available from real-world settings are sparse. We studied mammographic examination interpretations by radiologists practicing in a community setting and evaluated whether the variability in false-positive rates could be explained by patient, radiologist, and/or testing characteristics.

Methods: We used medical records on randomly selected women aged 40-69 years who had had at least one screening mammographic examination in a community setting between January 1, 1985, and June 30, 1993. Twenty-four radiologists interpreted 8734 screening mammograms from 2169 women. Hierarchical logistic regression models were used to examine the impact of patient, radiologist, and testing characteristics. All statistical tests were two-sided.
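
The Methods describe hierarchical logistic regression with adjustment for patient, radiologist, and testing characteristics. As a rough illustration only, a mixed-effects logistic regression of that general form could be sketched as below; the file name, column names, and the choice of statsmodels' BinomialBayesMixedGLM are assumptions for the sketch, not the authors' actual data or implementation.

```python
# Illustrative sketch of a hierarchical (mixed-effects) logistic regression,
# in the spirit of the Methods; NOT the authors' model. All column names and
# the input file are hypothetical.
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

df = pd.read_csv("mammograms.csv")  # hypothetical: one row per screening mammogram

model = BinomialBayesMixedGLM.from_formula(
    # Fixed effects: patient, radiologist, and testing characteristics (assumed columns)
    "false_positive ~ C(age_group) + C(breast_density) + years_since_training + film_count",
    # Variance components: random intercepts for radiologist and for woman,
    # accounting for multiple mammograms read by the same radiologist / same woman
    vc_formulas={
        "radiologist": "0 + C(radiologist_id)",
        "patient": "0 + C(patient_id)",
    },
    data=df,
)
result = model.fit_vb()  # approximate Bayesian fit (variational Bayes)
print(result.summary())
```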

Results: Radiologists varied widely in mammographic examination interpretations, with a mass noted in 0%-7.9%, calcification in 0%-21.3%, and fibrocystic changes in 1.6%-27.8% of mammograms read. False-positive rates ranged from 2.6% to 15.9%. Younger and more recently trained radiologists had higher false-positive rates. Adjustment for patient, radiologist, and testing characteristics narrowed the range of false-positive rates to 3.5%-7.9%. If a woman went to two randomly selected radiologists, her adjusted odds of a false-positive reading would be 1.5 times greater with the higher-risk radiologist than with the lower-risk radiologist (95% highest posterior density interval [similar to a confidence interval] = 1.17 to 2.08).

Conclusion: Community radiologists varied widely in their false-positive rates in screening mammograms; this variability range was reduced by half, but not eliminated, after statistical adjustment for patient, radiologist, and testing characteristics. These characteristics need to be considered when evaluating false-positive rates in community mammographic examination screening.


Figures

Fig. 1. Observed false-positive rates for 24 radiologists reading 8734 mammograms. Each column represents a single radiologist. The number of mammograms interpreted by a given radiologist is given at the base of the column, and the false-positive rate is plotted on the y-axis, with the exact false-positive rate for each radiologist given at the top of the respective column.
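
The observed rates plotted in Fig. 1 are simple per-radiologist proportions. A minimal sketch of that tabulation, assuming a flat table with one row per mammogram and the same hypothetical column names as above, might look like:

```python
# Hypothetical sketch of the per-radiologist summary shown in Fig. 1;
# column names and file are assumptions, not the study's actual data.
import pandas as pd

df = pd.read_csv("mammograms.csv")  # one row per screening mammogram (hypothetical)
per_radiologist = (
    df.groupby("radiologist_id")
      .agg(n_mammograms=("false_positive", "size"),
           false_positive_rate=("false_positive", "mean"))
)
print(per_radiologist.sort_values("false_positive_rate"))
```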
Fig. 2. Results of statistical modeling for observed (unadjusted, line A) and adjusted (lines B, C, and D) false-positive rates for 24 radiologists. The right side shows the odds ratios (ORs) with 95% highest posterior density intervals (HPDs), similar to classical 95% confidence intervals. Line A shows observed, unadjusted false-positive rates and summary OR values. Line B shows false-positive rates and summary ORs after adjusting for correlation between multiple mammograms within the same woman and by the same radiologist. Line C shows ORs after the adjustments in line B plus adjustment for patient and testing characteristics. Line D shows all adjustments in line C plus adjustment for radiologists' characteristics.

Comment in

  • Much ado about mammography variability.
    Kessler LG, Andersen MR, Etzioni R. J Natl Cancer Inst. 2002 Sep 18;94(18):1346-7. doi: 10.1093/jnci/94.18.1346. PMID: 12237274. No abstract available.
