Comparative Study
Obstet Gynecol. 2002 Aug;100(2):277-80.
doi: 10.1016/s0029-7844(02)02058-6.

Discrepancy in the interpretation of cervical histology by gynecologic pathologists

Mary F Parker et al. Obstet Gynecol. 2002 Aug.

Abstract

Objective: To determine if subspecialty review of cervical histology improves diagnostic consensus of cervical intraepithelial neoplasia (CIN).

Methods: After routine histologic assessment within the hospital pathology department, 119 colposcopic cervical biopsies were interpreted by two subspecialty-trained gynecologic pathologists (GYN I and GYN II), each blinded to the other's interpretations and to the interpretations of the hospital general pathologists (GEN). Biopsies were classified as normal (including cervicitis), low grade (LG, including CIN I and human papillomavirus changes), or high grade (HG, including CIN II/III). The interobserver agreement rates between GEN and GYN I, between GEN and GYN II, and between GYN I and GYN II were described using the kappa statistic. The proportions of biopsies assigned to each biopsy class were compared using the McNemar test.
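The two statistics named above can be sketched in code. This is a minimal illustration, not the study's analysis: the helper names (`cohen_kappa`, `per_class_kappa`, `mcnemar_exact_p`) and any rating sequences passed to them are hypothetical, and the McNemar test is computed here in its exact binomial form.

```python
from math import comb

def cohen_kappa(a, b):
    """Cohen's kappa for two equal-length rating sequences (lists)."""
    n = len(a)
    cats = set(a) | set(b)
    p_o = sum(x == y for x, y in zip(a, b)) / n  # observed agreement
    # chance agreement from each rater's marginal class frequencies
    p_e = sum((a.count(c) / n) * (b.count(c) / n) for c in cats)
    return (p_o - p_e) / (1 - p_e)

def per_class_kappa(a, b, cat):
    """Kappa for the binary question 'classified as cat vs. not'."""
    return cohen_kappa([x == cat for x in a], [x == cat for x in b])

def mcnemar_exact_p(a, b, cat):
    """Exact two-sided McNemar p-value for discordant classifications."""
    d1 = sum(x == cat and y != cat for x, y in zip(a, b))  # rater A only
    d2 = sum(x != cat and y == cat for x, y in zip(a, b))  # rater B only
    n, k = d1 + d2, min(d1, d2)
    if n == 0:
        return 1.0
    # two-sided exact binomial test on the discordant pairs, p = 0.5
    p = 2 * sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, p)
```

Because each biopsy falls into one of three classes, the abstract reports a separate kappa (and McNemar comparison) per class, which is what the binarizing helpers above reproduce.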

Results: Interobserver agreement rates between GEN and GYN I were moderate for normal (kappa = 0.53) and LG (kappa = 0.46) and excellent for HG (kappa = 0.76). There were no significant differences in the classifications between GEN and GYN I. Interobserver agreement rates between GEN and GYN II were moderate for normal (kappa = 0.50) and LG (kappa = 0.44) and excellent for HG (kappa = 0.84). Compared with GEN, GYN II was significantly more likely to classify biopsies as normal (P <.001) and less likely to classify them as LG (P <.001). Interobserver agreement rates between GYN I and GYN II were moderate for normal (kappa = 0.61) and LG (kappa = 0.41) and excellent for HG (kappa = 0.84). Compared with GYN I, GYN II was again significantly more likely to classify biopsies as normal (P <.001) and less likely to classify them as LG (P =.01).
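The qualitative labels used above ("moderate", "excellent") are consistent with Fleiss-style benchmark cutoffs for kappa. A sketch, assuming those cutoffs (the function name `kappa_label` is hypothetical, and the abstract does not state which benchmark the authors used):

```python
def kappa_label(k):
    """Map a kappa value to a Fleiss-style agreement label.

    Cutoffs (assumed, not stated in the abstract):
    kappa > 0.75 excellent; 0.40 <= kappa <= 0.75 moderate; below 0.40 poor.
    """
    if k > 0.75:
        return "excellent"
    if k >= 0.40:
        return "moderate"
    return "poor"
```

Under these cutoffs every kappa reported above lands on its stated label, e.g. 0.53 and 0.61 are "moderate" while 0.76 and 0.84 are "excellent".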

Conclusion: Interobserver agreement between two gynecologic pathologists was no better than that observed between general and gynecologic pathologists. Subspecialty review of cervical histology does not enhance diagnostic consensus of CIN.
