J Am Med Inform Assoc. 2018 Mar 1;25(3):321-330. doi: 10.1093/jamia/ocx131.

Hierarchical attention networks for information extraction from cancer pathology reports

Shang Gao et al. J Am Med Inform Assoc.

Abstract

Objective: We explored how a deep learning (DL) approach based on hierarchical attention networks (HANs) can improve model performance for multiple information extraction tasks from unstructured cancer pathology reports compared to conventional methods that do not sufficiently capture syntactic and semantic contexts from free-text documents.

Materials and methods: Data for our analyses were obtained from 942 deidentified pathology reports collected by the National Cancer Institute Surveillance, Epidemiology, and End Results program. The HAN was implemented for 2 information extraction tasks: (1) primary site, matched to 12 International Classification of Diseases for Oncology topography codes (7 breast, 5 lung primary sites), and (2) histological grade classification, matched to G1-G4. Model performance metrics were compared to conventional machine learning (ML) approaches including naive Bayes, logistic regression, support vector machine, random forest, and extreme gradient boosting, and other DL models, including a recurrent neural network (RNN), a recurrent neural network with attention (RNN w/A), and a convolutional neural network.

Results: Our results demonstrate that for both information extraction tasks, the HAN performed significantly better than the conventional ML and DL techniques. In particular, across the 2 tasks, the mean micro and macro F-scores for the HAN with pretraining were (0.852, 0.708), compared to naive Bayes (0.518, 0.213), logistic regression (0.682, 0.453), support vector machine (0.634, 0.434), random forest (0.698, 0.508), extreme gradient boosting (0.696, 0.522), RNN (0.505, 0.301), RNN w/A (0.637, 0.471), and convolutional neural network (0.714, 0.460).
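Micro and macro F-scores, as reported above, differ in how they average across classes: micro-averaging pools true/false positive counts over all classes (so frequent classes dominate), while macro-averaging computes per-class F1 and averages them equally (so rare classes count as much as common ones). A minimal pure-Python sketch, using illustrative labels rather than the paper's data:

```python
from collections import Counter

def f1_scores(y_true, y_pred, labels):
    """Return (micro-F1, macro-F1) for single-label classification."""
    tp, fp, fn = Counter(), Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1  # predicted class gets a false positive
            fn[t] += 1  # true class gets a false negative
    # Micro: pool counts across all classes, then compute one F1.
    TP, FP, FN = sum(tp.values()), sum(fp.values()), sum(fn.values())
    micro = 2 * TP / (2 * TP + FP + FN)
    # Macro: compute per-class F1, then average with equal class weight.
    per_class = []
    for c in labels:
        denom = 2 * tp[c] + fp[c] + fn[c]
        per_class.append(2 * tp[c] / denom if denom else 0.0)
    macro = sum(per_class) / len(labels)
    return micro, macro

micro, macro = f1_scores(
    ["G1", "G1", "G2", "G3"],   # true grades (illustrative)
    ["G1", "G2", "G2", "G2"],   # predicted grades
    ["G1", "G2", "G3"],
)
# micro = 0.5; macro ≈ 0.389 — macro is pulled down by the missed rare class G3
```

Note that for single-label multiclass problems, each error produces exactly one false positive and one false negative, so micro-F1 equals accuracy; the gap between the HAN's micro (0.852) and macro (0.708) scores reflects weaker performance on infrequent classes.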

Conclusions: HAN-based DL models show promise in information abstraction tasks within unstructured clinical pathology reports.

Keywords: attention networks; classification; clinical pathology reports; information retrieval; recurrent neural nets.


Figures

Figure 1.
Architecture for our hierarchical attention network (HAN). The HAN produces line embeddings by processing the word embeddings in each line, then produces a document embedding by processing the line embeddings in the document. The final document embedding can then be used for classification or pretraining purposes.
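The two-level pooling the caption describes can be sketched in a few lines of pure Python. This is a rough illustration only: the actual HAN also passes word and line sequences through bidirectional GRUs and a learned tanh projection before attention, which this sketch omits, and the context vectors here are illustrative placeholders for learned parameters.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(vectors, context):
    """Attention pooling: weight each vector by its dot-product
    similarity to a (normally learned) context vector, then sum."""
    scores = [sum(v * c for v, c in zip(vec, context)) for vec in vectors]
    weights = softmax(scores)
    dim = len(vectors[0])
    pooled = [sum(w * vec[i] for w, vec in zip(weights, vectors))
              for i in range(dim)]
    return pooled, weights

def han_document_embedding(doc, word_ctx, line_ctx):
    """doc: list of lines, each a list of word-embedding vectors."""
    # Word level: pool each line's word embeddings into a line embedding.
    line_embs = [attend(line, word_ctx)[0] for line in doc]
    # Line level: pool line embeddings into one document embedding.
    doc_emb, line_weights = attend(line_embs, line_ctx)
    return doc_emb, line_weights
```

The per-word and per-line attention weights produced by `attend` are what heat-map visualizations of such models highlight: high-weight words and lines are the ones that contributed most to the document embedding.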
Figure 2.
HAN train and validation accuracies, with and without pretraining, during the first 10 epochs for (A) the primary site classification task and (B) the histological grade classification task.
Figure 3.
HAN annotations on sample pathology report for each classification task. The most important words in each line are highlighted in blue, with darker blue indicating higher importance. The most important lines in the report are highlighted in red, with darker red indicating higher importance. For each task, the HAN can successfully locate the specific line(s) within a document and text within the line(s) that identify the primary site (eg, lower lobe) or histological grade (eg, poorly differentiated). The RNN structure utilized by the HAN allows it to take into account word and line context to better locate the correct text segments.
Figure 4.
HAN document embeddings reduced to 2 dimensions via principal component analysis for (A) primary site train reports, (B) histological grade train reports, (C) primary site test reports, and (D) histological grade test reports.
Figure 5.
Confusion matrix for (A) HAN with pretraining on the primary site classification task and (B) HAN without pretraining on the histological grade classification task.
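A confusion matrix of the kind shown in Figure 5 tabulates, for each true class, how predictions were distributed across all classes; diagonal entries are correct classifications. A minimal sketch with illustrative grade labels (not the paper's data):

```python
def confusion_matrix(y_true, y_pred, labels):
    """Build a confusion matrix: rows = true label, columns = predicted."""
    index = {lab: i for i, lab in enumerate(labels)}
    matrix = [[0] * len(labels) for _ in labels]
    for t, p in zip(y_true, y_pred):
        matrix[index[t]][index[p]] += 1
    return matrix

m = confusion_matrix(["G1", "G2", "G2"], ["G1", "G2", "G1"], ["G1", "G2"])
# m[1][0] == 1: one G2 report was misclassified as G1
```

Reading down a column shows which true classes a predicted label absorbs, which is how such plots reveal systematic confusions between adjacent grades or nearby primary sites.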
