BMC Bioinformatics. 2022 May 2;23(Suppl 3):158. doi: 10.1186/s12859-022-04681-3.

Exploration of chemical space with partial labeled noisy student self-training and self-supervised graph embedding


Yang Liu et al. BMC Bioinformatics. 2022.

Abstract

Background: Drug discovery is time-consuming and costly. Machine learning, especially deep learning, shows great potential in quantitative structure-activity relationship (QSAR) modeling to accelerate the drug discovery process and reduce its cost. A major challenge in developing robust and generalizable deep learning models for QSAR is the lack of large datasets with high-quality, balanced labels. To address this challenge, we developed a self-training method, Partially LAbeled Noisy Student (PLANS), and a novel self-supervised graph embedding, the Graph-Isomorphism-Network Fingerprint (GINFP), which represents chemical compounds with substructure information learned from unlabeled data. The representations can be used for predicting chemical properties such as binding affinity and toxicity. PLANS-GINFP allows us to exploit millions of unlabeled chemical compounds, as well as labeled and partially labeled pharmacological data, to improve the generalizability of neural network models.
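
To make the GINFP idea concrete, the following is a minimal sketch, in plain PyTorch with a dense adjacency matrix, of a GIN encoder trained in a self-supervised fashion to reconstruct ECFP bits from a molecular graph. The layer sizes, fingerprint length, toy inputs, and training details are illustrative assumptions, not the authors' exact implementation.

# Minimal sketch (assumptions: plain PyTorch, dense adjacency) of a GINFP-style
# encoder: a GIN whose graph-level embedding is trained to predict ECFP bits.
import torch
import torch.nn as nn

class GINLayer(nn.Module):
    """One GIN update: h_v <- MLP((1 + eps) * h_v + sum over neighbors of h_u)."""
    def __init__(self, dim):
        super().__init__()
        self.eps = nn.Parameter(torch.zeros(1))
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, h, adj):  # h: [n_nodes, dim], adj: [n_nodes, n_nodes]
        return self.mlp((1 + self.eps) * h + adj @ h)  # sum over neighbors

class GINFPEncoder(nn.Module):
    """Maps a molecular graph to logits over fp_bits fingerprint positions."""
    def __init__(self, in_dim, hid_dim=128, fp_bits=1024, n_layers=3):
        super().__init__()
        self.embed = nn.Linear(in_dim, hid_dim)
        self.layers = nn.ModuleList([GINLayer(hid_dim) for _ in range(n_layers)])
        self.head = nn.Linear(hid_dim, fp_bits)

    def forward(self, x, adj):
        h = self.embed(x)
        for layer in self.layers:
            h = layer(h, adj)
        g = h.mean(dim=0)       # graph-level mean pooling (cf. Fig. 5)
        return self.head(g)     # sigmoid + BCE against the true ECFP bits

# Self-supervised objective on unlabeled molecules: reconstruct the ECFP.
model = GINFPEncoder(in_dim=16)
x, adj = torch.randn(5, 16), torch.ones(5, 5)       # toy 5-node graph
target_ecfp = torch.randint(0, 2, (1024,)).float()  # toy ECFP bit vector
loss = nn.BCEWithLogitsLoss()(model(x, adj), target_ecfp)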

Results: We evaluated the performance of PLANS-GINFP for predicting Cytochrome P450 (CYP450) binding activity on a CYP450 dataset and chemical toxicity on the Tox21 dataset. Extensive benchmark studies demonstrated that PLANS-GINFP improved performance by a large margin in both cases. Both PLANS-based self-training and GINFP-based self-supervised learning contributed to the improvement.

Conclusion: To better exploit chemical structures as input for machine learning algorithms, we proposed a self-supervised graph neural network-based embedding method that encodes substructure information. Furthermore, we developed a model-agnostic self-training method, PLANS, that can be applied to any deep learning architecture to improve prediction accuracy. PLANS provides a way to better utilize partially labeled and unlabeled data. Comprehensive benchmark studies demonstrated their potential for predicting drug metabolism and toxicity profiles from sparse, noisy, and imbalanced data. PLANS-GINFP could serve as a general solution to improve predictive QSAR modeling.
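
As a rough illustration of how such a self-training loop might operate on partially labeled multi-task data, the sketch below has a teacher fill in the unobserved entries of a label matrix with pseudo-labels, then retrains a noised student on the completed matrix. The small MLP, masked loss, hard pseudo-labels, and Gaussian input noise are placeholder choices, not the paper's exact procedure.

# Hypothetical sketch of PLANS-style self-training on a partially labeled,
# multi-task label matrix. Model, noise, and loss choices are assumptions.
import torch
import torch.nn as nn

def masked_train(model, X, Y, mask, epochs=200, lr=1e-2):
    """Fit with BCE computed only where mask == 1 (observed labels)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    bce = nn.BCEWithLogitsLoss(reduction="none")
    for _ in range(epochs):
        opt.zero_grad()
        loss = (bce(model(X), Y) * mask).sum() / mask.sum()
        loss.backward()
        opt.step()

def make_mlp(d_in, n_tasks):
    return nn.Sequential(nn.Linear(d_in, 64), nn.ReLU(), nn.Linear(64, n_tasks))

def plans(X, Y, mask, n_tasks, rounds=3, noise=0.1):
    """X: [n, d]; Y: [n, n_tasks] float labels in {0,1}, with any placeholder
    value in unobserved entries; mask: [n, n_tasks] float, 1 where observed."""
    teacher = make_mlp(X.shape[1], n_tasks)
    masked_train(teacher, X, Y, mask)                   # fit on observed labels
    for _ in range(rounds):
        with torch.no_grad():
            pseudo = torch.sigmoid(teacher(X)).round()  # hard pseudo-labels
        Y_full = torch.where(mask.bool(), Y, pseudo)    # keep observed, fill gaps
        student = make_mlp(X.shape[1], n_tasks)
        full = torch.ones_like(Y_full)                  # every entry now labeled
        masked_train(student, X + noise * torch.randn_like(X), Y_full, full)
        teacher = student                               # student becomes teacher
    return teacher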

Keywords: Artificial intelligence; Chemical embedding; Deep neural network; Drug discovery; Drug metabolism; Drug toxicity; Drug-target interaction; Graph neural network; Self-supervised learning; Semi-supervised learning.


Conflict of interest statement

The authors declare no competing interests.

Figures

Fig. 1 GINFP training loss and ECFP reconstructions. Every bit of the ECFP and of the predicted ECFP reconstructed from the GINFP are shown as bars below and above the x-axis, respectively. The values of the predicted ECFP are shown after sigmoid activation
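
For context, ECFP reconstruction targets of the kind compared in Fig. 1 could be computed with RDKit roughly as follows; the radius of 2 and the 1024-bit length are placeholder parameters, not necessarily those used in the paper.

# Sketch: compute an ECFP bit vector as a reconstruction target for GINFP.
# Radius and bit length are illustrative assumptions.
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

mol = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")   # aspirin, as an example
fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=1024)
target = np.zeros(1024)
DataStructs.ConvertToNumpyArray(fp, target)         # 0/1 targets for BCE
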
Fig. 2 Sample distribution before and after data balancing. Blue bars represent the original samples; orange bars represent the samples added from the ChEMBL24 dataset
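
A balancing step of the kind shown in Fig. 2 might be sketched as follows: minority classes are topped up with pseudo-labeled compounds drawn from an unlabeled pool (ChEMBL24 in the paper). The target counts and sampling scheme below are illustrative assumptions.

# Hypothetical class balancing: fill minority classes with pseudo-labeled
# compounds from an unlabeled pool. Matching the largest class is an assumption.
import numpy as np

def balance(X, y, X_pool, y_pseudo, seed=0):
    rng = np.random.default_rng(seed)
    target = np.bincount(y).max()                 # match the largest class
    X_out, y_out = [X], [y]
    for c in np.unique(y):
        need = target - int(np.sum(y == c))
        cand = np.flatnonzero(y_pseudo == c)      # pool items pseudo-labeled c
        if need > 0 and cand.size > 0:
            take = rng.choice(cand, size=min(need, cand.size), replace=False)
            X_out.append(X_pool[take])
            y_out.append(y_pseudo[take])
    return np.concatenate(X_out), np.concatenate(y_out)
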
Fig. 3 Training results with and without data balancing. Blue bars represent samples that were correctly predicted; orange bars represent samples that the model failed to recall; red bars represent samples that were incorrectly classified into the class by the model. The subpanels zoom in on the classes, excluding the all-negative class
Fig. 4 Overview of the workflow
Fig. 5 GIN model architecture and GINFP. Sum is used for node-level pooling and mean is used for graph-level pooling
Fig. 6 Architectures of the MLP models
Fig. 7 Statistics of the chemical molecule graphs for the datasets used in our experiments. The upper and lower parts of the first two panels for the ChEMBL dataset use different y-axis scales because large numbers of nodes/edges are concentrated in a few bins


