2024 Jun 25;4(1):vbae093.
doi: 10.1093/bioadv/vbae093. eCollection 2024.

Optimal linear ensemble of binary classifiers

Mehmet Eren Ahsen et al. Bioinform Adv. 2024.

Abstract

Motivation: The integration of vast, complex biological data with computational models offers profound insights and predictive accuracy. Yet, such models face challenges: poor generalization and limited labeled data.

Results: To overcome these difficulties in binary classification tasks, we developed the Method for Optimal Classification by Aggregation (MOCA) algorithm, which addresses the problem of generalization by virtue of being an ensemble learning method and can be used in problems with limited or no labeled data. We developed both an unsupervised (uMOCA) and a supervised (sMOCA) variant of MOCA. For uMOCA, we show how to infer, without labels, MOCA weights that are optimal under the assumption of class-conditionally independent classifier predictions. When it is possible to use labels, sMOCA uses empirically computed MOCA weights. We demonstrate the performance of uMOCA and sMOCA using simulated data as well as actual data previously used in Dialogue on Reverse Engineering Assessment and Methods (DREAM) challenges. We also propose an application of sMOCA for transfer learning, where we use pre-trained computational models from a domain where labeled data are abundant and apply them to a different domain with less abundant labeled data.
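The weighted aggregation at the heart of MOCA can be sketched generically: each base classifier's scores are converted to sample ranks, and the ranks are combined by a weighted sum. The function name, toy scores, and weights below are illustrative placeholders, not the paper's implementation (see the GitHub repository for that):

```python
import numpy as np

def linear_rank_ensemble(scores, weights):
    """Combine base-classifier scores by a weighted sum of sample ranks.

    scores: (M, N) array of raw scores from M classifiers on N samples.
    weights: (M,) array of per-classifier weights.
    Returns an (N,) ensemble score; higher means more likely positive.
    """
    # Convert each classifier's scores to within-classifier ranks (1 = lowest).
    ranks = scores.argsort(axis=1).argsort(axis=1) + 1
    return weights @ ranks

# Three toy classifiers scoring five samples.
scores = np.array([
    [0.1, 0.9, 0.4, 0.8, 0.2],
    [0.3, 0.7, 0.5, 0.9, 0.1],
    [0.6, 0.2, 0.8, 0.7, 0.4],
])
weights = np.array([0.5, 0.4, 0.1])  # e.g. inferred (uMOCA) or estimated from labels (sMOCA)
ensemble = linear_rank_ensemble(scores, weights)
```

Because the aggregation acts on ranks rather than raw scores, base classifiers with different score scales can be combined without calibration.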

Availability and implementation: GitHub repository, https://github.com/robert-vogel/moca.


Conflict of interest statement

No competing interest is declared.

Figures

Figure 1.
The MOCA strategy. The MOCA strategy is an optimal aggregation of rank-ordered predictions by pre-trained binary classifiers on new, never-before-seen data. It has two versions, applicable in either the absence or presence of labeled data. (A) When labels are not present, the uMOCA algorithm infers the optimal weights without using any labeled examples. (B) When labels are available, the sMOCA algorithm estimates the optimal weights and greedily selects the optimal combination of base classifiers. sMOCA is noteworthy because its training data requirements can be much smaller than those of the pre-trained models it aggregates.
Figure 2.
The signal-to-noise score. Simulated rank predictions of 500 samples in which 200 samples (prevalence ρ = 0.4) are from the positive class (y = 1). The simulation assumes unit-variance Gaussian class-conditioned score distributions, with the difference between the class means chosen such that AUC = Φ((s̄1 − s̄0)/√2), where Φ is the standard normal cumulative distribution function and s̄1 and s̄0 are the mean scores for classes 1 and 0, respectively. Estimates of the probability of sample rank given the class label, P(R = r | Y = y), were computed by averaging the true class labels at a given rank over 1000 replicate simulation experiments. (A) The AUC is related to the signal-to-noise score by a sigmoidal function. (B–E) Plots of the conditional distribution for methods with an AUC of (B) 0.9, (C) 0.6, (D) 0.5, and (E) 0.2.
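The relation AUC = Φ((s̄1 − s̄0)/√2) for unit-variance Gaussian class-conditioned scores can be checked with a small simulation. The sample size and mean shift below are arbitrary choices for illustration, not the figure's exact settings:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def empirical_auc(scores, labels):
    """AUC as the probability that a positive sample outscores a negative one."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    return (pos[:, None] > neg[None, :]).mean()

# Unit-variance Gaussian class-conditioned scores, as in the figure's setup.
n, prevalence, delta = 5000, 0.4, 1.5   # delta = mean(class 1) - mean(class 0)
labels = (rng.random(n) < prevalence).astype(int)
scores = rng.normal(loc=delta * labels, scale=1.0)

auc_theory = norm.cdf(delta / np.sqrt(2))   # Phi((s1_bar - s0_bar) / sqrt(2))
auc_sim = empirical_auc(scores, labels)
```

The empirical Mann–Whitney AUC converges to the Φ((s̄1 − s̄0)/√2) value as the number of samples grows.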
Figure 3.
The unsupervised MOCA algorithm. MOCA was applied to (A, E, I) simulation data where base classifier predictions are conditionally independent, (B, F, J) predictions by teams participating in the DREAM2 BCL6 Transcription Factor Target Prediction challenge, (C, G, K) simulation data where base classifier predictions are conditionally dependent, and (D, H, L) predictions by teams participating in the DREAM 9.5 Prostate Cancer Prediction Challenge. For each dataset, we demonstrate MOCA's ability to infer the MOCA weights w_i, i = 1, 2, …, M, measure the AUC relative to the wisdom-of-crowds ensemble (WOC) and the best individual base classifier (Best_BC), and measure the empirical conditional correlation matrix C. Error bars represent the SEM computed from 5-fold cross-validation.
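The conditional correlation matrix C in the figure measures how dependent the base classifiers are within each class; under conditional independence it is close to the identity. A minimal sketch of that diagnostic, with synthetic conditionally independent scores standing in for real base classifiers (the dimensions and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

def conditional_correlation(scores, labels, y):
    """Correlation matrix of base-classifier scores restricted to class y."""
    return np.corrcoef(scores[:, labels == y])

# Conditionally independent base classifiers: given the label, each
# classifier's score is an independent Gaussian; scores co-vary only
# through the shared label shift.
m, n = 3, 4000
labels = rng.integers(0, 2, size=n)
scores = rng.normal(size=(m, n)) + labels

C1 = conditional_correlation(scores, labels, y=1)  # near-identity expected
```

Large off-diagonal entries in C would signal conditionally dependent classifiers, the regime shown in panels (C, G, K).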
Figure 4.
Transfer learning with sMOCA was applied to automated melanoma classification using 2750 images from the ISIC archive and five deep learning models from TensorFlow Hub: (i) inception_v3, (ii) mobile_net_v2_035_224, (iii) resnet_v2, (iv) pnasnet_large, and (v) nasnet_mobile. (A) Each deep learning model was pre-trained on the ImageNet 2012 (ILSVRC-2012-CLS) dataset. To apply them to melanoma prediction, we resized images to match the input layer of the respective network and then used the output-layer values for each image as a feature vector for binary classification by either L1-regularized logistic regression or Gaussian naive Bayes. We then assessed the performance of each deep learning model paired with a binary classifier, for a total of 10 independent methods, by 10 independent rounds of 5-fold cross-validation. In each fold, we split the training data into two groups: the first for training the classification layer and the second for training sMOCA. (B) The bar chart shows the average performance as measured by AUC, BA, and F1 score ± SEM for sMOCA, WOC, and the independent methods. sMOCA outperformed all other methods with respect to each performance measure (P < .001).
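The split-the-training-fold protocol can be sketched in miniature. Synthetic scores stand in for the deep-feature classifiers, and each method's empirical AUC on the held-out split serves as an illustrative stand-in weight; the paper's actual sMOCA weights are computed differently, so treat this only as a schematic of the two-split idea:

```python
import numpy as np

rng = np.random.default_rng(2)

def rank_transform(scores):
    # Within-classifier sample ranks (1 = lowest score).
    return scores.argsort(axis=1).argsort(axis=1) + 1

def empirical_auc(scores, labels):
    pos, neg = scores[labels == 1], scores[labels == 0]
    return (pos[:, None] > neg[None, :]).mean()

# Toy stand-in: five "base methods" of decreasing quality score 600 samples.
m, n = 5, 600
labels = rng.integers(0, 2, size=n)
quality = np.array([1.5, 1.0, 0.8, 0.5, 0.2])   # per-method class separation
scores = rng.normal(size=(m, n)) + quality[:, None] * labels

# Split the training data: first half plays the role of classifier training,
# second half estimates per-method weights (here: AUC above chance).
half = n // 2
weights = np.array([empirical_auc(scores[i, :half], labels[:half]) - 0.5
                    for i in range(m)])

# Aggregate held-out predictions with the estimated weights.
ensemble = weights @ rank_transform(scores[:, half:])
auc_ens = empirical_auc(ensemble, labels[half:])
best_base = max(empirical_auc(scores[i, half:], labels[half:]) for i in range(m))
```

With conditionally independent base methods, the weighted rank ensemble typically matches or exceeds the best individual method on the held-out split.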

References

    1. Abadi M, Barham P, Chen J. et al. Tensorflow: a system for large-scale machine learning. In: 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), Savannah, GA, USA, 2016, 265–83.
    1. Agarwal S, Graepel T, Herbrich R. et al. Generalization bounds for the area under the roc curve. J Mach Learn Res 2005;6:393–425.
    1. Ahsen ME, Vogel RM, Stolovitzky GA.. Unsupervised evaluation and weighted aggregation of ranked classification predictions. J Mach Learn Res 2019;20:1–40.
    1. Anders S, Huber W.. Differential Expression of RNA-seq Data at the Gene Level – The DESeq Package. Heidelberg, Germany: European Molecular Biology Laboratory (EMBL; ), 2012.
    1. Bansal M, Yang J, Karan C. et al. A community computational challenge to predict the activity of pairs of compounds. Nat Biotechnol 2014;32:1213–22. - PMC - PubMed

LinkOut - more resources