Ensemble feature selection with data-driven thresholding for Alzheimer's disease biomarker discovery

Annette Spooner et al. BMC Bioinformatics. 2023 Jan 9;24(1):9. doi: 10.1186/s12859-022-05132-9.

Abstract

Background: Feature selection is often used to identify the important features in a dataset but can produce unstable results when applied to high-dimensional data. The stability of feature selection can be improved with the use of feature selection ensembles, which aggregate the results of multiple base feature selectors. However, a threshold must be applied to the final aggregated feature set to separate the relevant features from the redundant ones. A fixed threshold, which is typically used, offers no guarantee that the final set of selected features contains only relevant features. This work examines a selection of data-driven thresholds that automatically identify the relevant features in an ensemble feature selector, and evaluates their predictive accuracy and stability. Ensemble feature selection with data-driven thresholding is applied to two real-world studies of Alzheimer's disease, a progressive neurodegenerative disease with no known cure that begins at least two to three decades before overt symptoms appear, giving researchers an opportunity to identify early biomarkers that could flag patients at risk of developing the disease.

Results: The ensemble feature selectors, combined with data-driven thresholds, produced more stable results, on the whole, than the equivalent individual feature selectors, improving stability by up to 34%. The most successful data-driven thresholds were the robust rank aggregation (RRA) threshold and the threshold algorithm from the field of information retrieval. The features identified by applying these methods to datasets from Alzheimer's disease studies reflect current findings in the Alzheimer's disease literature.
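The RRA threshold named above scores each feature by how much better its ranks across base selectors are than chance. A minimal sketch, assuming the Kolde et al. formulation of RRA (the paper's exact variant may differ); the rank matrix, the consistently top-ranked feature and the p-value cutoff are illustrative toy data:

```python
# Illustrative sketch of robust rank aggregation (RRA), assuming the
# Kolde et al. formulation. Each feature's normalised ranks across m base
# selectors are compared with the order statistics of uniformly random
# ranks; the minimum beta CDF value scores the feature, and features
# scoring below a p-value cutoff are kept.
import numpy as np
from scipy.stats import beta

def rra_scores(rank_matrix):
    """rank_matrix: (n_features, m) ranks from m selectors, 1 = best."""
    n, m = rank_matrix.shape
    r = np.sort(rank_matrix / n, axis=1)       # normalised ranks, sorted
    j = np.arange(1, m + 1)
    # P(j-th order statistic of m uniforms <= r_(j)) for every feature
    p = beta.cdf(r, j, m - j + 1)
    return p.min(axis=1)                       # smaller = more relevant

# toy data: 50 features ranked by 8 selectors; feature 3 is consistently top
rng = np.random.default_rng(0)
n, m = 50, 8
ranks = rng.permuted(np.tile(np.arange(1, n + 1.0), (m, 1)), axis=1).T
ranks[3] = [1, 1, 2, 1, 3, 2, 1, 2]
scores = rra_scores(ranks)
keep = np.where(scores < 0.05 / n)[0]          # Bonferroni-style cutoff
```

Feature 3 receives a vanishingly small score and survives the cutoff, while randomly ranked features generally do not; varying the cutoff corresponds to the RRA05–RRA25 settings tested in the figures below.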

Conclusions: Data-driven thresholds applied to ensemble feature selectors provide more stable, and therefore more reproducible, selections of features than individual feature selectors, without loss of performance. The use of a data-driven threshold eliminates the need to choose a fixed threshold a priori and can select a more meaningful set of features. A reliable and compact set of features can produce more interpretable models by identifying the factors that are important in understanding a disease.

Keywords: Alzheimer's disease; Data-driven thresholding; Ensemble feature selection; Stability.


Conflict of interest statement

The authors declare that they have no competing interests.

Figures

Fig. 1
A homogeneous feature selection ensemble. Sample 1, Sample 2 … Sample n are randomly sampled subsets of the training data. The same feature selector is applied separately to each sample, generating n sets of selected features. An aggregator combines these feature sets into a single set, a threshold is applied, and the resulting feature set is used as input to a survival model to assess its accuracy
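The pipeline in Fig. 1 can be sketched in a few lines. This is a toy illustration, not the paper's implementation: the correlation-based scorer and the fixed top-k cut stand in for the real base selectors and the data-driven thresholds, and mean-rank aggregation is just one simple aggregator:

```python
# Minimal sketch of a homogeneous feature selection ensemble (Fig. 1).
# The scorer, the mean-rank aggregator and the fixed top-k threshold are
# illustrative stand-ins for the paper's base selectors, aggregators and
# data-driven thresholds.
import numpy as np

def ensemble_select(X, y, score_features, n_samples=10, top_k=5, seed=0):
    """score_features(X, y) -> per-feature relevance, higher = better."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    rank_sum = np.zeros(p)
    for _ in range(n_samples):
        idx = rng.choice(n, size=n, replace=True)     # bootstrap sample
        scores = score_features(X[idx], y[idx])
        rank_sum += np.argsort(np.argsort(-scores))   # rank 0 = best
    return np.argsort(rank_sum)[:top_k]               # mean-rank aggregation

def abs_corr(X, y):
    """Toy scorer: absolute Pearson correlation with the target."""
    Xc, yc = X - X.mean(axis=0), y - y.mean()
    return np.abs(Xc.T @ yc) / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc))

# toy data: only features 2 and 7 drive the target
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 20))
y = 3 * X[:, 2] - 2 * X[:, 7] + rng.normal(size=200)
selected = ensemble_select(X, y, abs_corr, top_k=2)
```

Running the same selector on many bootstrap samples and aggregating is what stabilises the final feature set relative to a single run on the full data.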
Fig. 2
Example plot of kernel density estimate for one-dimensional clustering. The green dots show the local maxima, which are the cluster centres. The red dots show the local minima, which are the cluster boundaries. The maximum of the local maxima is the cluster centre for the irrelevant features
Fig. 3
Experimental results from the MAS dataset. Each plot shows performance vs stability of one feature selector. The different shapes represent different aggregators, with a star shape representing the individual form, where the model is run only once and there is no aggregation of results. The different colours represent the different thresholds applied to the models. The abbreviations RRA05, RRA10, RRA15, RRA20 and RRA25 refer to the RRA method tested with p-values of 0.05, 0.1, 0.15, 0.2 and 0.25 respectively
Fig. 4
Experimental results from the ADNI dataset. Each plot shows performance vs stability of one feature selector. The different shapes represent different aggregators, with a star shape representing the individual form, where the model is run only once and there is no aggregation of results. The different colours represent the different thresholds applied to the models. The abbreviations RRA05, RRA10, RRA15, RRA20 and RRA25 refer to the RRA method tested with p-values of 0.05, 0.1, 0.15, 0.2 and 0.25 respectively
Fig. 5
Average Euclidean distance from the origin for each threshold for the MAS and ADNI datasets
