Entropy (Basel). 2023 Jul 14;25(7):1065. doi: 10.3390/e25071065.

A Preprocessing Manifold Learning Strategy Based on t-Distributed Stochastic Neighbor Embedding

Sha Shi et al.
Abstract

In machine learning and data analysis, dimensionality reduction and high-dimensional data visualization can be accomplished by manifold learning using the t-Distributed Stochastic Neighbor Embedding (t-SNE) algorithm. We significantly improve this manifold learning scheme by introducing a preprocessing strategy for the t-SNE algorithm. In our preprocessing, we first exploit Laplacian eigenmaps (LE) to reduce the dimensionality of the data, which aggregates each data cluster and remarkably reduces the Kullback-Leibler divergence (KLD). Moreover, the k-nearest-neighbor (KNN) algorithm is also incorporated into our preprocessing to enhance the visualization performance and to reduce the computation and space complexity. We compare the performance of our strategy with that of the standard t-SNE on the MNIST dataset. The experimental results show that our strategy is better at separating different clusters and at keeping data of the same kind much closer together. Moreover, the KLD can be reduced by about 30% at the cost of only a 1-2% increase in runtime.
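The LE-then-t-SNE pipeline described above can be sketched with off-the-shelf components. The snippet below is a minimal illustration, assuming scikit-learn's SpectralEmbedding (an implementation of Laplacian eigenmaps) and TSNE; the parameter values (k0 = 40 neighbors, N0 = 80 intermediate dimensions, 500 iterations) follow the figure captions below. It is not the authors' full Algorithm 1, whose KNN stage is not reproduced here.

    # Minimal sketch: Laplacian eigenmaps (SpectralEmbedding) as a
    # preprocessing step before t-SNE. Parameter names k0 and N0 follow
    # the figure captions; this is not the authors' exact Algorithm 1.
    import numpy as np
    from sklearn.datasets import fetch_openml
    from sklearn.manifold import SpectralEmbedding, TSNE

    # 5000 digits randomly selected from MNIST, as in the experiments.
    X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)
    rng = np.random.default_rng(0)
    idx = rng.choice(len(X), size=5000, replace=False)
    X, y = X[idx] / 255.0, y[idx]

    k0, N0 = 40, 80  # neighbor count and intermediate dimension

    # Step 1: Laplacian eigenmaps reduce 784 dimensions to N0.
    le = SpectralEmbedding(n_components=N0, n_neighbors=k0, random_state=0)
    X_le = le.fit_transform(X)

    # Step 2: standard t-SNE on the intermediate embedding, 500 iterations.
    tsne = TSNE(n_components=2, n_iter=500, random_state=0)  # n_iter is max_iter in newer scikit-learn
    Y = tsne.fit_transform(X_le)
    print("Final KL divergence:", tsne.kl_divergence_)

Comparing tsne.kl_divergence_ against a run of TSNE on the raw X gives the kind of KLD comparison reported in the abstract.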

Keywords: dimensionality reduction; k-nearest neighbor; manifold learning; t-SNE.


Conflict of interest statement

The authors declare no conflicts of interest.

Figures

Figure 1. Visualization of 5000 data points randomly selected from MNIST using t-SNE; the number of iterations is 500.

Figure 2. Visualization of 5000 data points randomly selected from MNIST using t-SNE with LE as the preprocessing strategy. Here we take k0 = 40 and N0 = 80. The number of t-SNE iterations is also 500.

Figure 3. Visualization of 5000 data points randomly selected from MNIST with our preprocessing strategy, as shown in Algorithm 1. Here we take k0 = k1 = 40 and N0 = 80, and the number of iterations is 500. Early exaggeration is exploited during the first 100 iterations of gradient descent.

Figure 4. Visualization of 720 data points from 10 objects randomly selected from COIL-100 using t-SNE.

Figure 5. Visualization of 720 data points from 10 objects randomly selected from COIL-100 with our preprocessing strategy.

Figure 6. Visualization of 5000 data points randomly selected from Fashion-MNIST using t-SNE.

Figure 7. Visualization of 5000 data points randomly selected from Fashion-MNIST with our preprocessing strategy.

Figure 8. The gradient descent process of SNE, the standard t-SNE, and our preprocessing strategy.

Figure 9. The gradient descent process of our preprocessing strategy for different values of the parameter α.

Figure 10. The gradient descent process of our preprocessing strategy for different values of the parameter k, where we let k = k0 = k1.
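
Figures 8-10 track the KL divergence over gradient descent for different parameter settings. Below is a hedged sketch in the spirit of Figure 10, reusing X from the snippet above: it records the final KLD for several neighbor counts k, taking k = k0 = k1 as in the caption. The parameter α is specific to the authors' Algorithm 1 and is not modeled here.

    # Sweep the neighbor count k (= k0 = k1) and record the final KL
    # divergence of t-SNE on the LE-preprocessed data. Reuses X from the
    # snippet above; scikit-learn's TSNE exposes the final KLD but not
    # the full per-iteration trace shown in the figures.
    from sklearn.manifold import SpectralEmbedding, TSNE

    for k in (10, 20, 40, 80):
        X_le = SpectralEmbedding(n_components=80, n_neighbors=k,
                                 random_state=0).fit_transform(X)
        tsne = TSNE(n_components=2, n_iter=500, random_state=0).fit(X_le)
        print(f"k = {k}: final KLD = {tsne.kl_divergence_:.3f}")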
