A General Iterative Clustering Algorithm
- PMID: 36061078
- PMCID: PMC9438941
- DOI: 10.1002/sam.11573
Abstract
The quality of a cluster analysis of unlabeled units depends on the quality of the between-unit dissimilarity measure. Data-dependent dissimilarities are more objective than data-independent geometric measures such as Euclidean distance. As suggested by Breiman, many data-driven approaches are based on decision-tree ensembles, such as a random forest (RF), that produce a proximity matrix that is easily transformed into a dissimilarity matrix. An RF can be grown using labels that distinguish units with real data from units with synthetic data. The resulting dissimilarity matrix is input to a clustering program, and units are assigned labels corresponding to cluster membership. We introduce a General Iterative Clustering (GIC) algorithm that improves the proximity matrix and clusters of the base RF. The cluster labels are used to grow a new RF, yielding an updated proximity matrix that is entered into the clustering program. The process is repeated until convergence. The same procedure can be used with many base procedures, such as the Extremely Randomized Tree ensemble. We evaluate the performance of the GIC algorithm on benchmark and simulated data sets. The properties measured by the Silhouette score are substantially superior to those of the base clustering algorithm. The GIC package has been released in R: https://cran.r-project.org/web/packages/GIC/index.html.
Keywords: Clustering; Extremely randomized tree; Iterative RF clustering; Proximity; Random forest.
Conflict of interest statement
CONFLICT OF INTEREST The authors have no conflicts to disclose.