Learn to optimize – a brief overview
- PMID: 39007005
- PMCID: PMC11242439
- DOI: 10.1093/nsr/nwae132
Abstract
Optimization problems of practical significance are typically solved by highly configurable, parameterized algorithms. Achieving the best performance on a given problem instance requires a trial-and-error configuration process, which is costly and can be prohibitive for problems that are already computationally intensive, e.g. the optimization problems associated with machine learning tasks. Over the past decades, many studies have sought to accelerate this tedious configuration process by learning from a set of training instances. This article refers to these studies as learn to optimize and reviews the progress achieved.
Keywords: automated algorithm configuration; data-driven algorithm design; machine learning; optimization.
© The Author(s) 2024. Published by Oxford University Press on behalf of China Science Publishing & Media Ltd.
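The train-then-reuse loop the abstract describes can be made concrete with a minimal Python sketch: a random-search configurator samples parameter settings for a solver and keeps the one with the best mean cost over a set of training instances, and that learned configuration is then reused on unseen instances. Everything here (run_algorithm, the two-parameter configuration space, the toy cost function) is a hypothetical placeholder for illustration, not code from the article.

    import random

    def run_algorithm(config, instance):
        """Placeholder: run the parameterized solver on one problem
        instance and return its cost (lower is better)."""
        # Toy stand-in: cost depends on how close the step size is to an
        # instance-specific optimum, plus a small penalty per restart.
        return (config["step_size"] - instance) ** 2 + 0.01 * config["restarts"]

    def configure(training_instances, budget=100, seed=0):
        """Random-search configurator: sample `budget` configurations and
        keep the one with the best mean cost on the training instances."""
        rng = random.Random(seed)
        best_config, best_cost = None, float("inf")
        for _ in range(budget):
            config = {
                "step_size": rng.uniform(0.0, 1.0),
                "restarts": rng.randint(0, 10),
            }
            mean_cost = sum(run_algorithm(config, inst)
                            for inst in training_instances) / len(training_instances)
            if mean_cost < best_cost:
                best_config, best_cost = config, mean_cost
        return best_config, best_cost

    if __name__ == "__main__":
        train = [0.2, 0.35, 0.4, 0.5]   # toy training instances
        config, cost = configure(train)
        print(f"learned configuration: {config}, mean training cost: {cost:.4f}")

Practical configurators in this literature replace the blind random sampling with smarter search (e.g. racing or model-based strategies), but the overall structure of learning a configuration offline from training instances and reusing it is the same.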