Review
Natl Sci Rev. 2024 Apr 2;11(8):nwae132. doi: 10.1093/nsr/nwae132. eCollection 2024 Aug.

Learn to optimize-a brief overview

Ke Tang et al. Natl Sci Rev. 2024.

Abstract

Most optimization problems of practical significance are typically solved with highly configurable, parameterized algorithms. Achieving the best performance on a given problem instance requires a trial-and-error configuration process that is very costly, and even prohibitive for problems that are already computationally intensive, e.g. optimization problems arising in machine learning tasks. Over the past decades, many studies have sought to accelerate this tedious configuration process by learning from a set of training instances. This article refers to these studies collectively as learn to optimize (L2O) and reviews the progress achieved.
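The configuration process described above can be sketched in a few lines. The following is a minimal, hypothetical illustration (the toy `solve` routine, the quadratic instances and the candidate step sizes are all assumptions, not from the article): candidate configurations of a parameterized solver are evaluated offline on training instances, and the best-performing configuration is then reused directly on a new instance, avoiding per-instance trial and error.

```python
def solve(instance, step, iters=100):
    """Toy parameterized solver: gradient descent on f(x) = a*(x - b)**2
    for instance = (a, b); returns the final objective value (lower is better)."""
    a, b = instance
    x = 0.0
    for _ in range(iters):
        x -= step * 2 * a * (x - b)  # gradient of a*(x - b)**2
    return a * (x - b) ** 2

# Offline (training) phase: score each candidate configuration by its
# average performance over a set of training instances.
training_instances = [(1.0, 3.0), (2.0, -1.0), (0.5, 5.0)]
candidate_steps = [0.01, 0.05, 0.1, 0.3]
avg_cost = {
    s: sum(solve(inst, s) for inst in training_instances) / len(training_instances)
    for s in candidate_steps
}
best_step = min(avg_cost, key=avg_cost.get)

# Online phase: apply the learned configuration to an unseen instance
# without any further trial-and-error tuning.
new_instance = (1.5, 2.0)
print(best_step, solve(new_instance, best_step))
```

The point of the sketch is the split between an expensive offline phase, amortized over many training instances, and a cheap online phase that simply reuses what was learned.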

Keywords: automated algorithm configuration; data-driven algorithm design; machine learning; optimization.

Figures

Figure 1. Illustration of the general idea of L2O.

Figure 2. Illustration of three training methods of L2O: (a) training performance prediction models, (b) training a single solver and (c) training a portfolio of solvers.
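The portfolio approach (c) can be sketched as follows. This is a hypothetical toy example, not from the article: the `solve` routine, the two portfolio members, the training instances and the single "curvature" feature are all assumptions. A portfolio keeps several differently configured solvers, and a simple learned rule (here a 1-nearest-neighbour selector on one instance feature) picks which member to run on each new instance.

```python
def solve(instance, step, iters=100):
    """Toy solver: gradient descent on f(x) = a*(x - b)**2 for instance = (a, b);
    returns the final objective value (lower is better)."""
    a, b = instance
    x = 0.0
    for _ in range(iters):
        x -= step * 2 * a * (x - b)
    return a * (x - b) ** 2

# A portfolio of two differently configured solvers: the large step is fast
# on flat instances (small a) but diverges on steep ones (large a).
portfolio = {"small_step": 0.05, "large_step": 0.4}

# Offline: label each training instance with its best-performing portfolio member.
training = [(0.2, 4.0), (3.0, 1.0), (0.3, -2.0), (2.5, 0.5)]
labels = [
    (inst[0], min(portfolio, key=lambda k: solve(inst, portfolio[k])))
    for inst in training
]

def select(instance):
    """1-nearest-neighbour selector on the curvature feature a."""
    a = instance[0]
    return min(labels, key=lambda t: abs(t[0] - a))[1]

# Online: route a new instance to the portfolio member predicted to suit it.
new_instance = (2.8, 1.5)
chosen = select(new_instance)
print(chosen, solve(new_instance, portfolio[chosen]))
```

The design point is complementarity: no single member dominates on all instances, so a per-instance selector can outperform any fixed configuration.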
