Entropy (Basel). 2023 Jan 18;25(2):188. doi: 10.3390/e25020188.

Maximum Entropy Exploration in Contextual Bandits with Neural Networks and Energy Based Models

Adam Elwood et al.

Abstract

Contextual bandits can solve a huge range of real-world problems. However, the current popular algorithms for solving them either rely on linear models or on unreliable uncertainty estimation in non-linear models; such uncertainty estimates are required to manage the exploration-exploitation trade-off. Inspired by theories of human cognition, we introduce novel techniques that use maximum entropy exploration, relying on neural networks to find optimal policies in settings with both continuous and discrete action spaces. We present two classes of models: one with neural networks as reward estimators, and the other with energy based models, which model the probability of obtaining an optimal reward given an action. We evaluate the performance of these models in static and dynamic contextual bandit simulation environments. We show that both techniques outperform standard baseline algorithms, such as Upper Confidence Bound and Thompson Sampling, with the energy based models achieving the best overall performance. This provides practitioners with new techniques that perform well in static and dynamic settings, and that are particularly well suited to non-linear scenarios with continuous action spaces.
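
As a rough illustration of the maximum entropy exploration described above, the sketch below implements a softmax (Boltzmann) policy over a discrete action set, in which each action's probability is proportional to exp(estimated reward / temperature). This is a minimal sketch, not the paper's implementation: the linear reward_estimate stand-in for the neural network, the hand-made features, and the temperature parameter alpha are all illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def reward_estimate(context, actions, w):
        # Toy stand-in for the neural network reward estimator:
        # a linear model over simple hand-made context-action features.
        feats = np.stack([np.concatenate([context, [a, a * context[0]]])
                          for a in actions])
        return feats @ w

    def max_entropy_policy(context, actions, w, alpha=1.0):
        # Softmax (Boltzmann) policy: pi(a|x) is proportional to
        # exp(r_hat(x, a) / alpha). Larger alpha gives a higher-entropy
        # (more exploratory) policy; alpha -> 0 recovers the greedy policy.
        logits = reward_estimate(context, actions, w) / alpha
        logits -= logits.max()              # for numerical stability
        probs = np.exp(logits)
        return probs / probs.sum()

    actions = np.arange(10)                 # discrete action set
    w = rng.normal(size=4)                  # weights of the (untrained) estimator
    context = rng.normal(size=2)            # observed context vector

    probs = max_entropy_policy(context, actions, w, alpha=0.5)
    action = rng.choice(actions, p=probs)   # act, observe reward, then update w
    print(probs.round(3), action)

In a full bandit loop, the estimator weights would be refit on the observed (context, action, reward) triples after each step, so the policy's entropy concentrates on high-reward actions as evidence accumulates.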

Keywords: Thompson Sampling; energy based models; machine learning; multi-armed bandit.

Conflict of interest statement

The authors declare no conflict of interest.

Figures

Figure 1. Example of the implicit regression architecture: (a) linear case; (b) quadratic case.

Figure 2. The evolution of an energy function as the training sample size is increased from 100 to 100,000, in an environment with two distinct context categories (labeled 0, blue, and 1, orange) with optimal actions around 3 and 9, respectively.

Figure 3. Example of a simulation environment used for testing the algorithms with two contexts. The probability of receiving a reward given an action depends on the context, as shown on the left. The linearly and non-linearly separable contexts are shown in the centre and on the right, respectively.
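
A test environment like the one in Figure 3 can be sketched as follows. This is an assumed reconstruction, not the paper's exact simulator: the bell-shaped reward-probability curve, its width, and the peak actions (around 3 and 9, echoing Figure 2) are illustrative choices.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical two-context environment: each context category has its
    # own reward-probability curve over a continuous action space.
    OPTIMAL_ACTION = {0: 3.0, 1: 9.0}     # assumed peaks, echoing Figure 2

    def reward_probability(context_label, action, width=1.5):
        # Bell-shaped P(reward | action, context), peaked at the context's
        # optimal action; the paper's exact curves may differ.
        mu = OPTIMAL_ACTION[context_label]
        return np.exp(-0.5 * ((action - mu) / width) ** 2)

    def step(context_label, action):
        # Draw a Bernoulli reward for the chosen action.
        return float(rng.random() < reward_probability(context_label, action))

    ctx = int(rng.integers(0, 2))         # observe a context category (0 or 1)
    a = rng.uniform(0.0, 10.0)            # pick an action from a continuous range
    print(ctx, round(a, 2), step(ctx, a))

A dynamic variant of this environment can be obtained by shifting the entries of OPTIMAL_ACTION over time, forcing the bandit algorithm to keep exploring rather than settling on a fixed action.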
