Inference for Batched Bandits

Kelly W Zhang et al. Adv Neural Inf Process Syst. 2020 Dec;33:9818-9829.

Abstract

As bandit algorithms are increasingly utilized in scientific studies and industrial applications, there is an associated increasing need for reliable inference methods based on the resulting adaptively-collected data. In this work, we develop methods for inference on data collected in batches using a bandit algorithm. We first prove that the ordinary least squares estimator (OLS), which is asymptotically normal on independently sampled data, is not asymptotically normal on data collected using standard bandit algorithms when there is no unique optimal arm. This asymptotic non-normality result implies that the naive assumption that the OLS estimator is approximately normal can lead to Type-1 error inflation and confidence intervals with below-nominal coverage probabilities. Second, we introduce the Batched OLS estimator (BOLS) that we prove is (1) asymptotically normal on data collected from both multi-arm and contextual bandits and (2) robust to non-stationarity in the baseline reward.
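Since the abstract only summarizes BOLS, the following is a minimal Python sketch (not the authors' code) of the batched idea for a two-arm bandit with known σ²: estimate the margin separately within each batch, standardize each per-batch estimate by its own standard error, and combine the standardized statistics across batches. Because each batch's estimate is approximately standard normal given the earlier batches, the combined statistic is approximately N(0, 1) under H0 even though the arm-assignment probabilities adapt over time. The ε-greedy update rule, clipping values, and all constants below are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (not the paper's code) of the batched-OLS (BOLS) idea for a
# two-arm bandit with known sigma^2 = 1: estimate the margin Delta within each
# batch, standardize by that batch's own standard error, and combine across
# batches. All constants and the epsilon-greedy update are illustrative.

rng = np.random.default_rng(0)
T, n = 25, 100            # number of batches and samples per batch (assumed)
beta1 = beta0 = 0.0       # no margin, as in the paper's null simulations
eps = 0.1                 # epsilon-greedy exploration rate (assumed)

batch_stats = []
p1 = 0.5                  # initial probability of assigning arm 1
for t in range(T):
    arms = rng.binomial(1, p1, size=n)                    # adaptive assignment
    rewards = np.where(arms == 1, beta1, beta0) + rng.standard_normal(n)

    n1, n0 = int(arms.sum()), int(n - arms.sum())
    if min(n1, n0) == 0:
        continue                                          # skip degenerate batch
    delta_hat = rewards[arms == 1].mean() - rewards[arms == 0].mean()
    se = np.sqrt(1.0 / n1 + 1.0 / n0)                     # known sigma^2 = 1
    batch_stats.append(delta_hat / se)                    # ~ N(0,1) given arms

    # epsilon-greedy choice of next batch's assignment probability,
    # clipped to [0.1, 0.9] as in the paper's experiments
    greedy = 1 if rewards[arms == 1].mean() > rewards[arms == 0].mean() else 0
    p1 = float(np.clip(1 - eps / 2 if greedy == 1 else eps / 2, 0.1, 0.9))

# Each per-batch statistic is approximately standard normal conditional on the
# past, so their scaled sum is approximately N(0, 1) under H0: Delta = 0.
bols_z = np.sum(batch_stats) / np.sqrt(len(batch_stats))
print(f"BOLS-style Z-statistic: {bols_z:.3f}")
```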

Figures

Figure 6:
Stationary Setting: Type-1 error for a two-sided test of H0 : Δ = 0 vs. H1 : Δ ≠ 0 (α = 0.05). We set β1 = β0 = 0, n = 25, and a clipping constraint of 0.1 ≤ πt(n) ≤ 0.9. We use 100k Monte Carlo simulations; standard errors are < 0.001.
Figure 7:
Stationary Setting: Power for a two-sided test of H0 : Δ = 0 vs. H1 : Δ ≠ 0 (α = 0.05). We set β1 = 0, β0 = 0.25, n = 25, and a clipping constraint of 0.1 ≤ πt(n) ≤ 0.9. We use 100k Monte Carlo simulations; standard errors are < 0.002. We account for Type-1 error inflation as described in Section 6.
Figure 8:
Non-stationary Setting: The two upper plots display the power of estimators for a two-sided test of H0 : ∀t ∈ [1: T], βt,1 − βt,0 = 0 vs. H1 : ∃t ∈ [1: T], βt,1 − βt,0 ≠ 0 (α = 0.05). The two lower plots display two treatment effect trends; the left plot considers a decreasing trend (quadratic function) and the right plot considers an oscillating trend (sine function). We set n = 25 and a clipping constraint of 0.1 ≤ πt(n) ≤ 0.9. We use 100k Monte Carlo simulations; standard errors are < 0.002.
Figure 1:
Empirical distribution of the Z-statistic (σ² is known) of the OLS estimator for the margin. All simulations are with no margin (β1 = β0 = 0); N(0,1) rewards; T = 25; and n = 100. For ϵ-greedy, ϵ = 0.1.
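The simulation behind this figure can be approximated with a short script; the sketch below is an illustration assuming a simple ϵ-greedy update on cumulative sample means (not the authors' code and far fewer Monte Carlo runs than the paper). It pools all batches, forms the usual difference-in-means/OLS Z-statistic for the margin, and checks how often a nominal 5% two-sided test rejects; the paper reports that on such adaptively collected data this statistic deviates from N(0, 1) and the test over-rejects.

```python
import numpy as np

# Illustrative reproduction (not the authors' code) of the Figure 1 setup:
# pooled OLS / difference-in-means Z-statistic for the margin on data collected
# with epsilon-greedy, beta1 = beta0 = 0, N(0,1) rewards, T = 25, n = 100,
# epsilon = 0.1, sigma^2 = 1 known. Far fewer Monte Carlo runs than the paper.

rng = np.random.default_rng(1)
T, n, eps, n_sims = 25, 100, 0.1, 2000

z_stats = []
for _ in range(n_sims):
    a_all, r_all = [], []
    p1 = 0.5                                   # initial assignment probability
    for t in range(T):
        arms = rng.binomial(1, p1, size=n)
        rewards = rng.standard_normal(n)       # beta1 = beta0 = 0
        a_all.append(arms)
        r_all.append(rewards)
        a, r = np.concatenate(a_all), np.concatenate(r_all)
        # epsilon-greedy on cumulative sample means (assumed update rule)
        m1 = r[a == 1].mean() if (a == 1).any() else 0.0
        m0 = r[a == 0].mean() if (a == 0).any() else 0.0
        p1 = 1 - eps / 2 if m1 > m0 else eps / 2
    a, r = np.concatenate(a_all), np.concatenate(r_all)
    n1, n0 = int((a == 1).sum()), int((a == 0).sum())
    delta_hat = r[a == 1].mean() - r[a == 0].mean()
    z_stats.append(delta_hat / np.sqrt(1.0 / n1 + 1.0 / n0))

# Under i.i.d. sampling this statistic would be N(0, 1); on adaptively
# collected data its distribution is non-normal and, per the paper, the
# nominal 5% two-sided test over-rejects. With only 2000 runs the estimate
# of the rejection rate is noisy.
rejection_rate = np.mean(np.abs(np.array(z_stats)) > 1.96)
print(f"Empirical rejection rate at nominal alpha = 0.05: {rejection_rate:.3f}")
```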
Figure 2:
Empirical undercoverage probabilities (coverage probability below 95%) of confidence intervals based on a normal approximation for the OLS estimator. We use Thompson Sampling with N(0,1) priors, a clipping constraint of 0.05 ≤ πt(n) ≤ 0.95, N(0,1) rewards, T = 25, and known σ². Standard errors are < 0.001.
Figure 3:
Stationary Setting: Type-1 error for a two-sided test of H0 : Δ = 0 vs. H1 : Δ ≠ 0 (α = 0.05). We set β1 = β0 = 0, n = 25, and a clipping constraint of 0.1 ≤ πt(n) ≤ 0.9. We use 100k Monte Carlo simulations; standard errors are < 0.001.
Figure 4:
Stationary Setting: Power for a two-sided test of H0 : Δ = 0 vs. H1 : Δ ≠ 0 (α = 0.05). We set β1 = 0, β0 = 0.25, n = 25, and a clipping constraint of 0.1 ≤ πt(n) ≤ 0.9. We use 100k Monte Carlo simulations; standard errors are < 0.002. We account for Type-1 error inflation as described in Section 6.
Figure 5:
Non-stationary baseline reward setting: Type-1 error (upper left) and power (upper right) for a two-sided test of H0 : Δ = 0 vs. H1 : Δ ≠ 0 (α = 0.05). The lower two plots show the expected rewards for each arm; note that the margin is constant across batches. We use n = 25 and a clipping constraint of 0.1 ≤ πt(n) ≤ 0.9. We use 100k Monte Carlo simulations; standard errors are < 0.002.
