Bandit Algorithms

  • £20.59


Tor Lattimore, Csaba Szepesvári
Cambridge University Press, 16 July 2020
ISBN-13: 9781108486828, ISBN-10: 1108486827

Hardcover, 536 pages, 25.1 x 18.3 x 3.3 cm
Language: English

Decision-making in the face of uncertainty is a significant challenge in machine learning, and the multi-armed bandit model is a commonly used framework to address it. This comprehensive and rigorous introduction to the multi-armed bandit problem examines all the major settings, including stochastic, adversarial, and Bayesian frameworks. A focus on both mathematical intuition and carefully worked proofs makes this an excellent reference for established researchers and a helpful resource for graduate students in computer science, engineering, statistics, applied mathematics, and economics. Linear bandits receive special attention as one of the most useful models in applications, while other chapters are dedicated to combinatorial bandits, ranking, non-stationary problems, Thompson sampling, and pure exploration. The book ends with a peek beyond bandits, introducing partial monitoring and learning in Markov decision processes.
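To give a flavour of the material, here is a minimal sketch of the kind of index policy studied in the upper confidence bound chapters (7–10), run on simulated Bernoulli arms. This is an illustration, not code from the book: the `arm_means`, `horizon`, and `seed` parameters and the Bernoulli reward model are assumptions made purely for the demo.

```python
import math
import random

def ucb1(arm_means, horizon, seed=0):
    """Run a UCB1-style policy on simulated Bernoulli arms; return total reward.

    arm_means, horizon and seed are illustrative parameters, not from the book.
    """
    rng = random.Random(seed)
    k = len(arm_means)
    counts = [0] * k      # number of times each arm has been played
    sums = [0.0] * k      # cumulative reward observed from each arm
    total = 0.0
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1   # initialisation: play every arm once
        else:
            # index = empirical mean + exploration bonus sqrt(2 ln t / T_i)
            arm = max(
                range(k),
                key=lambda i: sums[i] / counts[i]
                + math.sqrt(2.0 * math.log(t) / counts[i]),
            )
        # Bernoulli reward with the arm's mean (unknown to the learner)
        reward = 1.0 if rng.random() < arm_means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
        total += reward
    return total

if __name__ == "__main__":
    # Three arms; the best arm pays off with probability 0.7.
    print(ucb1([0.2, 0.5, 0.7], horizon=10_000))
```

With this exploration bonus, the policy's expected regret grows only logarithmically with the horizon in the stochastic setting, the kind of guarantee the book proves and refines across its UCB chapters.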

1. Introduction
2. Foundations of probability
3. Stochastic processes and Markov chains
4. Finite-armed stochastic bandits
5. Concentration of measure
6. The explore-then-commit algorithm
7. The upper confidence bound algorithm
8. The upper confidence bound algorithm: asymptotic optimality
9. The upper confidence bound algorithm: minimax optimality
10. The upper confidence bound algorithm: Bernoulli noise
11. The Exp3 algorithm
12. The Exp3-IX algorithm
13. Lower bounds: basic ideas
14. Foundations of information theory
15. Minimax lower bounds
16. Asymptotic and instance-dependent lower bounds
17. High probability lower bounds
18. Contextual bandits
19. Stochastic linear bandits
20. Confidence bounds for least squares estimators
21. Optimal design for least squares estimators
22. Stochastic linear bandits with finitely many arms
23. Stochastic linear bandits with sparsity
24. Minimax lower bounds for stochastic linear bandits
25. Asymptotic lower bounds for stochastic linear bandits
26. Foundations of convex analysis
27. Exp3 for adversarial linear bandits
28. Follow the regularized leader and mirror descent
29. The relation between adversarial and stochastic linear bandits
30. Combinatorial bandits
31. Non-stationary bandits
32. Ranking
33. Pure exploration
34. Foundations of Bayesian learning
35. Bayesian bandits
36. Thompson sampling
37. Partial monitoring
38. Markov decision processes