
Bandit Convex Optimisation

Tor Lattimore (Google DeepMind, London)

$163.95   $131.14

Hardback

English
Cambridge University Press
19 March 2026
This comprehensive reference brings readers to the frontier of research on bandit convex optimization, also known as zeroth-order convex optimization. The focus is on theoretical aspects, with short, self-contained chapters covering all the necessary tools from convex optimization and online learning, including gradient-based algorithms, interior point methods, cutting plane methods and information-theoretic machinery. The book features a large number of exercises, open problems and pointers to future research directions, making it ideal for students as well as researchers.
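The zeroth-order setting described above can be made concrete with the classic one-point gradient estimator of Flaxman, Kalai and McMahan: the learner only observes function values, yet a single evaluation at a randomly perturbed point yields an unbiased estimate of the gradient of a smoothed version of the objective. Below is a minimal illustrative sketch (not code from the book); the smoothing radius `delta`, step size `eta` and projection radius are free parameters chosen here for demonstration only.

```python
import numpy as np

def one_point_gradient_estimate(f, x, delta, rng):
    """Estimate the gradient of f at x from a single function value.

    Draws u uniformly on the unit sphere; (d / delta) * f(x + delta * u) * u
    is an unbiased estimate of the gradient of the delta-smoothed f.
    """
    d = x.size
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)
    return (d / delta) * f(x + delta * u) * u

def project_to_ball(x, radius):
    """Euclidean projection onto the ball of the given radius."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def zeroth_order_gd(f, x0, steps=20000, eta=0.01, delta=0.05,
                    radius=2.0, seed=0):
    """Projected gradient descent driven only by function evaluations."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    running_sum = np.zeros_like(x)
    for _ in range(steps):
        g = one_point_gradient_estimate(f, x, delta, rng)
        x = project_to_ball(x - eta * g, radius)
        running_sum += x
    return running_sum / steps  # the averaged iterate is far less noisy
```

On a simple quadratic such as f(x) = ||x - 1||^2, the averaged iterate lands close to the minimizer even though the algorithm never observes a gradient; the projection step keeps the high-variance estimates from destabilizing the iterates.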
By:   Tor Lattimore
Imprint:   Cambridge University Press
Country of Publication:   United Kingdom
ISBN:   9781009607599
ISBN 10:   1009607596
Pages:   280
Publication Date:   19 March 2026
Audience:   College/higher education, Professional and scholarly, Primary, Undergraduate
Format:   Hardback
Publisher's Status:   Active

Tor Lattimore is a researcher at Google DeepMind working on reinforcement learning, bandits, optimisation and the theory of machine learning. He is the co-author of an introductory book on bandit algorithms and has published nearly 100 conference and journal articles. He is an action editor for the Journal of Machine Learning Research.

Reviews for Bandit Convex Optimisation

'A landmark text on bandit convex optimization by an authority in the field. This book develops the full theory of zeroth-order online convex optimization (where one must learn from noisy function values without gradients), establishing regret bounds and presenting elegant algorithms from gradient descent to cutting planes, multiplicative updates, and Newton methods. Touching on all areas central to advanced optimization, it is an essential companion for researchers, offering both the conceptual foundations and the algorithmic toolkit that continue to drive progress in online convex optimization and mathematical optimization more broadly.' Elad Hazan, Princeton University