
Multi-Armed Bandits [electronic resource] : Theory and Applications to Online Learning in Networks / by Qing Zhao.

By: Zhao, Qing [author.].
Contributor(s): SpringerLink (Online service).
Material type: Book
Series: Synthesis Lectures on Learning, Networks, and Algorithms
Publisher: Cham : Springer International Publishing : Imprint: Springer, 2020
Edition: 1st ed. 2020
Description: XVIII, 147 p. online resource
Content type: text | Media type: computer | Carrier type: online resource
ISBN: 9783031792892
Subject(s): Artificial intelligence | Cooperating objects (Computer systems) | Programming languages (Electronic computers) | Telecommunication | Artificial Intelligence | Cyber-Physical Systems | Programming Language | Communications Engineering, Networks
DDC classification: 006.3
Contents:
Preface -- Acknowledgments -- Introduction -- Bayesian Bandit Model and Gittins Index -- Variants of the Bayesian Bandit Model -- Frequentist Bandit Model -- Variants of the Frequentist Bandit Model -- Application Examples -- Bibliography -- Author's Biography.
In: Springer Nature eBook
Summary: Multi-armed bandit problems pertain to optimal sequential decision making and learning in unknown environments. Since the first bandit problem, posed by Thompson in 1933 in the context of clinical trials, bandit problems have enjoyed lasting attention from multiple research communities and have found a wide range of applications across diverse domains. This book covers classic results and recent developments on both Bayesian and frequentist bandit problems. We start in Chapter 1 with a brief overview of the history of bandit problems, contrasting the two schools of approaches, Bayesian and frequentist, and highlighting foundational results and key applications. Chapters 2 and 4 cover, respectively, the canonical Bayesian and frequentist bandit models. In Chapters 3 and 5, we discuss major variants of the canonical bandit models that lead to new directions, bring in new techniques, and broaden the applications of this classical problem. In Chapter 6, we present several representative application examples in communication networks and socio-economic systems, aiming to illuminate the connections between the Bayesian and frequentist formulations of bandit problems and how structural results for one may be leveraged to obtain solutions under the other.
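As a quick illustration of the frequentist bandit model the summary refers to (not taken from the book itself), the UCB1 index policy is a standard sketch: each arm's score is its empirical mean reward plus an exploration bonus that shrinks as the arm is sampled more. The arm means and horizon below are hypothetical values chosen for the example.

```python
import math
import random

def ucb1_index(pulls, total_reward, t):
    """UCB1 score for one arm: empirical mean plus exploration bonus.
    pulls: times the arm was played; total_reward: sum of its rewards; t: round."""
    if pulls == 0:
        return float("inf")  # force each arm to be tried at least once
    return total_reward / pulls + math.sqrt(2 * math.log(t) / pulls)

def run_ucb1(means, horizon, seed=0):
    """Play Bernoulli arms with the given success probabilities for `horizon` rounds."""
    rng = random.Random(seed)
    n = len(means)
    pulls = [0] * n
    rewards = [0.0] * n
    for t in range(1, horizon + 1):
        # Select the arm with the highest UCB1 index.
        arm = max(range(n), key=lambda i: ucb1_index(pulls[i], rewards[i], t))
        reward = 1.0 if rng.random() < means[arm] else 0.0
        pulls[arm] += 1
        rewards[arm] += reward
    return pulls

pulls = run_ucb1([0.3, 0.7], horizon=2000)
print(pulls)  # the better (0.7) arm should receive the large majority of pulls
```

Over a long horizon, the logarithmic exploration bonus ensures every arm keeps being sampled occasionally while play concentrates on the empirically best arm, which is the regret trade-off at the heart of the frequentist formulation.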
No physical items for this record


