Model-Based Reinforcement Learning

From Data to Continuous Actions with a Python-based Toolbox

Milad Farsi (University of Tabriz, Iran; University of Waterloo, Canada), Jun Liu (University of Waterloo, Canada); series edited by Maria Domenica Di Benedetto

$223.95

Hardback

English
Wiley-IEEE Press
06 December 2022
Explore a comprehensive and practical approach to reinforcement learning

Reinforcement learning is an essential paradigm of machine learning, wherein an intelligent agent performs actions that ensure the optimal behavior of devices. While this paradigm has gained tremendous success and popularity in recent years, previous scholarship has focused either on theory (optimal control and dynamic programming) or on algorithms (most of which are simulation-based).

Model-Based Reinforcement Learning provides a model-based framework to bridge these two aspects, thereby creating a holistic treatment of the topic of model-based online learning control. In doing so, the authors seek to develop a model-based framework for data-driven control that bridges system identification from data, model-based reinforcement learning, and optimal control, as well as the applications of each. Revisiting classical results through this framework allows for a more efficient reinforcement learning system. At its heart, this book is focused on providing an end-to-end framework, from design to application, of a more tractable model-based reinforcement learning technique.

Readers of Model-Based Reinforcement Learning will also find:

A useful textbook to use in graduate courses on data-driven and learning-based control that emphasizes modeling and control of dynamical systems from data

Detailed comparisons of the impact of different techniques, such as the basic linear quadratic controller, learning-based model predictive control, model-free reinforcement learning, and structured online learning

Applications and case studies, one on ground vehicles with nonholonomic dynamics and another on quadrotor helicopters

An online, Python-based toolbox that accompanies the contents covered in the book, as well as the necessary code and data

Model-Based Reinforcement Learning is a useful reference for senior undergraduate students, graduate students, research assistants, professors, process control engineers, and roboticists.

By:   Milad Farsi, Jun Liu
Series edited by:   Maria Domenica Di Benedetto
Imprint:   Wiley-IEEE Press
Country of Publication:   United States
Dimensions:   Height: 229mm,  Width: 152mm,  Spine: 16mm
Weight:   631g
ISBN:   9781119808572
ISBN 10:   111980857X
Series:   IEEE Press Series on Control Systems Theory and Applications
Pages:   272
Publication Date:   06 December 2022
Audience:   Professional and scholarly, College/higher education, Undergraduate, Further / Higher Education
Format:   Hardback
Publisher's Status:   Active

Milad Farsi received the B.S. degree in Electrical Engineering (Electronics) from the University of Tabriz in 2010 and the M.S. degree in Electrical Engineering (Control Systems) from the Sahand University of Technology in 2013. He gained industrial experience as a Control System Engineer between 2012 and 2016. He received the Ph.D. degree in Applied Mathematics from the University of Waterloo, Canada, in 2022, where he is currently a Postdoctoral Fellow. His research interests include control systems, reinforcement learning, and their applications in robotics and power electronics.

Jun Liu received the Ph.D. degree in Applied Mathematics from the University of Waterloo, Canada, in 2010. He is currently an Associate Professor of Applied Mathematics and a Canada Research Chair in Hybrid Systems and Control at the University of Waterloo, where he directs the Hybrid Systems Laboratory. From 2012 to 2015, he was a Lecturer in Control and Systems Engineering at the University of Sheffield. From 2011 to 2012, he was a Postdoctoral Scholar in Control and Dynamical Systems at the California Institute of Technology. His main research interests are in the theory and applications of hybrid systems and control, including rigorous computational methods for control design with applications in cyber-physical systems and robotics.
