Asian Control Conference Workshop on Reinforcement Learning and Control
Kitakyushu International Conference Center, Kitakyushu, Fukuoka, Japan; June 9, 2019.
This workshop, sponsored by the Asian Control Conference (ASCC) 2019, presents a tutorial on the interaction between reinforcement learning (RL) and control, together with advanced topics on associated applications. The workshop first provides background on RL; basic building blocks of RL such as Markov decision processes, Bellman equations, value iteration, and policy iteration are introduced; the relationship between RL and control is examined from the perspectives of linear quadratic regulation, differential dynamic programming, and linear quadratic Gaussian control; and a blackbox optimization approach to reinforcement learning problems is presented. Two advanced topics on RL and control are then covered: 1) RL for robot control and 2) fall risk assessment and a nursing assistant system using deep RL.
The theme of this workshop can be considered an application of machine learning, a subfield of artificial intelligence. The workshop will provide opportunities for attendees from different research areas (such as control, optimization, and machine learning) to meet, network, and share best practices in RL and control research.
Graduate students, entry-level engineers, and young and senior researchers who are interested in applying reinforcement learning to control system design.
Prof. Wei-Yu Chiu
Department of Electrical Engineering, National Tsing Hua University
No. 101, Section 2, Kuang-Fu Road, Hsinchu 30013, Taiwan
Prof. Takamitsu Matsubara
Graduate School of Information Science, Nara Institute of Science and Technology, Japan
Topics and Speakers
Tutorial on Reinforcement Learning (40 min)
Prof. Wei-Yu Chiu, Department of Electrical Engineering, National Tsing Hua University, Hsinchu, Taiwan.
Abstract: This talk presents basic concepts of RL, including its brief history, its limitations, Markov decision processes, Bellman equations, value and policy iteration methods, linear quadratic regulation/differential dynamic programming/linear quadratic Gaussian control in the RL framework, and online resources on reinforcement learning and control.
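To make the tutorial topics concrete, the Bellman optimality backup behind value iteration can be sketched on a toy Markov decision process. This is an illustrative sketch, not material from the talk; the two-state MDP below (transition tensor P, reward matrix R, discount gamma) is an invented example.

```python
import numpy as np

# Toy 2-state, 2-action MDP.
# P[a, s, s'] = probability of moving from state s to s' under action a.
P = np.array([
    [[0.9, 0.1], [0.2, 0.8]],   # action 0
    [[0.5, 0.5], [0.1, 0.9]],   # action 1
])
# R[s, a] = immediate reward for taking action a in state s.
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
gamma = 0.9  # discount factor

def value_iteration(P, R, gamma, tol=1e-8):
    """Repeat the Bellman optimality backup until the value function converges.

    Returns the optimal value function V* and a greedy policy w.r.t. V*.
    """
    V = np.zeros(R.shape[0])
    while True:
        # Q[s, a] = R[s, a] + gamma * sum_s' P[a, s, s'] * V[s']
        Q = R + gamma * np.einsum('ast,t->sa', P, V)
        V_new = Q.max(axis=1)              # Bellman optimality: best action per state
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)  # converged value function and greedy policy
        V = V_new
```

Policy iteration follows the same structure but alternates full policy evaluation with greedy policy improvement instead of backing up the maximum at every sweep.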
Bio: Wei-Yu Chiu received the Ph.D. degree in communications engineering from National Tsing Hua University (NTHU), Hsinchu, Taiwan, in 2010. He is currently an Assistant Professor of electrical engineering with NTHU. His research interests include multiobjective optimization and machine learning, and their applications to various fields, including control systems, robotics, and smart grids. Dr. Chiu was the recipient of the Outstanding Young Automatic Control Engineering Award from the Chinese Automatic Control Society in 2016 and the Outstanding Young Scholar Academic Award from the Taiwan Association of Systems Science and Engineering in 2017. Since 2015, he has served as an Organizer/Chair of the International Workshop on Integrating Communications, Control, and Computing Technologies for Smart Grid (ICT4SG). He is a Lead Guest Editor for several feature topics/special issues in IEEE magazines and journals and a Subject Editor for IET Smart Grid.
Tutorial on Blackbox Optimization Approach for Reinforcement Learning Problems (40 min)
Prof. Shiro Yano, Institute of Engineering, Division of Advanced Technology & Computer Science, Tokyo University of Agriculture and Technology, Japan.
Abstract: This tutorial presents blackbox optimization methods as an alternative approach to reinforcement learning problems. Topics include direct policy search, specific parameterized policies, the relationship between Bayesian methods and reinforcement learning algorithms, and brief application results.
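The direct policy search idea mentioned above can be sketched in a few lines: treat the episode return as a black box in the policy parameters and improve them with a simple accept-if-better random search. This is an illustrative sketch only; the scalar dynamics, the linear policy u = -k*x, and all constants are invented for the example and are not from the tutorial.

```python
import numpy as np

def episode_return(k, x0=1.0, horizon=50):
    """Black-box objective: total reward of the linear policy u = -k*x
    on a toy scalar system x_{t+1} = x_t + 0.5*u with quadratic costs."""
    x, total = x0, 0.0
    for _ in range(horizon):
        u = -k * x
        total += -(x ** 2) - 0.01 * u ** 2   # negative cost = reward
        x = x + 0.5 * u
    return total

def random_search(n_iters=200, sigma=0.3, seed=0):
    """(1+1)-style random search: perturb the policy parameter and keep
    the perturbation only if the (black-box) return improves."""
    rng = np.random.default_rng(seed)
    k, best = 0.0, episode_return(0.0)
    for _ in range(n_iters):
        cand = k + sigma * rng.standard_normal()
        r = episode_return(cand)
        if r > best:
            k, best = cand, r
    return k, best
```

No gradient of the return is ever needed, which is the point of the blackbox view: the same loop works whether the rollout is a simulator, a learned model, or a physical experiment.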
Bio: Shiro Yano received the M.E. and Ph.D. degrees in precision engineering from The University of Tokyo, Tokyo, Japan, in 2012. From 2012 to 2014, he studied decomposition methods for large-scale optimization problems, such as decentralized smart grid systems, as a senior researcher at Ritsumeikan University, Japan. He is currently an Assistant Professor of Information Science with Tokyo University of Agriculture and Technology, Tokyo, Japan. He received the Best Paper Award at the IEEE International Conference on Micro-Nano Mechatronics and Human Science in 2016. His current research interests include blackbox optimization, reinforcement learning, statistical inference, and their applications to bioinformatics and medical devices.
Case study I: Sample-Efficient Reinforcement Learning for Real-World Robot Control (40 min)
Prof. Takamitsu Matsubara, Graduate School of Information Science, Nara Institute of Science and Technology, Japan.
Abstract: RL has been applied in a broad range of robot control scenarios; however, its application to real-world robots remains difficult because prohibitively long experiments are often required to collect sufficient data samples. Therefore, developing sample-efficient RL algorithms is of primary importance. In this talk, I introduce some of the sample-efficient (deep) RL algorithms we have developed recently and show their real-world applications to dexterous hand manipulation, cloth manipulation, exoskeleton assistive strategies, real-boat autopilot, and more.
Bio: Takamitsu Matsubara received his M.E. in information science from the Nara Institute of Science and Technology (NAIST), Nara, Japan, in 2005 and his Ph.D. in information science from the same institute in 2007. From 2005 to 2007, he was a research fellow (DC1) of the Japan Society for the Promotion of Science. From 2013 to 2014, he was a visiting researcher at the Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands. He is currently an associate professor and the head of the Robot Learning Laboratory at NAIST, a visiting researcher at the ATR Computational Neuroscience Laboratories, Kyoto, Japan, and a visiting researcher at the National Institute of Advanced Industrial Science and Technology (AIST), Tokyo, Japan. His research interests are machine learning and control theory for robotics.
Case study II: Automatic Fall Risk Assessment and Nursing Assistant System Using Deep Reinforcement Learning (30 min)
Mr. Takaaki Namba, Department of Mechanical Systems Engineering, Graduate School of Engineering, Nagoya University, Japan.
Abstract: Preventing patients from falling in hospitals is one of the important tasks in the clinical safety field worldwide. Elderly patients over 60 years old, in particular, experience a remarkable number of fall incidents. However, medical staff cannot keep an eye on all patients at all times. Therefore, to assist medical staff, we propose and study a nursing system that includes automatic primary screening, real-time risk assessment, and risk reduction measures for patients. In this talk, as a case study of RL, I introduce the possibilities, problems, countermeasures, risks, and risk reduction measures of deep RL when applying it in hospitals.
Bio: Takaaki Namba received his B.S. in physics from Nagoya University, Japan, in 1990. Since 1990, he has worked for Panasonic Advanced Technology Development Co., Ltd. (formerly Matsushita Electric Industrial Information Systems Research and Laboratory Nagoya Co., Ltd.). As a project leader, he conducted research and development on embedded software for AVC (audio/video/communication) equipment, robot simulators, and robot control. He entered the doctoral course in Mechanical Systems Engineering (formerly Mechanical Science and Engineering), Graduate School of Engineering, Nagoya University, in 2014. Currently, he is a Ph.D. candidate at Nagoya University. His research interests include safety, risk assessment, machine intelligence, and assistive robotics in the field of clinical safety. He is a member of IEEE, JSME, SICE, RSJ, JSAI, ITE, JSQSH, and JPSCS.