Journal
MECHATRONICS
Volume 24, Issue 8, Pages 1021-1030
Publisher
PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.mechatronics.2014.08.001
Keywords
Robot control; Time optimal motion; Optimization; Reinforcement learning; Natural actor-critic
Funding
- Institute for the Promotion of Innovation through Science and Technology in Flanders (IWT-Vlaanderen) [IWT-SBO 80032]
In this research, time optimal control is considered for the hit motion of a badminton robot during a serve operation. Even though the robot always starts at rest in a given position, it has to move to a target position where the target velocity is not zero, as the robot has to hit the shuttle at that point. The goal is to reach this target state as quickly as possible, yet without violating the limitations of the actuator. To find controllers satisfying these requirements, both model-based and model-free controllers have been developed, with the model-free controllers employing a Natural Actor-Critic (NAC) reinforcement learning algorithm. The model-based controllers can immediately achieve the desired motions relying on prior model information, while the model-free methods are shown to yield the desired robot motions after about 200 trials. However, in order to achieve this result, a good choice of the reward function is essential. To illustrate this choice and validate the resulting controller, a simulation study is presented in which the model-based results are compared to those obtained with two different reward functions. (C) 2014 Elsevier Ltd. All rights reserved.
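The abstract's model-free approach is an episodic Natural Actor-Critic, in which the natural policy gradient can be estimated by regressing episode returns on the compatible features (the score function of the policy). As a minimal illustration of that idea — not the paper's actual controller — the sketch below learns a single scalar "hit velocity" with a Gaussian policy on a hypothetical quadratic-reward task; the task, reward, and all constants are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy task: pick a scalar "hit velocity" a; the reward
# penalizes squared distance to a target velocity. The paper's real
# task (a badminton serve under actuator limits) is far richer.
TARGET = 2.0
SIGMA = 0.5          # fixed exploration std of the Gaussian policy
ALPHA = 0.5          # step size along the natural gradient

def reward(a):
    return -(a - TARGET) ** 2

theta = 0.0          # policy mean, the only learned parameter
for episode in range(200):
    # Collect a small batch of rollouts under the current policy.
    actions = theta + SIGMA * rng.standard_normal(20)
    returns = reward(actions)

    # Compatible features: gradient of log pi(a) w.r.t. theta.
    phi = (actions - theta) / SIGMA ** 2

    # Episodic NAC step: regress returns on [phi, 1]; the coefficient
    # on phi is the natural-gradient direction w (the baseline term
    # absorbs the mean return).
    X = np.stack([phi, np.ones_like(phi)], axis=1)
    w, _b = np.linalg.lstsq(X, returns, rcond=None)[0]

    theta += ALPHA * w   # natural-gradient ascent on expected return
```

For a Gaussian policy with fixed variance, this regression-based step equals the vanilla gradient scaled by the inverse Fisher information, so the update is insensitive to the policy parameterization; as the abstract notes, the shape of the reward function is what chiefly determines whether such learning converges to the desired motion within a few hundred trials.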