Article

Training an Actor-Critic Reinforcement Learning Controller for Arm Movement Using Human-Generated Rewards

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TNSRE.2017.2700395

Keywords

Artificial intelligence; human-machine teaming; Functional Electrical Stimulation; rehabilitation; reinforcement learning

Funding

  1. National Institutes of Health [TRN030167]
  2. Veterans Administration Rehabilitation Research and Development Predoctoral Fellowship-Reinforcement Learning Control for an Upper-Extremity Neuroprosthesis
  3. Ardiem Medical Arm Control Device [W81XWH0720044]
  4. U.S. Department of Defense (DOD) [W81XWH0720044]

Abstract

Functional Electrical Stimulation (FES) employs neuroprostheses to apply electrical current to the nerves and muscles of individuals paralyzed by spinal cord injury, restoring voluntary movement. Neuroprosthesis controllers calculate stimulation patterns to produce desired actions. To date, no existing controller can efficiently adapt its control strategy to the wide range of possible physiological arm characteristics, reaching movements, and user preferences that vary over time. Reinforcement learning (RL) is a control strategy that can incorporate human reward signals as inputs, allowing human users to shape controller behavior. In this paper, ten neurologically intact human participants assigned subjective numerical rewards to train RL controllers, evaluating animations of goal-oriented reaching tasks performed by a planar musculoskeletal human arm simulation. RL controller learning achieved with human trainers was compared against learning driven by human-like rewards generated by an algorithm; metrics included success in reaching the specified target, time required to reach the target, and target overshoot. Both sets of controllers learned efficiently and with minimal differences, significantly outperforming standard controllers. Reward positivity and consistency were found to be unrelated to learning success. These results suggest that human rewards can be used effectively to train RL-based FES controllers.
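To make the training scheme in the abstract concrete, the sketch below shows a minimal tabular actor-critic loop on a toy 1-D reaching task, where a scalar scoring function stands in for the participants' subjective numerical rewards. All names here (`human_like_reward`, the state/action layout, the learning rates) are illustrative assumptions; this is not the authors' musculoskeletal-arm implementation.

```python
import math
import random

# Toy 1-D reaching task: the "arm" occupies one of N_STATES positions
# and must reach TARGET. This is a generic actor-critic sketch, not the
# paper's FES controller; human_like_reward() is a hypothetical stand-in
# for the human trainers' subjective numerical ratings.

N_STATES = 11          # arm positions 0..10
TARGET = 8             # goal position
ACTIONS = (-1, 0, 1)   # move left, hold, move right

theta = [[0.0] * len(ACTIONS) for _ in range(N_STATES)]  # actor: action preferences
values = [0.0] * N_STATES                                # critic: state-value estimates

def softmax_policy(s):
    """Sample an action index from a softmax over the actor's preferences."""
    prefs = theta[s]
    m = max(prefs)
    exps = [math.exp(p - m) for p in prefs]
    z = sum(exps)
    r, acc = random.random(), 0.0
    for i, e in enumerate(exps):
        acc += e / z
        if r < acc:
            return i
    return len(exps) - 1

def human_like_reward(s):
    """Stand-in for a human trainer's score: higher when closer to the target."""
    return -abs(s - TARGET)

def train(episodes=800, alpha=0.1, beta=0.1, gamma=0.95, seed=0):
    random.seed(seed)
    for _ in range(episodes):
        s = random.randrange(N_STATES)
        for _ in range(30):
            a = softmax_policy(s)
            s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
            r = human_like_reward(s2)
            # TD error from the critic drives both updates.
            delta = r + gamma * values[s2] - values[s]
            values[s] += beta * delta          # critic update
            theta[s][a] += alpha * delta       # actor update: reinforce good actions
            s = s2
            if s == TARGET:
                break

def greedy_rollout(start):
    """Follow the learned policy greedily; True if the target is reached."""
    s = start
    for _ in range(2 * N_STATES):
        a = max(range(len(ACTIONS)), key=lambda i: theta[s][i])
        s = min(max(s + ACTIONS[a], 0), N_STATES - 1)
        if s == TARGET:
            return True
    return False

train()
```

In the paper's setting, the reward comes from a human rating an animation of the attempted reach rather than from a fixed function, but the update structure (critic TD error shaping the actor's policy) is the same general actor-critic pattern.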

