4.3 Article; Proceedings Paper

Deep Reinforcement Learning for Spacecraft Proximity Operations Guidance

Journal

Journal of Spacecraft and Rockets
Volume 58, Issue 2, Pages 254–264

Publisher

American Institute of Aeronautics and Astronautics (AIAA)
DOI: 10.2514/1.A34838

Keywords

-

Funding

  1. Natural Sciences and Engineering Research Council of Canada under the Postgraduate Scholarship-Doctoral [PGSD3-503919-2017]

Abstract

This paper introduces a guidance strategy for spacecraft proximity operations that leverages deep reinforcement learning, a branch of artificial intelligence. This technique enables guidance strategies to be learned rather than designed. The learned guidance strategy issues velocity commands that a conventional controller tracks. Control theory is used alongside deep reinforcement learning to lower the learning burden and to facilitate the transfer of the learned behavior from simulation to reality. A proof-of-concept spacecraft pose-tracking and docking scenario is considered, both in simulation and in experiment, to test the feasibility of the proposed approach. Results show that such a system can be trained entirely in simulation and transferred to reality with comparable performance.
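
The abstract describes a layered architecture in which a learned policy issues velocity commands and a conventional controller tracks them. The Python sketch below only illustrates that split under assumed details; the observation and action dimensions, the proportional gain, and the stand-in policy (random weights instead of a network trained with deep reinforcement learning) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Hypothetical dimensions and gains; the paper does not specify these values here.
OBS_DIM = 6   # e.g., relative position and attitude state of the chaser (assumption)
ACT_DIM = 3   # commanded velocity components (assumption)
KP = 2.0      # proportional gain of the conventional velocity-tracking controller (assumption)


class LearnedGuidancePolicy:
    """Stand-in for a trained policy: maps an observation to a velocity command."""

    def __init__(self, rng: np.random.Generator):
        # Random weights keep the sketch runnable; in the paper the guidance
        # policy would instead be trained in simulation with deep RL.
        self.w = rng.standard_normal((ACT_DIM, OBS_DIM)) * 0.1

    def velocity_command(self, obs: np.ndarray) -> np.ndarray:
        # A single linear layer with tanh squashing, purely illustrative.
        return np.tanh(self.w @ obs)


def track_velocity(v_cmd: np.ndarray, v_meas: np.ndarray) -> np.ndarray:
    """Conventional proportional controller that tracks the commanded velocity."""
    return KP * (v_cmd - v_meas)  # control effort handed to the actuators


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    policy = LearnedGuidancePolicy(rng)
    obs = rng.standard_normal(OBS_DIM)   # placeholder relative-state observation
    v_meas = np.zeros(ACT_DIM)           # placeholder measured velocity
    v_cmd = policy.velocity_command(obs)
    u = track_velocity(v_cmd, v_meas)
    print("commanded velocity:", v_cmd, "control effort:", u)
```

Separating guidance (the learned velocity command) from control (the tracking loop) is what the abstract credits with lowering the learning burden and easing simulation-to-reality transfer, since the low-level controller absorbs plant differences between the simulated and real spacecraft.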
