Journal
NEUROCOMPUTING
Volume 170, Issue -, Pages 257-266
Publisher
ELSEVIER SCIENCE BV
DOI: 10.1016/j.neucom.2014.09.092
Keywords
Mobile robots; Hierarchical path planning; A* search; Reinforcement learning; Least squares policy iteration (LSPI); Optimality
In this paper, we propose a novel hierarchical path planning approach for mobile robot navigation in complex environments. The proposed approach has a two-level structure. In the first level, a grid-based A* algorithm quickly finds a geometric path, and several path points are selected as subgoals for the next level. In the second level, an approximate policy iteration algorithm called least-squares policy iteration (LSPI) is used to learn a near-optimal local planning policy that generates smooth trajectories under the kinematic constraints of the robot. Using this near-optimal local planning policy, the mobile robot finds an optimized path by sequentially approaching the subgoals obtained in the first level. One advantage of the proposed approach is that the kinematic characteristics of the mobile robot can be incorporated into the LSPI-based path optimization procedure. A second advantage is that the LSPI-based local path optimizer uses an approximate policy iteration algorithm that has been proven to be data-efficient and stable; the local path optimizer can be trained on sample experiences collected randomly from any reasonable sampling distribution. Furthermore, the LSPI-based local path optimizer can deal with uncertainties in the environment: when unknown obstacles are encountered, only the second-level path needs to be replanned rather than rerunning the whole planner. Simulations of path planning in various types of environments have been carried out, and the results demonstrate the effectiveness of the proposed approach. (C) 2015 Elsevier B.V. All rights reserved.
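The first level described above (grid-based A* search followed by subgoal selection) can be sketched as follows. This is a minimal illustrative implementation, not the authors' code: it assumes a 4-connected occupancy grid, unit step costs, a Manhattan-distance heuristic, and a fixed-stride subgoal selection rule, none of which are specified in the abstract.

```python
import heapq

def astar_grid(grid, start, goal):
    """A* search on a 4-connected occupancy grid.
    grid[r][c] == 1 marks an obstacle cell; start/goal are (row, col).
    Returns the path as a list of cells, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])

    def h(cell):
        # Manhattan distance: admissible for unit-cost 4-connected moves
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_heap = [(h(start), 0, start)]   # (f = g + h, g, cell)
    g = {start: 0}
    parent = {start: None}
    while open_heap:
        _, cost, cur = heapq.heappop(open_heap)
        if cur == goal:
            path = []
            while cur is not None:       # walk parents back to start
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        if cost > g[cur]:
            continue                     # stale heap entry, skip
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = cost + 1
                if ng < g.get((nr, nc), float("inf")):
                    g[(nr, nc)] = ng
                    parent[(nr, nc)] = cur
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
    return None

def select_subgoals(path, stride=2):
    """Pick every `stride`-th path point as a subgoal, always keeping
    the final goal, for the second-level local planner to track."""
    subgoals = path[stride::stride]
    if not subgoals or subgoals[-1] != path[-1]:
        subgoals.append(path[-1])
    return subgoals
```

In the full approach, each consecutive subgoal would then be handed to the LSPI-trained local policy, which refines the piecewise-linear grid path into a trajectory feasible under the robot's kinematic constraints.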