Article
Computer Science, Artificial Intelligence
Vilde B. Gjaerum, Inga Strumke, Jakob Lover, Timothy Miller, Anastasios M. Lekkas
Summary: This paper provides an overview and analysis of methods for building model trees to explain deep reinforcement learning agents solving robotics tasks. The study finds that multi-output trees are important for capturing dependencies among the output features, and that introducing domain knowledge via a hierarchy among the input features improves accuracy and speeds up the building process.
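The core distillation idea above can be sketched as querying a trained policy and fitting a single multi-output tree to its actions, so dependencies between action dimensions survive in one explainable model. This is a minimal illustration, not the paper's method: the stand-in `policy`, its dimensions, and all constants are invented.

```python
# Hypothetical sketch: distilling a multi-output policy into one tree.
# The toy policy and all names here are illustrative assumptions.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Stand-in for a trained deep RL policy with two coupled outputs
# (e.g. surge and yaw commands for a robot).
def policy(obs):
    return np.column_stack([np.tanh(obs[:, 0] - obs[:, 1]),
                            0.5 * np.tanh(obs[:, 1])])

obs = rng.uniform(-1, 1, size=(5000, 4))   # sampled observations
actions = policy(obs)                      # queried policy outputs

# One multi-output tree approximates both action dimensions at once,
# so the surrogate preserves how the outputs co-vary with the inputs.
tree = DecisionTreeRegressor(max_depth=6).fit(obs, actions)
print(tree.score(obs, actions))            # fidelity (R^2) of the surrogate
```

Fitting two separate single-output trees would lose exactly the cross-output structure the paper finds important; scikit-learn handles the multi-output case with a 2-column target.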
Article
Computer Science, Artificial Intelligence
Josiah P. Hanna, Siddharth Desai, Haresh Karnan, Garrett Warnell, Peter Stone
Summary: Grounded simulation learning is a promising framework that alters simulators to better match the real world, enabling successful transfer of policies learned in simulation to the physical world. In controlled experiments, the new GAT algorithm learned control policies superior to those of traditional hand-coded methods.
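The grounding idea can be illustrated in miniature: collect real transitions, then fit a transformation of the simulated actions so the simulator reproduces them. Everything below is an invented one-parameter toy (linear dynamics, a scalar gain mismatch), not GAT itself, which learns the transformation from data with function approximators.

```python
# Toy sketch of simulator grounding under invented linear dynamics.
import random

random.seed(0)
real_gain, sim_gain = 0.8, 1.0                 # real robot responds more weakly

def real_step(s, a): return s + real_gain * a
def sim_step(s, a):  return s + sim_gain * a

# Collect real transitions, then pick the action scaling that makes the
# simulator reproduce them best (a one-parameter "grounding").
data = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(100)]
real_next = [real_step(s, a) for s, a in data]

def sim_error(scale):
    return sum((sim_step(s, scale * a) - sn) ** 2
               for (s, a), sn in zip(data, real_next))

best_scale = min((k / 100 for k in range(50, 151)), key=sim_error)
print(best_scale)   # 0.8: simulated actions are scaled to match reality
```

Policies trained in the grounded simulator then act through this transformation, which is what makes transfer to the physical system plausible.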
Article
Robotics
Paul Maria Scheikl, Eleonora Tagliabue, Balazs Gyenes, Martin Wagner, Diego Dall'Alba, Paolo Fiorini, Franziska Mathis-Ullrich
Summary: Automation has the potential to assist surgeons in robotic interventions by shifting their mental workload from visuomotor control to high-level decision making. Reinforcement learning shows promise in learning complex visuomotor policies, especially in simulation environments. This letter introduces a successful application of visual sim-to-real transfer for robotic manipulation of deformable objects, bridging the sim-to-real gap in surgical robotics.
IEEE ROBOTICS AND AUTOMATION LETTERS
(2023)
Article
Computer Science, Information Systems
Dongfen Li, Lichao Meng, Jingjing Li, Ke Lu, Yang Yang
Summary: Deep reinforcement learning has shown excellent performance in robot control, video games, and multi-agent systems. However, most existing models lack generalization capability, limiting their flexibility in real-world applications. To address this issue, this study proposes a two-stage model that focuses on learning adaptation to visual environment changes before optimizing behavioral policies.
INFORMATION SCIENCES
(2022)
Article
Computer Science, Artificial Intelligence
Man-Je Kim, Jun Suk Kim, Chang Wook Ahn
Summary: Reinforcement learning is promising in machine learning, but its applicability in real-time environments is limited by short response times, high computational complexity, and learning instability. This paper proposes a new method called Evolving Population, which improves reinforcement learning performance by optimizing hyperparameters and the set of available actions. The method uses an iterative structure based on an evolutionary strategy to optimize these elements, and its performance is validated in an environment with real-time properties and large branching factors.
EXPERT SYSTEMS WITH APPLICATIONS
(2023)
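The iterative evolutionary loop described above can be sketched as select-and-mutate over candidate hyperparameter sets. The fitness function, parameter names, and population sizes below are invented for illustration; in the paper each evaluation would instead train and score an RL agent.

```python
# Illustrative evolutionary-strategy loop over toy "hyperparameters";
# the objective and all constants are assumptions, not the paper's method.
import random

def fitness(params):
    # Stand-in for evaluating an RL agent configured with these
    # hyperparameters (e.g. learning rate, exploration rate).
    lr, eps = params
    return -((lr - 0.01) ** 2 + (eps - 0.1) ** 2)   # peak at (0.01, 0.1)

random.seed(0)
population = [(random.uniform(0, 1), random.uniform(0, 1)) for _ in range(20)]

for generation in range(50):
    # Keep the best half as parents, refill with mutated copies.
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    children = [(max(0.0, lr + random.gauss(0, 0.02)),
                 max(0.0, eps + random.gauss(0, 0.02)))
                for lr, eps in parents]
    population = parents + children

best = max(population, key=fitness)
print(best)   # drifts toward the fitness peak near (0.01, 0.1)
```

Because parents are kept (elitism), the best fitness never degrades across generations, which matters when each evaluation is an expensive RL run.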
Article
Engineering, Mechanical
Yuan Tian, Manuel Arias Chao, Chetan Kulkarni, Kai Goebel, Olga Fink
Summary: The study introduces a novel framework for inferring model parameters based on reinforcement learning, showing superior speed and robustness in real-world conditions, with high inference accuracy.
MECHANICAL SYSTEMS AND SIGNAL PROCESSING
(2022)
Article
Robotics
Julian Ibarz, Jie Tan, Chelsea Finn, Mrinal Kalakrishnan, Peter Pastor, Sergey Levine
Summary: Deep reinforcement learning has shown promise in enabling physical robots to learn complex skills in the real world, which presents numerous challenges in perception and movement. Real-world robotics provides a unique domain for evaluating deep RL algorithms, addressing challenges that are often overlooked in mainstream RL research.
INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH
(2021)
Article
Computer Science, Artificial Intelligence
Menglong Yang, Fangrui Wu, Wei Li
Summary: This paper proposes a novel real-time stereo matching method called RLStereo, based on reinforcement learning, which, after training, iteratively takes a small number of actions to search for the disparity value of each pair of stereo images. Experimental results demonstrate the high performance of RLStereo in terms of both speed and accuracy.
IEEE TRANSACTIONS ON IMAGE PROCESSING
(2021)
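The search-by-actions idea can be shown on a 1-D toy: an agent repeatedly nudges a disparity estimate up or down until the left/right matching cost stops improving. The signals, the greedy action rule, and the step size are invented for illustration; they are not RLStereo's trained policy.

```python
# Hedged sketch: iterative disparity search via small corrective actions.
import numpy as np

# Synthetic 1-D "views": the right signal is the left one shifted by 7.
true_disparity = 7
left = np.sin(np.arange(200) * (8 * np.pi / 200))
right = np.roll(left, true_disparity)

def cost(d):
    # Photometric matching cost for an integer disparity hypothesis d.
    return np.mean((left - np.roll(right, -d)) ** 2)

d = 0
for _ in range(30):                        # a few greedy actions
    candidates = {d - 1: cost(d - 1), d: cost(d), d + 1: cost(d + 1)}
    best = min(candidates, key=candidates.get)
    if best == d:                          # "stop" action
        break
    d = best
print(d)                                   # recovers the true disparity, 7
```

A trained agent replaces the greedy cost comparison with a learned action policy, which is what makes the per-image search fast at inference time.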
Review
Computer Science, Artificial Intelligence
Juan Manuel Davila Delgado, Lukumon Oyedele
Summary: This paper aims to consolidate and summarise research knowledge at the intersection of robotics, reinforcement learning, and construction. The study found that reinforcement and imitation learning approaches have not been widely explored in robotics for construction, and the unstructured and dynamic nature of construction poses challenges for these approaches.
ADVANCED ENGINEERING INFORMATICS
(2022)
Article
Robotics
Yuan Bi, Zhongliang Jiang, Yuan Gao, Thomas Wendler, Angelos Karlas, Nassir Navab
Summary: This paper proposes a simulation-based reinforcement learning framework for the real-world navigation of ultrasound probes towards standard longitudinal views of vessels. The use of UNet for binary masking allows the RL agent to be applied in real scenarios without further training. Additionally, a multi-modality state representation structure and a novel standard view recognition approach based on the minimum bounding rectangle are introduced to improve navigation accuracy and stability.
IEEE ROBOTICS AND AUTOMATION LETTERS
(2022)
Article
Engineering, Multidisciplinary
Qiaofeng Ou, Qunqun Xie, Fuhan Chen, Jianhao Peng, Bangshu Xiong
Summary: In this study, a camera calibration method based on reinforcement learning is proposed, which uses a Markov decision process model and a reward function to optimize target locations and poses during calibration. The method effectively improves the success rate of large-FOV camera calibration and addresses reliance on operator experience, low calibration accuracy, and poor stability caused by target placement.
Article
Robotics
Majda Moussa, Giovanni Beltrame
Summary: This paper introduces a real-time approximator for Maxwell's equations based on deep neural networks, which predicts the distribution of a virtual magnetic field. The effectiveness of the approximator is demonstrated through physics-based simulations and real-world experiments, showing its application in various environments and potential for extension to 3D problems.
IEEE ROBOTICS AND AUTOMATION LETTERS
(2021)
Article
Mathematics, Applied
Erhan Bayraktar, Ali Devran Kara
Summary: We propose a Q-learning algorithm for continuous-time stochastic control problems. The algorithm discretizes the state and control action spaces and uses the sampled state process to approximate the optimality equation. We provide upper bounds for the approximation error and performance loss, which are functions of the discretization parameters and reveal the effects of different levels of approximation.
SIAM JOURNAL ON MATHEMATICS OF DATA SCIENCE
(2023)
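The discretize-then-Q-learn recipe can be sketched on a toy problem: a 1-D continuous state is binned, the control space is a small discrete set, and tabular Q-learning runs on the sampled process. The dynamics, reward, and all constants below are illustrative assumptions, not the paper's construction or its error bounds.

```python
# Minimal sketch: tabular Q-learning on a discretized continuous state.
import random

random.seed(0)
N_BINS, ACTIONS = 21, (-0.1, 0.0, 0.1)      # discretized state and control spaces
GAMMA, ALPHA, EPS = 0.95, 0.1, 0.2

def to_bin(x):
    # Map the continuous state x (clipped to [-1, 1]) to a bin index.
    x = max(-1.0, min(1.0, x))
    return int(round((x + 1.0) / 2.0 * (N_BINS - 1)))

Q = [[0.0] * len(ACTIONS) for _ in range(N_BINS)]

for episode in range(2000):
    x = random.uniform(-1, 1)
    for step in range(50):
        s = to_bin(x)
        a = (random.randrange(len(ACTIONS)) if random.random() < EPS
             else max(range(len(ACTIONS)), key=lambda i: Q[s][i]))
        x_next = x + ACTIONS[a] + random.gauss(0, 0.01)   # noisy toy dynamics
        r = -abs(x_next)                                  # reward: stay near 0
        s2 = to_bin(x_next)
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        x = x_next

# Greedy actions at x = -0.8 and x = 0.8 (the policy should push toward 0).
print(max(range(3), key=lambda i: Q[to_bin(-0.8)][i]))
print(max(range(3), key=lambda i: Q[to_bin(0.8)][i]))
```

Finer bins shrink the discretization error but enlarge the table and slow learning; the paper's bounds quantify exactly this trade-off as a function of the discretization parameters.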
Article
Robotics
Tianyi Zhang, Matthew Johnson-Roberson
Summary: This study proposes a method to address the challenging task of robot localization in GPS-denied environments by localizing image observations in a 2D multimodal geospatial map. The experiments show that the proposed method performs better on smaller-scale multimodal maps, is more computationally efficient for real-time applications, and can be used directly in concert with state estimation pipelines.
IEEE ROBOTICS AND AUTOMATION LETTERS
(2022)
Article
Engineering, Industrial
Chen Li, Qing Chang
Summary: A novel control method is proposed for multi-stage production systems to improve system efficiency by dynamically changing the cycle time of individual machines. The method integrates distributed feedback control with a reinforcement learning scheme, and shows significant improvements in overall profits and energy savings.
JOURNAL OF MANUFACTURING SYSTEMS
(2022)