4.8 Article

Accuracy-Guaranteed Collaborative DNN Inference in Industrial IoT via Deep Reinforcement Learning

Journal

IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS
Volume 17, Issue 7, Pages 4988-4998

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TII.2020.3017573

Keywords

Delays; Task analysis; Collaboration; Resource management; Inference algorithms; Sensors; Collaborative deep neural network (DNN) inference; deep reinforcement learning (RL); inference accuracy; sampling rate adaption

Funding

  1. Natural Sciences and Engineering Research Council (NSERC) of Canada

Summary

Collaboration among industrial IoT devices and edge networks is crucial for supporting computation-intensive DNN inference services with low delay and high accuracy. Sampling rate adaption plays a key role in minimizing service delay by dynamically configuring the sampling rates of IoT devices according to network conditions. The proposed deep RL-based algorithm, which transforms the CMDP into an MDP and incorporates an optimization subroutine, significantly reduces average service delay while maintaining long-term inference accuracy.

Abstract

Collaboration among industrial Internet of Things (IoT) devices and edge networks is essential to support computation-intensive deep neural network (DNN) inference services, which require low delay and high accuracy. Sampling rate adaption, which dynamically configures the sampling rates of industrial IoT devices according to network conditions, is key to minimizing the service delay. In this article, we investigate the collaborative DNN inference problem in industrial IoT networks. To capture the channel variation and task arrival randomness, we formulate the problem as a constrained Markov decision process (CMDP). Specifically, sampling rate adaption, inference task offloading, and edge computing resource allocation are jointly considered to minimize the average service delay while guaranteeing the long-term accuracy requirements of different inference services. Since the CMDP cannot be directly solved by general reinforcement learning (RL) algorithms due to the intractable long-term constraints, we first transform the CMDP into an MDP by leveraging the Lyapunov optimization technique. Then, a deep RL-based algorithm is proposed to solve the MDP. To expedite the training process, an optimization subroutine is embedded in the proposed algorithm to directly obtain the optimal edge computing resource allocation. Extensive simulation results are provided to demonstrate that the proposed RL-based algorithm can significantly reduce the average service delay while preserving long-term inference accuracy with a high probability.
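To make the CMDP-to-MDP transformation concrete, the sketch below illustrates the general Lyapunov drift-plus-penalty idea the abstract refers to: a virtual queue per inference service tracks the accumulated accuracy deficit against its long-term target, and the per-slot reward combines a delay penalty with the queue-weighted deficit, so a standard deep RL agent can optimize it without explicit long-term constraints. This is a minimal illustration under assumed interfaces; the class name, accuracy_targets, tradeoff_v, and the exact reward weighting are placeholders, not the paper's formulation.

import numpy as np

class LyapunovRewardShaper:
    """Illustrative sketch (not the paper's exact formulation): fold a
    long-term inference-accuracy constraint into a per-slot reward via
    Lyapunov virtual queues, yielding an unconstrained MDP for deep RL."""

    def __init__(self, accuracy_targets, tradeoff_v=10.0):
        # accuracy_targets[k]: assumed long-term accuracy requirement of service k
        self.targets = np.asarray(accuracy_targets, dtype=float)
        self.v = tradeoff_v                        # delay vs. constraint trade-off weight
        self.queues = np.zeros_like(self.targets)  # virtual accuracy-deficit queues

    def reward(self, service_delay, achieved_accuracy):
        """Return a drift-plus-penalty style reward for one time slot."""
        achieved = np.asarray(achieved_accuracy, dtype=float)
        deficit = self.targets - achieved          # > 0 means accuracy fell short this slot
        # Penalize delay (scaled by V) plus the queue-weighted accuracy deficit.
        r = -(self.v * float(service_delay) + float(np.dot(self.queues, deficit)))
        # Virtual queue update: backlog grows whenever accuracy misses its target.
        self.queues = np.maximum(self.queues + deficit, 0.0)
        return r

In such a setup, the agent that selects sampling rates and offloading decisions would call reward() once per slot: a larger tradeoff_v emphasizes delay reduction, while the virtual queues drive each service's time-averaged accuracy toward its requirement. The edge computing resource allocation itself would come from the embedded optimization subroutine mentioned in the abstract rather than from the learned policy.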
