Article

Throughput Maximization of Delay-Aware DNN Inference in Edge Computing by Exploring DNN Model Partitioning and Inference Parallelism

Journal

IEEE TRANSACTIONS ON MOBILE COMPUTING
Volume 22, Issue 5, Pages 3017-3030

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TMC.2021.3125949

Keywords

Inference algorithms; Delays; Partitioning algorithms; Computational modeling; Task analysis; Approximation algorithms; Parallel processing; Mobile edge computing (MEC); DNN model inference provisioning; throughput maximization; Intelligent IoT devices; approximation and online algorithms; delay-aware DNN inference; DNN partitioning; inference parallelism; computing and bandwidth resource allocation and optimization; algorithm design and analysis


Summary

Mobile Edge Computing (MEC) is a promising paradigm that offloads compute-intensive tasks to MEC networks, providing high-performance processing for mobile applications. This study focuses on accelerating DNN inference in MEC networks through DNN partitioning and multi-thread execution parallelism. It develops novel algorithms for maximizing the number of delay-aware DNN service requests admitted, in both offline and online settings. Experimental simulations demonstrate the promising performance of the proposed algorithms.
Abstract

Mobile Edge Computing (MEC) has emerged as a promising paradigm that caters to the explosive growth of mobile applications by offloading compute-intensive tasks to MEC networks for processing. The surge of deep learning brings new vigor and vitality to the prospect of the intelligent Internet of Things (IoT), and edge intelligence has arisen to provision real-time deep neural network (DNN) inference services for users. To accelerate the processing of a user request's DNN inference in an MEC network, the DNN inference model can usually be partitioned into two connected parts: one part is processed on the local IoT device of the request, and the other part is processed in a cloudlet (edge server) in the MEC network. The DNN inference can be further accelerated by allocating multiple threads of the cloudlet to which the request is assigned. In this paper, we study a novel delay-aware DNN inference throughput maximization problem that aims to maximize the number of delay-aware DNN service requests admitted, by accelerating each DNN inference through jointly exploring DNN partitioning and multi-thread execution parallelism. Specifically, we consider the problem under both offline and online request arrival settings: a set of DNN inference requests is given in advance, and a sequence of DNN inference requests arrives one by one without knowledge of future arrivals, respectively. We first show that the defined problems are NP-hard. We then devise a novel constant approximation algorithm for the problem under the offline setting. We also propose an online algorithm with a provable competitive ratio for the problem under the online setting. We finally evaluate the performance of the proposed algorithms through experimental simulations. Experimental results demonstrate that the proposed algorithms are promising.
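To make the partitioning and parallelism ideas concrete, the following is a minimal illustrative sketch, not the paper's formulation or algorithm: it assumes a simple additive, layer-wise latency model in which the first cut layers run on the IoT device, the intermediate feature map is uploaded over the wireless link, and the remaining layers run on the cloudlet with an idealized linear multi-thread speedup. The Layer profiles, bandwidth, thread count, and delay bound are hypothetical values chosen only for illustration.

```python
# Illustrative sketch only: a toy layer-wise latency model for DNN partitioning
# between an IoT device and a cloudlet, with an idealized linear speedup for
# multi-thread execution on the cloudlet. All profiles below are assumptions,
# not measurements or parameters from the paper.

from dataclasses import dataclass

@dataclass
class Layer:
    device_ms: float   # assumed execution time of this layer on the IoT device
    edge_ms: float     # assumed execution time of this layer on one cloudlet thread
    out_kb: float      # assumed size of this layer's output feature map (KB)

def end_to_end_delay(layers, cut, input_kb, bandwidth_kbps, threads):
    """Delay (ms) when layers[:cut] run locally and layers[cut:] run on the cloudlet.

    cut == 0 offloads everything (the raw input is uploaded);
    cut == len(layers) runs the whole model on the device.
    """
    local = sum(l.device_ms for l in layers[:cut])
    upload_kb = layers[cut - 1].out_kb if cut > 0 else input_kb
    transfer = upload_kb / bandwidth_kbps * 1000.0          # KB / (KB/s) -> ms
    edge = sum(l.edge_ms for l in layers[cut:]) / max(threads, 1)
    return local + transfer + edge

def best_partition(layers, input_kb, bandwidth_kbps, threads, delay_bound_ms):
    """Return (delay, cut) with the smallest delay meeting the bound, or None."""
    candidates = [
        (end_to_end_delay(layers, cut, input_kb, bandwidth_kbps, threads), cut)
        for cut in range(len(layers) + 1)
    ]
    feasible = [dc for dc in candidates if dc[0] <= delay_bound_ms]
    return min(feasible) if feasible else None

if __name__ == "__main__":
    # Hypothetical four-layer model profile (made-up numbers).
    model = [Layer(8.0, 1.0, 600.0), Layer(12.0, 1.5, 300.0),
             Layer(20.0, 2.5, 120.0), Layer(6.0, 0.8, 4.0)]
    print(best_partition(model, input_kb=150.0, bandwidth_kbps=20000.0,
                         threads=4, delay_bound_ms=20.0))
```

The paper's actual contribution goes further: an NP-hardness proof, a constant approximation algorithm for the offline setting, and a competitive online algorithm that jointly allocate cloudlet threads and bandwidth across many requests. The snippet above only illustrates the per-request delay trade-off that such algorithms reason about.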
