4.8 Article

Simulation based QoS aware dynamic caching scheme for heterogeneous content requests in vehicular edge computing

Publisher

ELSEVIER
DOI: 10.1016/j.jksuci.2023.101813

Keywords

Edge cache adjustment; Mobile edge computing; Internet of vehicles; Heterogeneous content request; Reinforcement learning; Quality of service


The Internet of Vehicles (IoV) has expanded substantially in recent years, and the integration of edge servers has become an increasingly prevalent way to serve content requests that demand a high quality of service. Nonetheless, conventional cache policies often prove inadequate for IoV applications, primarily because they cannot effectively handle a diverse array of content requests, the high-speed mobility of vehicles, and the inherent instability of network connections. In this study, we first divide content requests into two distinct categories: latency-sensitive and bandwidth-sensitive. We then establish a model that evaluates service quality under the joint consideration of the two request types, the mobility of vehicles, and the storage capacity of edge nodes. Furthermore, we transform the quality-of-service (QoS) penalty function into a system reward function; this adaptation enables us to propose an innovative edge cache scheme founded on the Deep Deterministic Policy Gradient (DDPG) reinforcement learning (RL) algorithm, which dynamically adjusts the cache in response to the evolving IoV environment. To validate the effectiveness of the proposed approach, we use the Simulation of Urban MObility (SUMO) traffic simulator to construct a road scenario based on a specific segment of the Nanjing Beltway. A wide-ranging set of comparison experiments confirms the improved performance of the DDPG-based deep reinforcement learning method: simulation results show that the proposed algorithm converges quickly and outperforms existing algorithms in service quality, measured as hit ratio for latency-sensitive content and transmission speed for bandwidth-sensitive content.
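
As a rough illustration of the scheme's core idea (a sketch, not the authors' implementation), the Python snippet below shows how a two-part QoS penalty for the two request classes might be negated into an RL reward, and how a DDPG actor-critic pair could score contents for caching under a storage constraint. The state layout, network sizes, the weights w1/w2, and the [0, 1] per-content action encoding are all assumptions introduced here for illustration.

# Illustrative sketch only: the paper's exact state/action design is not
# given in the abstract, so every dimension and weight below is assumed.
import torch
import torch.nn as nn

def reward(lat_hits: int, lat_total: int,
           throughput: float, target_throughput: float,
           w1: float = 0.5, w2: float = 0.5) -> float:
    """Negate a two-part QoS penalty into an RL reward: cache misses for
    latency-sensitive requests plus throughput shortfall for bandwidth-
    sensitive requests (weights w1, w2 are hypothetical)."""
    miss_ratio = 1.0 - lat_hits / max(lat_total, 1)
    bw_shortfall = max(0.0, 1.0 - throughput / target_throughput)
    return -(w1 * miss_ratio + w2 * bw_shortfall)

class Actor(nn.Module):
    """Maps the observed edge state to per-content caching scores in [0, 1]."""
    def __init__(self, state_dim: int, n_contents: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, n_contents), nn.Sigmoid(),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

class Critic(nn.Module):
    """Estimates Q(state, action) for the continuous caching action."""
    def __init__(self, state_dim: int, n_contents: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + n_contents, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, state: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([state, action], dim=-1))

def cache_decision(scores: torch.Tensor, capacity: int) -> torch.Tensor:
    """Project the continuous actor output onto the storage constraint by
    keeping the top-`capacity` contents (a greedy, illustrative choice)."""
    keep = torch.topk(scores, k=capacity, dim=-1).indices
    mask = torch.zeros_like(scores)
    mask.scatter_(-1, keep, 1.0)
    return mask

Projecting the continuous actor output onto a top-k cache mask is one common way to respect an edge node's capacity; the paper's exact action space may differ.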
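Likewise, a minimal TraCI loop sketches how per-vehicle mobility could be read from SUMO at each step to feed the cache controller's state. The scenario file name nanjing_beltway.sumocfg is hypothetical; this mirrors the paper's SUMO-based evaluation only in spirit.

# Illustrative TraCI loop (not the authors' setup): step a SUMO scenario
# and read per-vehicle mobility for the cache controller's state.
import traci

traci.start(["sumo", "-c", "nanjing_beltway.sumocfg"])  # hypothetical config
try:
    while traci.simulation.getMinExpectedNumber() > 0:
        traci.simulationStep()
        for vid in traci.vehicle.getIDList():
            pos = traci.vehicle.getPosition(vid)   # (x, y) network coords
            speed = traci.vehicle.getSpeed(vid)    # m/s
            # ...update request statistics / edge-node association here...
finally:
    traci.close()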
