Article

Achieving Low-Latency Human-to-Machine (H2M) Applications: An Understanding of H2M Traffic for AI-Facilitated Bandwidth Allocation

Journal

IEEE Internet of Things Journal
Volume 8, Issue 1, Pages 626-635

Publisher

IEEE (Institute of Electrical and Electronics Engineers, Inc.)
DOI: 10.1109/JIOT.2020.3007947

Keywords

Bandwidth; Haptic interfaces; Channel allocation; Resource management; Optical network units; Solid modeling; Bandwidth allocation scheme; haptic communication; human-to-machine applications; low latency; tactile Internet


Abstract

This article presents an experimental study of human control and haptic feedback traffic in H2M applications and proposes an artificial intelligence-facilitated low-latency bandwidth allocation (ALL) scheme for emerging H2M applications. By prioritizing bandwidth allocation for control and feedback traffic over content traffic, ALL significantly reduces latency for H2M applications.

Human control and haptic feedback data in emerging Tactile Internet human-to-machine (H2M) applications require stringent low-latency transmission. Understanding the traffic features of these new applications is vital to innovating the network control and resource allocation strategies needed to meet their latency demands. In this article, we present an experimental study of human control and haptic feedback traffic in H2M applications and investigate novel bandwidth allocation schemes for supporting converged H2M application delivery over access networks. We introduce our haptic experiment system and the developed H2M applications, and analyze the collected control and feedback traffic traces. Then, exploiting the correlation between real-time control and feedback reported in our analysis, we propose an artificial intelligence-facilitated low-latency bandwidth allocation (ALL) scheme for emerging H2M applications. ALL provisions priority-differentiated bandwidth allocation for aggregated H2M and conventional content-centric applications over future access networks. With ALL, the central office preallocates bandwidth for control traffic and its corresponding feedback traffic interactively and prioritizes their transmission over content traffic. This expedites H2M application delivery by eliminating the report-then-grant process of existing bandwidth allocation schemes. Through extensive simulations driven by the experimental traffic traces, we comprehensively investigate the latency performance of ALL and existing schemes. Our results validate the superior capability of ALL in reducing and constraining latency for H2M applications.
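
The abstract only outlines how ALL works, so the following toy Python sketch illustrates the general idea it describes: preallocating priority grants for H2M control traffic and the feedback predicted from it, while content traffic falls back to conventional report-then-grant sharing. It is not the paper's algorithm; every identifier and constant here (Onu, predict_feedback_bytes, CYCLE_CAPACITY_BYTES, the toy linear predictor) is an illustrative assumption.

# Toy sketch (not the paper's algorithm) of one priority-differentiated
# allocation cycle in the spirit of ALL: H2M control traffic and the feedback
# predicted from it are granted first without waiting for a report, and the
# leftover capacity is shared by content traffic via report-then-grant.
# All identifiers and constants below are illustrative assumptions.

from dataclasses import dataclass

CYCLE_CAPACITY_BYTES = 156_250  # assumed upstream capacity per allocation cycle


@dataclass
class Onu:
    onu_id: int
    control_bytes: int      # H2M control traffic observed in this cycle
    reported_content: int   # content queue size from the ONU's REPORT message


def predict_feedback_bytes(control_bytes: int) -> int:
    """Stand-in for the AI predictor that learns the control-to-feedback
    correlation; here just a toy linear model."""
    return int(1.2 * control_bytes) + 64


def allocate_cycle(onus: list[Onu]) -> dict[int, dict[str, int]]:
    grants: dict[int, dict[str, int]] = {}
    remaining = CYCLE_CAPACITY_BYTES

    # Pass 1: preallocate grants for control and its predicted feedback,
    # so H2M traffic never waits for the report-then-grant round trip.
    for onu in onus:
        h2m = 0
        if onu.control_bytes:
            h2m = min(onu.control_bytes + predict_feedback_bytes(onu.control_bytes),
                      remaining)
        grants[onu.onu_id] = {"h2m": h2m, "content": 0}
        remaining -= h2m

    # Pass 2: distribute the remaining capacity to content queues in
    # proportion to their reported sizes (conventional report-then-grant).
    total_reported = sum(onu.reported_content for onu in onus)
    if total_reported and remaining:
        for onu in onus:
            grants[onu.onu_id]["content"] = (
                remaining * onu.reported_content // total_reported
            )
    return grants


if __name__ == "__main__":
    demo = [Onu(1, control_bytes=300, reported_content=40_000),
            Onu(2, control_bytes=0, reported_content=90_000)]
    print(allocate_cycle(demo))

In the paper, the predictor's role is played by an AI model trained on the measured correlation between control and feedback traces, and the scheme is evaluated against existing report-then-grant allocation via trace-driven simulation; the sketch above only mirrors the priority-differentiated, preallocation-first structure of a single allocation cycle.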
