Article

Parallelizing Multimodal Background Modeling on a Low-Power Integrated GPU

Publisher

SPRINGER
DOI: 10.1007/s11265-016-1111-z

Keywords

Video surveillance; Low-power integrated GPU; Adaptive background modeling; Multimodal mean

Abstract

Background modeling techniques for embedded computer vision applications must balance accuracy, speed, and power. Basic background modeling techniques run quickly, but their accuracy is not sufficient for computer vision problems involving dynamic backgrounds. In contrast, adaptive background modeling techniques are more robust, but run more slowly. Owing to its high inherent fine-grain parallelism, robust adaptive background modeling has been implemented on GPUs with significant performance improvements over CPUs. However, these implementations are infeasible in embedded applications because of the high power ratings of the targeted general-purpose GPU platforms. This paper focuses on exploiting fine-grain data parallelism and optimizing memory access patterns to map a low-cost adaptive background modeling algorithm, multimodal mean (MMM), onto a low-power GPU with a thermal design power (TDP) of only 12 watts. The algorithm achieves accuracy comparable to the Gaussian mixture model (GMM) algorithm at a lower computational and memory cost. It reaches a frame rate of 392 fps at full VGA resolution (640x480) on the low-power integrated NVIDIA ION GPU. This is a 20x speed-up of the MMM algorithm over the Intel Atom, an embedded CPU platform of comparable TDP. In addition, the MMM algorithm attains a 5-6x speed-up over the GMM implementation on the same ION GPU platform.
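To make the fine-grain, per-pixel parallelization strategy concrete, the sketch below shows a CUDA kernel that assigns one thread per pixel to update a small set of background "cells" and classify the pixel. This is a minimal illustration, not the paper's implementation: the kernel name mmmUpdate, the Cell layout, the number of cells, and the thresholds are all assumptions introduced here for clarity.

// One-thread-per-pixel multimodal-mean style update (illustrative sketch).
// Cell layout, thresholds, and the replacement rule are assumptions,
// not the exact formulation from the paper.
#include <cuda_runtime.h>
#include <stdint.h>

#define K_CELLS      4    // assumed number of background cells per pixel
#define MATCH_THRESH 30   // assumed per-channel matching threshold
#define MIN_COUNT    8    // assumed observation count to call a cell "background"

struct Cell {             // per-pixel running sums and observation count
    float sumR, sumG, sumB;
    unsigned int count;
};

__global__ void mmmUpdate(const uchar3* frame, Cell* cells,
                          uint8_t* fgMask, int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    int pix = y * width + x;
    uchar3 p = frame[pix];                // adjacent threads read adjacent pixels
    Cell* c = &cells[pix * K_CELLS];      // this pixel's K cells (pixel-major layout)

    // Find a cell whose running mean is close to the new pixel value.
    int match = -1;
    for (int k = 0; k < K_CELLS; ++k) {
        if (c[k].count == 0) continue;
        float mr = c[k].sumR / c[k].count;
        float mg = c[k].sumG / c[k].count;
        float mb = c[k].sumB / c[k].count;
        if (fabsf(p.x - mr) < MATCH_THRESH &&
            fabsf(p.y - mg) < MATCH_THRESH &&
            fabsf(p.z - mb) < MATCH_THRESH) { match = k; break; }
    }

    if (match >= 0) {
        // Matched: fold the pixel into the cell's running mean.
        c[match].sumR += p.x; c[match].sumG += p.y; c[match].sumB += p.z;
        c[match].count++;
        fgMask[pix] = (c[match].count >= MIN_COUNT) ? 0 : 255;
    } else {
        // No match: replace the least-observed cell and mark the pixel foreground.
        int weakest = 0;
        for (int k = 1; k < K_CELLS; ++k)
            if (c[k].count < c[weakest].count) weakest = k;
        c[weakest].sumR = p.x; c[weakest].sumG = p.y; c[weakest].sumB = p.z;
        c[weakest].count = 1;
        fgMask[pix] = 255;
    }
}

Launched with a 2D grid of 16x16 thread blocks covering the frame, neighboring threads access neighboring frame pixels, so the input read is coalesced. The pixel-major cell array used here is a simplifying assumption and not necessarily the memory layout the paper optimizes.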
