Article

Position-Aware Participation-Contributed Temporal Dynamic Model for Group Activity Recognition

Journal

IEEE Transactions on Neural Networks and Learning Systems

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TNNLS.2021.3085567

Keywords

Feature extraction; Activity recognition; Logic gates; Spatiotemporal phenomena; Dynamics; Computer vision; Visualization; Attention mechanism; graph neural network (GNN); group activity recognition (GAR); scene understanding

Funding

  1. National Key Research and Development Program of China [2018AAA0102002]
  2. National Natural Science Foundation of China [61732007, 62072245, 61932020]


The study proposes a novel Position-Aware Participation-Contributed Temporal Dynamic Model, which focuses on capturing different types of key actors and their behaviors in group activities. By incorporating a position-aware interaction module and an aggregation long short-term memory, the model aims to better recognize key actors' contributions to group activities in video clips.
Group activity recognition (GAR), which aims to understand the behavior of a group of people in a video clip, has received increasing attention recently. Nevertheless, most existing solutions ignore the fact that not all persons contribute equally to the group activity of the scene. That is to say, different individual behaviors contribute differently to the group activity; meanwhile, people at different spatial positions also contribute differently. To this end, we propose a novel Position-Aware Participation-Contributed Temporal Dynamic Model (P²CTDM), in which two types of key actors are constructed and learned. Specifically, we focus on the behaviors of key actors, who either maintain steady motions (long moving time, called long motions) or display remarkable motions (closely related to other people and to the group activity, called flash motions) at a certain moment. To capture long motions, we rank individual motions according to their intensity, measured by stacking optical flows. To capture flash motions that are closely related to other people, we design a position-aware interaction module (PIM) that jointly considers feature similarity and position information. Beyond that, to capture flash motions that are highly related to the group activity, we present an aggregation long short-term memory (Agg-LSTM) that fuses the outputs of the PIM through time-varying trainable attention factors. Four widely used benchmarks are adopted to evaluate the performance of the proposed P²CTDM against the state of the art.
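The two attention mechanisms described in the abstract can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' implementation: the function names, the similarity/position trade-off `tau`, and the norm-based frame scores are all assumptions, and the Agg-LSTM's trainable time-varying attention factors are replaced here by fixed, data-derived weights for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def position_aware_attention(feats, positions, tau=1.0):
    """Hypothetical PIM-style step: attention over actors that combines
    feature similarity with spatial proximity.

    feats: (N, D) per-actor feature vectors
    positions: (N, 2) per-actor spatial coordinates
    """
    # Feature-similarity term: scaled dot product between actor features.
    sim = feats @ feats.T / np.sqrt(feats.shape[1])
    # Position term: pairwise Euclidean distance penalizes far-away actors.
    dist = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    # Closer and more similar actors receive larger attention weights.
    attn = softmax(sim - dist / tau, axis=-1)
    # Each actor aggregates features from its attended neighbors.
    return attn @ feats

def attention_fuse(frame_feats):
    """Stand-in for the Agg-LSTM's attention-weighted fusion over time:
    per-frame group features are combined with softmax weights. Here the
    scores are illustrative (feature norms), not trained factors.

    frame_feats: (T, D) one group-level feature per frame
    """
    scores = softmax(np.linalg.norm(frame_feats, axis=1))
    return scores @ frame_feats  # (D,) fused clip-level representation
```

In this sketch, dividing the distance term by `tau` controls how strongly spatial proximity dominates feature similarity; the paper's actual module learns this interaction rather than fixing it.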
