DTR-HAR: deep temporal residual representation for human activity recognition
Published 2021
Journal
VISUAL COMPUTER
Volume -, Issue -, Pages -
Publisher
Springer Science and Business Media LLC
Online
2021-02-19
DOI
10.1007/s00371-021-02064-y
References
Related references
Note: Only part of the references are listed.
- A four-stream ConvNet based on spatial and depth flow for human action classification using RGB-D data (2020), D. Srihari et al., MULTIMEDIA TOOLS AND APPLICATIONS
- Improved human action recognition approach based on two-stream convolutional neural network model (2020), Congcong Liu et al., VISUAL COMPUTER
- Real-time multimodal ADL recognition using convolution neural networks (2020), Danushka Madhuranga et al., VISUAL COMPUTER
- Quadruplet Network With One-Shot Learning for Fast Visual Object Tracking (2019), Xingping Dong et al., IEEE TRANSACTIONS ON IMAGE PROCESSING
- A deep hybrid learning model to detect unsafe behavior: Integrating convolution neural networks and long short-term memory (2018), Lieyun Ding et al., AUTOMATION IN CONSTRUCTION
- Video Salient Object Detection via Fully Convolutional Networks (2018), Wenguan Wang et al., IEEE TRANSACTIONS ON IMAGE PROCESSING
- A Deep Network Solution for Attention and Aesthetics Aware Photo Cropping (2018), Wenguan Wang et al., IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE
- Convolutional Neural Networks and Long Short-Term Memory for skeleton-based human activity and hand gesture recognition (2018), Juan C. Núñez et al., PATTERN RECOGNITION
- Combining CNN streams of RGB-D and skeletal data for human activity recognition (2018), Pushpajit Khaire et al., PATTERN RECOGNITION LETTERS
- Action Recognition in Video Sequences using Deep Bi-Directional LSTM With CNN Features (2018), Amin Ullah et al., IEEE Access
- deepGesture: Deep learning-based gesture recognition scheme using motion sensors (2018), Ji-Hae Kim et al., DISPLAYS
- TS-LSTM and temporal-inception: Exploiting spatiotemporal dynamics for activity recognition (2018), Chih-Yao Ma et al., SIGNAL PROCESSING-IMAGE COMMUNICATION
- Supervised spatio-temporal kernel descriptor for human action recognition from RGB-depth videos (2017), Maryam Asadi-Aghbolaghi et al., MULTIMEDIA TOOLS AND APPLICATIONS
- ImageNet Large Scale Visual Recognition Challenge (2015), Olga Russakovsky et al., INTERNATIONAL JOURNAL OF COMPUTER VISION
- Recovering Quantitative Remote Sensing Products Contaminated by Thick Clouds and Shadows Using Multitemporal Dictionary Learning (2014), Xinghua Li et al., IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING
- Evaluating spatiotemporal interest point features for depth-based action recognition (2014), Yu Zhu et al., IMAGE AND VISION COMPUTING
- Spatio-temporal feature extraction and representation for RGB-D human action recognition (2014), Jiajia Luo et al., PATTERN RECOGNITION LETTERS
- Using unlabeled data in a sparse-coding framework for human activity recognition (2014), Sourav Bhattacharya et al., Pervasive and Mobile Computing
- Learning human activities and object affordances from RGB-D videos (2013), Hema Swetha Koppula et al., INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH
- Human Daily Activity Recognition With Sparse Representation Using Wearable Sensors (2013), Mi Zhang et al., IEEE Journal of Biomedical and Health Informatics
- Multilevel Depth and Image Fusion for Human Activity Detection (2013), Bingbing Ni et al., IEEE Transactions on Cybernetics
- 3D Convolutional Neural Networks for Human Action Recognition (2012), Shuiwang Ji et al., IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE
- Speeded-Up Robust Features (SURF) (2008), Herbert Bay et al., COMPUTER VISION AND IMAGE UNDERSTANDING