Article
Computer Science, Information Systems
Haritha V. Das, Kavya Mohan, Linta Paul, Sneha Kumaresan, Chitra S. Nair
Summary: One of the major challenges faced by medical practitioners is communicating effectively with speech-impaired people, especially during online consultations. Existing models for recognizing hand gestures in sign language perform poorly in real-life situations. This article proposes a model that recognizes patients' hand gestures in Indian Sign Language and translates them into corresponding words, enabling clear communication and understanding of medical symptoms.
MULTIMEDIA TOOLS AND APPLICATIONS
(2023)
Article
Computer Science, Artificial Intelligence
Sakshi Sharma, Sukhwinder Singh
Summary: Hand gestures form the foundation of sign language, a visual means of communication. A deep learning CNN model designed for gesture-based sign language recognition achieved high classification accuracy with fewer model parameters. The proposed model outperformed existing techniques in classifying gestures accurately with minimal error rates.
EXPERT SYSTEMS WITH APPLICATIONS
(2021)
Article
Telecommunications
Sakshi Sharma, Sukhwinder Singh
Summary: An efficient sign language recognition system based on deep learning technique was proposed in this study, with main contributions including creating a large dataset of Indian sign language, increasing intra-class variance in the dataset, and using Convolutional Neural Network for feature extraction and classification. Experimental results demonstrated the method's promising performance in terms of accuracy and efficiency.
WIRELESS PERSONAL COMMUNICATIONS
(2022)
Article
Computer Science, Information Systems
E. Rajalakshmi, R. Elakkiya, V. Subramaniyaswamy, L. Prikhodko Alexey, Grif Mikhail, Maxim Bakaev, Ketan Kotecha, Lubna Abdelkareim Gabralla, Ajith Abraham
Summary: A novel vision-based hybrid deep neural network methodology is proposed in this study for recognizing Indian and Russian sign gestures. The proposed work aims to establish a single framework for tracking and extracting multi-semantic properties, such as non-manual components and manual co-articulations. By combining a 3D deep neural network with atrous convolutions for spatial feature extraction, an attention-based Bi-LSTM for temporal and sequential feature extraction, modified autoencoders for abstract feature extraction, and a hybrid attention module for discriminative feature extraction, the proposed sign language recognition framework yields better results than other state-of-the-art frameworks.
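The atrous (dilated) convolutions named in the summary widen a filter's receptive field by spacing its taps apart instead of adding parameters. A minimal pure-Python 1D sketch of the idea, not the authors' 3D implementation (signal and kernel values are made up):

```python
def dilated_conv1d(signal, kernel, dilation=1):
    """1D atrous (dilated) convolution: kernel taps are spaced `dilation`
    samples apart, widening the receptive field at no parameter cost."""
    k = len(kernel)
    span = (k - 1) * dilation + 1  # receptive field of one output sample
    out = []
    for i in range(len(signal) - span + 1):
        out.append(sum(kernel[j] * signal[i + j * dilation] for j in range(k)))
    return out

x = [1, 2, 3, 4, 5, 6, 7]
print(dilated_conv1d(x, [1, 0, -1], dilation=1))  # [-2, -2, -2, -2, -2]
print(dilated_conv1d(x, [1, 0, -1], dilation=2))  # [-4, -4, -4]
```

With dilation 2 the same three-tap kernel covers five input samples; stacking such layers is what lets the spatial extractor capture wide context cheaply.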
Article
Computer Science, Information Systems
Hao Zhou, Wengang Zhou, Yun Zhou, Houqiang Li
Summary: The research proposes a spatial-temporal multi-cue (STMC) network for video-based sign language understanding, with a spatial multi-cue (SMC) module and a temporal multi-cue (TMC) module. A joint optimization strategy and segmented attention mechanism are designed to make the best of multi-cue sources for sign language recognition and translation, achieving new state-of-the-art performance on three sign language benchmarks.
IEEE TRANSACTIONS ON MULTIMEDIA
(2022)
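The segmented attention mechanism in the STMC summary weighs how much each visual cue (hands, face, full-body pose) contributes before fusion. A minimal sketch of softmax-weighted cue fusion under that reading; the cue names, feature vectors, and scores are hypothetical:

```python
import math

def attention_fuse(cue_feats, scores):
    """Softmax-weight per-cue feature vectors (e.g. hands, face, pose)
    and sum them into a single fused representation."""
    exps = [math.exp(s) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(cue_feats[0])
    return [sum(w * f[d] for w, f in zip(weights, cue_feats)) for d in range(dim)]

hands, face, pose = [3.0, 0.0], [0.0, 3.0], [0.0, 0.0]
fused = attention_fuse([hands, face, pose], [0.0, 0.0, 0.0])  # equal attention
```

Raising one cue's score shifts the fused vector toward that cue, which is how attention lets the network emphasize, say, the hands during fingerspelling.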
Article
Computer Science, Artificial Intelligence
Giulia Zanon de Castro, Rubia Reis Guerra, Frederico Gadelha Guimaraes
Summary: Sign languages are crucial for the cognitive and social development of deaf individuals, but communication barriers due to hearing loss have a significant social impact. Bi-directional sign language translation can bridge this communication gap. In this study, a multi-stream deep learning model was developed to recognize signs in Brazilian, Indian, and Korean Sign Languages. The use of multi-stream networks and artificially generated depth maps achieved high accuracy in sign recognition.
EXPERT SYSTEMS WITH APPLICATIONS
(2023)
Article
Multidisciplinary Sciences
Refat Khan Pathan, Munmun Biswas, Suraiya Yasmin, Mayeen Uddin Khandaker, Mohammad Salman, Ahmed A. F. Youssef
Summary: Sign language recognition is a breakthrough for communication within the deaf-mute community. This study proposes a cost-effective technique that detects American Sign Language (ASL) from an image dataset with high accuracy, using a convolutional neural network model and image processing methods.
SCIENTIFIC REPORTS
(2023)
Article
Computer Science, Information Systems
Soumen Das, Saroj Kr Biswas, Biswajit Purkayastha
Summary: The deaf community faces challenges in communicating with the hearing community, and traditional methods such as employing sign language interpreters are neither efficient nor cost-effective. This paper proposes an automated sign language recognition system (AISLRSEW) that combines CNN and local handcrafted features to improve recognition accuracy, specifically for ISL words used in emergency situations. In evaluation and comparison, the proposed model achieves an average accuracy of 94.42%, outperforming existing models.
MULTIMEDIA TOOLS AND APPLICATIONS
(2023)
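Combining CNN and local handcrafted features, as the AISLRSEW summary describes, is commonly done by late fusion: the two descriptors are concatenated into one vector before classification. A minimal sketch under that assumption (the feature values are made up):

```python
def fuse_features(cnn_feats, handcrafted_feats):
    """Late fusion by concatenation: deep and handcrafted descriptors
    become one joint feature vector for the downstream classifier."""
    return list(cnn_feats) + list(handcrafted_feats)

deep = [0.12, 0.88, 0.05]   # hypothetical CNN embedding
local = [0.40, 0.10]        # hypothetical handcrafted descriptor (e.g. edge histogram)
fused = fuse_features(deep, local)  # 5-dimensional joint vector
```

The appeal of this design is that the handcrafted part injects domain knowledge (hand shape, edges) the CNN may miss on small datasets.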
Article
Multidisciplinary Sciences
Shiqi Wang, Kankan Wang, Tingping Yang, Yiming Li, Di Fan
Summary: This paper proposes an improved 3D-ResNet sign language recognition algorithm with enhanced hand features, aiming to improve the accuracy of sign language recognition. The algorithm detects the hand regions using the improved EfficientDet network and enhances the detection ability with dual channel and spatial attention modules. Additionally, an improved residual module is used to extract sign language features. Experimental results show that the proposed algorithm achieves higher recognition accuracy compared to other algorithms.
SCIENTIFIC REPORTS
(2022)
Article
Chemistry, Analytical
Yuejiao Wang, Zhanjun Hao, Xiaochao Dang, Zhenyi Zhang, Mengqiao Li
Summary: With the global spread of the novel coronavirus, avoiding human-to-human contact has become an effective way to curb transmission. This paper proposes a contactless gesture and sign language behavior sensing method based on ultrasonic signals (UltrasonicGS). The method applies data augmentation techniques and the Connectionist Temporal Classification (CTC) algorithm to improve the behavior recognition model and achieve better recognition of sign language behaviors.
Article
Computer Science, Information Systems
Amandeep Singh Dhanjal, Williamjeet Singh
Summary: Sign language is the most suitable communication medium for hearing-impaired individuals, yet communication gaps with hearing people remain. This research aims to bridge that gap by developing an automatic system that translates speech to Indian Sign Language, reporting successful results in model training and in evaluating the system's modules. Future directions include adding non-manual SL features and sentence-level translation to improve usability and communication for the hearing impaired.
MULTIMEDIA TOOLS AND APPLICATIONS
(2022)
Article
Computer Science, Hardware & Architecture
Rinki Gupta, Arun Kumar
Summary: Sign language recognition is often performed using a hierarchical classification approach to reduce complexity and improve accuracy. This paper instead introduces a multi-label classification method that categorizes signs by their lexical attributes, yielding a lower classification error rate than traditional tree-based methods. Key features of the approach are the integrated processing of signals from both hands to determine static or dynamic states, and the use of symmetry for sign categorization.
COMPUTERS & ELECTRICAL ENGINEERING
(2021)
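The multi-label idea above can be pictured as deciding each lexical attribute independently and matching the resulting attribute vector against a lexicon, rather than following one path through a decision tree. A minimal sketch; the attribute set, sign names, and scores are all hypothetical:

```python
SIGN_ATTRIBUTES = {  # hypothetical lexicon: (two_handed, dynamic, symmetric)
    "hello":  (0, 1, 0),
    "thanks": (1, 1, 1),
    "stop":   (1, 0, 1),
}

def predict_attributes(scores, threshold=0.5):
    """Independent per-attribute decisions (multi-label), so one
    misjudged attribute does not derail a whole tree branch."""
    return tuple(int(s >= threshold) for s in scores)

def match_sign(scores):
    """Return all lexicon entries whose attribute vector matches."""
    attrs = predict_attributes(scores)
    return [name for name, a in SIGN_ATTRIBUTES.items() if a == attrs]

print(match_sign((0.9, 0.2, 0.8)))  # ['stop']
```

Because attributes are judged independently, an error in one of them costs only that bit of the vector, which is one intuition for the lower error rate reported.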
Article
Chemistry, Analytical
Jesus Galvan-Ruiz, Carlos M. Travieso-Gonzalez, Alejandro Pinan-Roescher, Jesus B. Alonso-Hernandez
Summary: According to the WHO, a significant percentage of the global population has difficulty with oral communication due to hearing disorders, which makes tools that aid their daily communication important. This research focuses on transcribing Spanish Sign Language (SSL) using a Leap Motion volumetric sensor that captures hand movements in 3D. Working with a hearing-impaired subject and a recorded vocabulary of 176 dynamic words, the system achieves 95.17% accuracy using Dynamic Time Warping (DTW).
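DTW, the matching technique named above, aligns two gesture trajectories that differ in speed and returns their minimal cumulative distance. A minimal pure-Python sketch on 1D sequences (real Leap Motion data would be multi-dimensional; these sequences are made up):

```python
def dtw_distance(a, b):
    """Dynamic Time Warping: find the minimal cumulative cost of
    aligning sequence a to sequence b, allowing stretches in time."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

slow = [0, 0, 1, 1, 2, 2, 3, 3]
fast = [0, 1, 2, 3]
print(dtw_distance(slow, fast))  # 0.0: same shape signed at different speeds
```

That zero distance for the same trajectory performed at half speed is exactly why DTW suits dynamic signs: a classifier can compare a recorded word against templates without requiring identical signing speed.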
Article
Chemistry, Analytical
Zhenxing Zhou, Vincent W. L. Tam, Edmund Y. Lam
Summary: Continuous sign language recognition using different types of sensors is a challenging research direction: vision-based methods involve computation-intensive algorithms and translation delays, while gesture-based methods using wearable devices provide instant translation but limited information. To address this, a BLSTM-based multi-feature framework using smartwatches is proposed. Experimental results show a lower word error rate than existing approaches. A portable sign language collection and translation platform is also proposed.
Article
Computer Science, Artificial Intelligence
Zhengzhe Liu, Lei Pang, Xiaojuan Qi
Summary: This study introduces a mutual enhancement network (MEN) for joint sign language recognition and education, formulating the recognition system and the education system as an expectation-maximization (EM) framework to boost performance. Experimental results validate the superiority of the proposed framework.
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS
(2022)