Article
Multidisciplinary Sciences
Reem Aljuhani, Aseel Alfaidi, Bushra Alshehri, Hajer Alwadei, Eman Aldhahri, Nahla Aljojo
Summary: This study proposes a convolutional neural network model for recognizing Arabic alphabet signs in sign language. Experimental results show a recognition accuracy of 94.46%, outperforming previous studies.
ARABIAN JOURNAL FOR SCIENCE AND ENGINEERING
(2023)
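The entry above relies on standard CNN building blocks (convolution, ReLU, pooling, a softmax classifier over the 28 Arabic letters). As a hedged illustration only, and not the paper's actual architecture, the forward pass of one such block can be sketched in NumPy; the 32x32 input size, 3x3 kernel, and random weights are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(img, kernel):
    """Valid 2-D convolution (cross-correlation, as in most CNN libraries)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def maxpool2(x):
    """2x2 max pooling with stride 2 (trims odd edges)."""
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    return x[:h, :w].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Forward pass for one 32x32 grayscale sign image through a single
# conv block, then a dense softmax over 28 alphabet classes.
image = rng.random((32, 32))
kernel = rng.standard_normal((3, 3))
features = maxpool2(relu(conv2d(image, kernel))).ravel()  # 15*15 = 225 values
weights = rng.standard_normal((28, features.size)) * 0.01
probs = softmax(weights @ features)
print(probs.shape)  # (28,) class probabilities summing to 1
```

In a trained model the kernel and dense weights would be learned by backpropagation; here they are random and only the shapes and data flow are meaningful.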
Article
Chemistry, Physical
Yuxuan Liu, Xijun Jiang, Xingge Yu, Huaidong Ye, Chao Ma, Wanyi Wang, Youfan Hu
Summary: Sign language recognition bridges communication between hearing- or speech-impaired people and those who do not know sign language. A wearable system combining sensors with a convolutional neural network was proposed to recognize hand gestures and movement trajectories, achieving high accuracy on both isolated sign language words and sentences.
Article
Computer Science, Artificial Intelligence
Ashish Sharma, Nikita Sharma, Yatharth Saxena, Anuraj Singh, Debanjan Sadhya
Summary: This paper discusses a comparative analysis of gesture recognition techniques, testing three models on an Indian Sign Language dataset. The hierarchical model outperformed the other two, achieving 98.52% accuracy for one-hand gestures and 97% for two-hand gestures.
NEURAL COMPUTING & APPLICATIONS
(2021)
Article
Chemistry, Analytical
Jesus Galvan-Ruiz, Carlos M. Travieso-Gonzalez, Alejandro Pinan-Roescher, Jesus B. Alonso-Hernandez
Summary: According to the WHO, a significant percentage of the global population faces difficulty in oral communication due to hearing disorders. This article discusses the importance of developing tools to aid these individuals in daily communication. The research focuses on transcribing Spanish Sign Language (SSL) using a Leap Motion volumetric sensor capable of recognizing hand movements in 3D. Collaborating with a hearing-impaired subject and recording 176 dynamic words, the research achieves 95.17% prediction accuracy using Dynamic Time Warping (DTW).
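DTW, the matching technique named in the entry above, aligns two variable-speed sequences by dynamic programming. A minimal sketch of DTW-based nearest-template classification follows; the 1-D "trajectories", word labels, and feature representation are invented stand-ins, not the paper's Leap Motion features.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-programming DTW between two sequences of feature
    vectors (one row per time step). Returns the accumulated alignment cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])  # local distance
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return float(D[n, m])

def classify(query, templates):
    """Return the label whose recorded template has the smallest
    DTW distance to the query sequence."""
    return min(templates, key=lambda lbl: dtw_distance(query, templates[lbl]))

# Toy example: 1-D trajectories standing in for 3D hand-motion features.
templates = {
    "hola":    np.array([[0.0], [1.0], [2.0], [1.0], [0.0]]),
    "gracias": np.array([[2.0], [2.0], [0.0], [0.0], [2.0]]),
}
query = np.array([[0.1], [0.9], [2.1], [1.1], [0.0]])  # noisy "hola"
print(classify(query, templates))  # -> "hola"
```

Because DTW tolerates local stretching and compression of the time axis, the same word signed at different speeds still aligns well, which is why it suits dynamic gesture matching.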
Article
Chemistry, Multidisciplinary
Jungpil Shin, Abu Saleh Musa Miah, Md. Al Mehedi Hasan, Koki Hirooka, Kota Suzuki, Hyoun-Sup Lee, Si-Woong Jang
Summary: Sign language recognition is a crucial application in hand gesture recognition and computer vision research. However, research on Korean sign language classification remains limited because of the challenges posed by lighting variation and background complexity. To overcome these challenges, a convolution- and transformer-based multi-branch network is proposed, achieving higher performance by combining the transformer's long-range dependency computation with the CNN's local feature calculation.
APPLIED SCIENCES-BASEL
(2023)
Article
Engineering, Biomedical
Qi Guo, Shujun Zhang, Liwei Tan, Ke Fang, Yinghao Du
Summary: This study proposes a continuous sign language recognition method based on interactive attention and improved graph convolutional networks. The method enhances the spatial-temporal correlation mining ability and recognition performance of the network by utilizing the interaction between skeleton data and RGB data through an interactive attention mechanism.
BIOMEDICAL SIGNAL PROCESSING AND CONTROL
(2023)
Article
Computer Science, Information Systems
Haritha V. Das, Kavya Mohan, Linta Paul, Sneha Kumaresan, Chitra S. Nair
Summary: One of the major challenges medical practitioners face is communicating effectively with speech-impaired people, especially during online consultations, and existing models for recognizing sign language hand gestures perform poorly in real-life situations. This article proposes a model that recognizes patients' hand gestures in Indian Sign Language and translates them into corresponding words, enabling clear communication and understanding of medical symptoms.
MULTIMEDIA TOOLS AND APPLICATIONS
(2023)
Article
Computer Science, Artificial Intelligence
Gul Varol, Liliane Momeni, Samuel Albanie, Triantafyllos Afouras, Andrew Zisserman
Summary: The focus of this work is sign spotting, which aims to identify whether and where a sign has been signed in a continuous, co-articulated sign language video. This is achieved by training a model using various types of available supervision, such as watching existing footage, reading associated subtitles, and looking up words in visual sign language dictionaries. The effectiveness of the approach is validated on low-shot sign spotting benchmarks. Additionally, a machine-readable British Sign Language (BSL) dictionary dataset called BslDict is provided to facilitate further study of this task.
INTERNATIONAL JOURNAL OF COMPUTER VISION
(2022)
Article
Computer Science, Artificial Intelligence
Fei Wang, Yuxuan Du, Guorui Wang, Zhen Zeng, Lihong Zhao
Summary: Existing sign language recognition methods have made progress but still fall short in accuracy and speed. We propose a new method that achieves higher accuracy at faster speed. We also build a large-scale Chinese sign language video dataset to address the limitations of existing datasets.
NEURAL COMPUTING & APPLICATIONS
(2022)
Article
Computer Science, Artificial Intelligence
Yunus Can Bilge, Ramazan Gokberk Cinbis, Nazli Ikizler-Cinbis
Summary: This paper addresses the problem of zero-shot sign language recognition (ZSSLR), aiming to recognize instances of unseen sign classes by leveraging models learned over the seen sign classes. Textual sign descriptions and attributes from sign language dictionaries are used as semantic class representations for knowledge transfer. Three benchmark datasets are introduced to analyze the problem in detail. The proposed approach builds spatiotemporal models of body and hand regions, and shows that textual and attribute based class definitions are effective for recognizing previously unseen sign classes within a zero-shot learning framework. Techniques to analyze the influence of binary attributes in zero-shot predictions are also introduced. The introduced approaches and datasets are expected to facilitate further exploration of zero-shot learning in sign language recognition.
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE
(2023)
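The zero-shot entry above recognizes unseen sign classes by matching predictions against textual and attribute-based class definitions. The matching step can be sketched as follows; the binary attributes, class names, and score vector are purely hypothetical illustrations, not the paper's actual dictionary attributes or spatiotemporal model.

```python
import numpy as np

# Hypothetical binary attribute definitions for unseen sign classes,
# e.g. derived from sign language dictionary descriptions.
class_attributes = {
    "BOOK":  np.array([1, 0, 1, 0, 1]),
    "DRINK": np.array([0, 1, 0, 1, 0]),
    "HELLO": np.array([0, 1, 1, 0, 0]),
}

def predict_class(attribute_scores, class_attributes):
    """Nearest unseen class by cosine similarity between attribute scores
    predicted from video and each class's attribute definition."""
    def cos(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(class_attributes,
               key=lambda c: cos(attribute_scores, class_attributes[c]))

# Attribute probabilities a (hypothetical) video model might output.
scores = np.array([0.9, 0.1, 0.8, 0.2, 0.7])
print(predict_class(scores, class_attributes))  # -> "BOOK"
```

The key idea is that no labeled videos of "BOOK" are needed at training time: only its attribute definition, which transfers knowledge from seen classes whose attribute predictors were trained normally.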
Article
Chemistry, Analytical
Muneer Al-Hammadi, Mohamed A. Bencherif, Mansour Alsulaiman, Ghulam Muhammad, Mohamed Amine Mekhtiche, Wadood Abdul, Yousef A. Alohali, Tareq S. Alrayes, Hassan Mathkour, Mohammed Faisal, Mohammed Algabri, Hamdi Altaheri, Taha Alfakih, Hamid Ghaleb
Summary: This study presents an efficient architecture for sign language recognition based on a convolutional graph neural network. The proposed architecture enhances the spatial context representation of gestures through a spatial attention mechanism and achieves outstanding results on various datasets.
Article
Computer Science, Artificial Intelligence
Xiangwei Zheng, Xiaomei Yu, Yongqiang Yin, Tiantian Li, Xiaoyan Yan
Summary: The paper proposes an emotion recognition method based on three-dimensional feature maps and CNNs, which improves the accuracy of emotion recognition through steps such as calibration, segmentation, feature extraction, and CNN design. Experimental results demonstrate that the proposed method has better classification accuracy than state-of-the-art methods.
INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS
(2021)
Article
Computer Science, Artificial Intelligence
Ragib Amin Nihal, Sejuti Rahman, Nawara Mahmood Broti, Shamim Ahmed Deowan
Summary: This paper proposes two approaches, one based on conventional transfer learning and one on contemporary zero-shot learning, for automatic BdSL alphabet recognition of both seen and unseen data. The models achieved satisfactory results on a large dataset, demonstrating their potential to serve the hearing- and speech-impaired community and offering a promising solution to the challenges of automatic BdSL recognition.
PATTERN RECOGNITION LETTERS
(2021)
Article
Computer Science, Information Systems
Samiya Kabir Youme, Towsif Alam Chowdhury, Hossain Ahamed, Md Sayeed Abid, Labib Chowdhury, Nabeel Mohammed
Summary: Sign language serves as a crucial means of communication for the hearing impaired, yet effective communication with the wider public remains a challenge. While significant research exists on sign language datasets from other countries, particularly those associated with English and French, substantial work on Bangla Sign Language is lacking. Studies conducted predominantly on small datasets have shown satisfactory performance but fail to generalize, especially with deep learning solutions. This highlights the importance of inter-dataset evaluation and the need for standardized datasets to improve generalization in real-life applications.
Article
Computer Science, Artificial Intelligence
Sunanda Das, Md. Samir Imtiaz, Nieb Hasan Neom, Nazmul Siddique, Hui Wang
Summary: Sign language serves as a comprehensive medium of communication for individuals with hearing and speech impairments. This paper presents a hybrid model and a background elimination algorithm for the automatic recognition of Bangla Sign Language. The proposed system achieves high accuracy and precision in character and digit recognition.
EXPERT SYSTEMS WITH APPLICATIONS
(2023)
Article
Computer Science, Information Systems
M. Suneetha, M. V. D. Prasad, P. V. V. Kishore
Summary: The study explores video-based sign language recognition using deep learning models, introducing a multi-stream CNN combined with multi-view attention mechanism to address view invariance and achieve improved recognition accuracy.
JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION
(2021)
Article
Computer Science, Information Systems
E. Kiran Kumar, P. V. V. Kishore, D. Anil Kumar, M. Teja Kiran Kumar
Summary: The study proposes using 3D motion capture technology and a graph-matching algorithm for machine translation of sign language, addressing two key issues in sign recognition with a two-phase solution.
JOURNAL OF KING SAUD UNIVERSITY-COMPUTER AND INFORMATION SCIENCES
(2021)
Article
Computer Science, Information Systems
D. Anil Kumar, A. S. C. S. Sastry, P. V. V. Kishore, E. Kiran Kumar
Summary: 3D sign language recognition is challenging due to the complex spatio-temporal variations of hands and fingers. A twin motion algorithm is proposed to address the variable motion joints, resulting in a method that is signer invariant, motion invariant, and faster compared to state-of-the-art graph kernel methods.
JOURNAL OF KING SAUD UNIVERSITY-COMPUTER AND INFORMATION SCIENCES
(2022)
Article
Computer Science, Information Systems
M. Suneetha, M. V. D. Prasad, P. V. V. Kishore
Summary: This study introduces a novel model for building a view-sensitive environment in multi-view sign language recognition, using metric learning to extract features from multiple views and demonstrating higher accuracy in experiments.
MULTIMEDIA TOOLS AND APPLICATIONS
(2022)
Article
Computer Science, Artificial Intelligence
Suneetha Mopidevi, M. V. D. Prasad, Polurie Venkata Vijay Kishore
Summary: In this paper, a multiview meta-metric learning model is proposed for video-based sign language recognition. Unlike traditional metric learning, this approach is based on set-based distances and utilizes meta-cells and task-based learning. The proposed model also introduces a maximum view pooled distance for binding intra-class views. Experimental results demonstrate that the multiview meta-metric learning model achieves higher accuracies than the baselines on multiview sign language and human action recognition datasets.
PATTERN ANALYSIS AND APPLICATIONS
(2023)
Review
Computer Science, Theory & Methods
Kumari Kavitha, E. Kiran Kumar
Summary: This review surveys deep learning techniques for the early identification and segmentation of brain tumors, and outlines emerging research directions and clinical solutions.
INTERNATIONAL JOURNAL OF ADVANCED COMPUTER SCIENCE AND APPLICATIONS
(2022)
Article
Computer Science, Theory & Methods
Sk Ashraf Ali, M. V. D. Prasad, P. Praveen Kumar, P. V. V. Kishore
Summary: The primary objective of this work is to build a competitive global view from multiple views within a class label. Spatio-temporal features are extracted from videos of skeletal sign language using a 3D convolutional neural network and ensembled into a low-dimensional subspace; the constructed global view is then used as training data for sign language recognition.
INTERNATIONAL JOURNAL OF ADVANCED COMPUTER SCIENCE AND APPLICATIONS
(2022)
Article
Computer Science, Information Systems
M. Teja Kiran Kumar, P. V. V. Kishore, B. T. P. Madhav, D. Anil Kumar, N. Sasi Kala, K. Praveen Kumar Rao, B. Prasad
Summary: The research explores the impact of multiple random skeletal-joint orderings on deep learning systems, proposing the novel idea of learning skeletal-joint volumetric features with a spectrally graded CNN. The study demonstrates that joint-order-independent feature learning is achievable with CNNs trained on quantified spatio-temporal feature maps extracted from randomly shuffled skeletal joints.