Revolutionizing American Sign Language Recognition: Deeparslr's Signer-Independent Deep Learning Framework for Isolated Gestures

  • Abhishek Jain

Abstract

Hand gesture recognition has attracted considerable academic interest owing to its broad range of applications in robotics, gaming, virtual reality, sign language, and human-computer interaction. Sign language, an organised system of hand gestures, is the most efficient form of communication for people with hearing loss. However, three major obstacles stand in the way of an effective sign language recognition system: hand segmentation, representation of hand shape features, and gesture sequence recognition, particularly for dynamic isolated gestures. Traditional approaches to sign language recognition combine colour-based hand segmentation algorithms, manually engineered feature extraction for representing hand shapes, and Hidden Markov Models (HMMs) for sequence recognition. This study introduces a novel framework for signer-independent sign language recognition that integrates several deep learning architectures for hand semantic segmentation, hand shape feature representation, and deep recurrent sequence recognition. The system extracts hand regions from each frame of the input video using DeepLabv3+, a recently developed semantic segmentation method. The extracted hand regions are then cropped and resized to a fixed size to reduce variation in hand scale. Rather than relying on transfer learning from pre-trained deep convolutional neural networks, a single-layer Convolutional Self-Organizing Map (CSOM) is trained to extract hand shape features. Deep Bi-directional Long Short-Term Memory (BiLSTM) recurrent neural networks then recognise the extracted feature vectors in chronological order. The BiLSTM network consists of three BiLSTM layers, one fully connected layer, and two softmax layers. The proposed technique is evaluated on a challenging database of 23 isolated Arabic sign language phrases recorded from three different users.
Experimental results show that the proposed framework substantially outperforms recent methods in signer-independent testing.
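The preprocessing step described above, cropping the segmented hand region and resizing it to a fixed size to reduce hand-scale variation, can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a single-channel frame and a binary hand mask (such as one produced by a DeepLabv3+ segmenter), and the function name `crop_and_resize_hand` and the 64×64 output size are illustrative choices.

```python
import numpy as np

def crop_and_resize_hand(frame, mask, out_size=64):
    """Crop the hand region indicated by a binary segmentation mask
    and resize it to a fixed square size via nearest-neighbour
    sampling, reducing variation in hand scale across frames."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        # No hand detected in this frame: return an empty patch.
        return np.zeros((out_size, out_size), dtype=frame.dtype)
    # Tight bounding box around the segmented hand pixels.
    y0, y1 = ys.min(), ys.max() + 1
    x0, x1 = xs.min(), xs.max() + 1
    crop = frame[y0:y1, x0:x1]
    # Nearest-neighbour index maps from output pixels to crop pixels.
    rows = (np.arange(out_size) * crop.shape[0] / out_size).astype(int)
    cols = (np.arange(out_size) * crop.shape[1] / out_size).astype(int)
    return crop[np.ix_(rows, cols)]
```

The fixed-size patches produced this way would then feed the CSOM feature extractor, and the resulting per-frame feature vectors form the sequence consumed by the BiLSTM network.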

Published
2018-12-31
How to Cite
Jain, A. (2018). Revolutionizing American Sign Language Recognition: Deeparslr's Signer-Independent Deep Learning Framework for Isolated Gestures. International Journal of Control and Automation, 11(3), 69–76. https://doi.org/10.52783/ijca.v11i3.38193
Section
Articles