Sign Language Translator
The communication gap between people with hearing disabilities and hearing people is a long-standing challenge for our society and is yet to be completely solved. In this paper, we present the Sign Language Translator, an end-to-end system aimed at solving this problem. The system takes video input from the user and returns the English translation of each sign. We train the system on an American Sign Language (ASL) dataset with 29 classes and use Convolutional Neural Networks (CNNs) as the central architecture. The system is divided into three parts: the Video Stream Input System (VSIS), the Hand Segmentation System (HSS), and the Sign Language Classification System (SLCS). The VSIS captures video through a web camera and processes it one frame at a time; each frame is sent to the HSS, which detects the hand in the frame, and finally to the SLCS, which classifies the gesture represented by the detected hand.
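The three-stage flow described above (VSIS → HSS → SLCS) can be sketched as a simple per-frame pipeline. The helper functions, the fixed crop region, and the placeholder classifier below are illustrative assumptions, not the paper's actual implementation; a real system would use a trained segmentation step and a CNN for inference.

```python
# Minimal sketch of the per-frame pipeline: VSIS -> HSS -> SLCS.
# Helper names, the crop heuristic, and the dummy classifier are assumptions
# for illustration; the real system uses a CNN trained on the ASL dataset.
import numpy as np

# 26 letters plus three extra classes (assumed here to be space/delete/nothing)
# give the 29 classes mentioned in the abstract.
LABELS = [chr(ord("A") + i) for i in range(26)] + ["space", "delete", "nothing"]

def segment_hand(frame):
    """HSS stand-in: crop a fixed central region where the hand is expected."""
    h, w = frame.shape[:2]
    return frame[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4]

def classify_sign(hand_crop):
    """SLCS stand-in: a real system would run CNN inference here."""
    idx = int(hand_crop.mean()) % len(LABELS)  # placeholder, not a model
    return LABELS[idx]

def translate_frame(frame):
    """Process one frame from the video stream: segment the hand, then classify."""
    return classify_sign(segment_hand(frame))

if __name__ == "__main__":
    # Stand-in for a single webcam frame captured by the VSIS.
    frame = np.zeros((480, 640, 3), dtype=np.uint8)
    print(translate_frame(frame))
```

In the full system, the `if __name__` block would instead loop over frames from a webcam capture (e.g. via OpenCV's `cv2.VideoCapture`), feeding each frame through the same two stages.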