A Comparison of Deep Learning Models for Sign Language Recognition
Communication is essential for functioning in society, yet a barrier separates the deaf and mute community from the hearing population. Sign language is the only means of communication among the people of this community, and developing an automated model for recognizing it is very challenging. Sign language recognition is one approach to tearing down this communication barrier. This paper presents a comparative study of diverse convolutional neural network (CNN) models for analyzing different sign language gestures. Several CNN models are devised and analyzed on sign language data to achieve better recognition performance. A quantitative experiment has been conducted on the benchmark ASL Alphabet dataset.
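To make the setting concrete, the sketch below shows a minimal CNN classifier for the ASL Alphabet task in PyTorch. This is an illustrative assumption, not one of the paper's actual architectures: the layer sizes, the 64x64 input resolution, and the class name `SimpleASLCNN` are all hypothetical, and only the class count (29: the 26 letters plus "space", "delete", and "nothing") comes from the standard ASL Alphabet dataset.

```python
import torch
import torch.nn as nn

class SimpleASLCNN(nn.Module):
    """Hypothetical minimal CNN for 29-class ASL alphabet recognition."""

    def __init__(self, num_classes: int = 29):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),   # 64x64 RGB input
            nn.ReLU(),
            nn.MaxPool2d(2),                              # -> 32x32
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                              # -> 16x16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),                  # one logit per sign
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Forward pass on a dummy batch of 4 images to check shapes.
model = SimpleASLCNN()
logits = model(torch.randn(4, 3, 64, 64))
print(logits.shape)  # torch.Size([4, 29])
```

A comparative study like the one described would vary the depth, filter counts, and regularization of such a backbone and compare validation accuracy on the same held-out split.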