Sign Language Interpretation With Machine Learning In Movement Recognition For Speech-Disabled People

  • N. Ananthi, Megna N, Priyanga S, Santhosh Kumar N, K. Kumaran

Abstract

Speech and auditory impairments in young children are a major obstacle to their development into socially active adults. Sign language is the language used by speech- and hearing-impaired people to communicate with the public. The proposed device combines machine learning, IoT, and movement recognition using accelerometers to interpret the signs made by the device user. The accelerometers are attached on top of gloves worn on the user's hands and detect the bend and movement of the fingers. The bend combinations received are mapped to different signs corresponding to verbal meanings, and the meaning of each sign is given as voice output through a speaker. The main objective of the system is to enable a speech- and hearing-disabled person to hold a complete and uninterrupted conversation with a person who does not know sign language.
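The abstract does not specify the classifier or sensor interface used to map bend combinations to signs. The following is a minimal illustrative sketch, assuming a fixed-length vector of flex/accelerometer readings per gesture, a k-nearest-neighbours classifier, and the pyttsx3 text-to-speech library for the voice output; the sample readings and sign labels are hypothetical, not taken from the paper.

```python
# Minimal sketch: map glove sensor readings to sign labels and speak the result.
# Assumptions (not from the paper): 5 flex values + 3 accelerometer axes per sample,
# a k-NN classifier, and pyttsx3 for speech output. All values below are hypothetical.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
import pyttsx3

# Hypothetical training data: each row is [flex1..flex5, ax, ay, az].
X_train = np.array([
    [0.9, 0.9, 0.9, 0.9, 0.9, 0.0, 0.0, 1.0],   # all fingers bent
    [0.1, 0.1, 0.1, 0.1, 0.1, 0.0, 0.0, 1.0],   # open palm
    [0.1, 0.9, 0.9, 0.9, 0.1, 0.0, 1.0, 0.0],   # thumb and little finger extended
])
y_train = ["yes", "hello", "call me"]             # hypothetical sign meanings

model = KNeighborsClassifier(n_neighbors=1)
model.fit(X_train, y_train)

def interpret_and_speak(reading):
    """Classify one sensor reading and speak the corresponding word or phrase."""
    label = model.predict([reading])[0]
    engine = pyttsx3.init()
    engine.say(label)
    engine.runAndWait()
    return label

# Example: a new reading close to the "hello" (open palm) pattern.
print(interpret_and_speak([0.12, 0.08, 0.11, 0.10, 0.09, 0.0, 0.05, 0.98]))
```

In the actual device, the readings would arrive from the glove's microcontroller rather than being hard-coded, and the trained model would run on the embedded/IoT side before the audio is played through the speaker.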

Published
2020-05-15
How to Cite
N. Ananthi, Megna N, Priyanga S, Santhosh Kumar N, & K. Kumaran. (2020). Sign Language Interpretation With Machine Learning In Movement Recognition For Speech-Disabled People. International Journal of Advanced Science and Technology, 29(9s), 3034–3041. Retrieved from http://sersc.org/journals/index.php/IJAST/article/view/15709