Sign Language Interpretation with Machine Learning in Movement Recognition for Speech-Disabled People
Speech and auditory impairments in young children are a major obstacle to their growth into social adults. Sign language is the language used by speech- and auditory-disabled people to communicate with the public. The proposed device combines machine learning, IoT, and movement recognition using accelerometers to interpret the signs made by the device user. The accelerometers are attached to the human hand on top of gloves worn by the user, and they detect the bend and movement of the fingers. The bend combinations received are mapped to different signs with corresponding verbal meanings, and the meaning of each sign is delivered as voice output through a speaker. The main objective of the system is to enable a speech- and auditory-disabled person to hold a complete and uninterrupted conversation with a person who does not know sign language.
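The mapping from bend combinations to verbal meanings can be sketched as a simple lookup: each finger's sensor reading is quantized into a bent/straight state, and the resulting tuple is matched against a sign vocabulary. This is only an illustrative sketch, not the paper's implementation; the threshold, the binary quantization, and the small sign vocabulary below are all assumptions introduced for illustration.

```python
# Illustrative sketch (hypothetical values): mapping per-finger bend readings
# from glove-mounted accelerometers to sign meanings.

BEND_THRESHOLD = 0.5  # assumed normalized reading above which a finger counts as bent

# Hypothetical lookup table: tuple of per-finger bend states -> spoken meaning
SIGN_MAP = {
    (1, 1, 1, 1, 1): "hello",
    (0, 1, 1, 1, 1): "thank you",
    (1, 0, 0, 0, 0): "yes",
    (0, 0, 0, 0, 0): "no",
}

def quantize(readings):
    """Convert raw per-finger bend readings (0.0-1.0) to binary bent/straight states."""
    return tuple(1 if r >= BEND_THRESHOLD else 0 for r in readings)

def interpret(readings):
    """Map one frame of glove sensor readings to a verbal meaning, if recognized."""
    return SIGN_MAP.get(quantize(readings))

# Example frame: all five fingers bent past the threshold
print(interpret([0.9, 0.8, 0.7, 0.95, 0.6]))  # prints "hello"
```

In the described device, the returned string would then be passed to a text-to-speech stage driving the speaker; a learned classifier could replace the fixed lookup table to handle noisy or continuous gestures.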