Emotion Detection using Facial Expression and Speech Recognition

Imtiyaz Ahmad et al.


Emotion interpretation has emerged as an essential field of research that can provide useful insight
for a number of applications. People communicate their feelings through their words and facial
gestures, consciously or unconsciously. Several types of cues can be used to interpret emotions,
such as vocal, textual, and visual information. Speech and facial expression have been valuable
tools for identifying feelings since ancient times, revealing numerous facets of a person, including
their state of mind. Determining the feelings underlying such utterances and facial expressions is
an enormous and difficult task. Scientists from multiple disciplines are therefore seeking effective
ways to identify human emotions from different sources, such as voice and facial expressions.
Techniques from computational intelligence and natural language processing, among others, have been
applied to improve the precision of recognition across varied speech and vocal strategies. Emotion
analysis can be effective in several specific contexts; one such area is human-computer interaction.
With emotion recognition, computers can make smarter choices and better assist users, and it can
also render human-robot interactions more realistic. In this study we explore current emotion
recognition methods, emotion models, emotion databases, their features and drawbacks, and some
potential future directions. We concentrate on evaluating work focused on recognizing emotions from
voice and facial expressions, and we survey the different technical approaches used in current
methodologies and systems. The essential accomplishments in the field are summarized, and potential
strategies for improved results are highlighted [1].
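The surveyed systems generally share one shape: features are extracted from a modality (voice or face), and a classifier maps those features to an emotion label. As a toy illustration of that idea only — not any specific method from the survey — here is a minimal nearest-centroid sketch; the feature names, values, and labels are invented for illustration:

```python
import math

# Hypothetical training data: 2-D feature vectors (e.g. mean pitch,
# mouth-corner displacement) paired with emotion labels. Values invented.
TRAIN = {
    "happy":   [(0.8, 0.9), (0.7, 0.8)],
    "sad":     [(0.2, 0.1), (0.3, 0.2)],
    "neutral": [(0.5, 0.5), (0.4, 0.6)],
}

def centroid(points):
    """Mean point of a list of equal-length feature vectors."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

CENTROIDS = {label: centroid(pts) for label, pts in TRAIN.items()}

def classify(features):
    """Return the emotion whose class centroid is nearest to the feature vector."""
    return min(CENTROIDS, key=lambda lbl: math.dist(features, CENTROIDS[lbl]))
```

For example, `classify((0.75, 0.85))` falls nearest the "happy" centroid. Real systems replace the toy features with learned representations (e.g. spectral features for speech, landmark or CNN features for faces) and the centroid rule with a trained classifier.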