Identifying Human Emotions from Speech Using Convolutional Neural Networks
Identifying a person's emotion from their speech is easy for human beings, but making a computer do it automatically is a difficult task. Researchers approach the problem in two different ways: in the first, emotion is identified from acoustic properties such as voice quality, pitch, and modulation; in the second, it is identified from the words used in the speech. Because of its many applications in human-computer interaction, research has been ongoing for a long time, yet efforts to improve accuracy continue. This paper focuses on speech emotion recognition from the raw speech signal using deep convolutional neural networks. The experiments correctly identified the emotions expressed in the speech signal, with better accuracy than existing convolutional neural network approaches.
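To illustrate the acoustic approach described above, the sketch below shows how a single convolutional layer can process a sequence of frame-level speech features (e.g. MFCCs) and map the result to emotion-class probabilities. This is a minimal numpy illustration with randomly initialized weights and synthetic features, not the trained architecture from the paper; the layer sizes, the 13-coefficient feature dimension, and the 4 emotion classes are assumptions for demonstration only.

```python
import numpy as np

def conv1d_relu(x, kernels, bias):
    """1-D convolution over time with ReLU.
    x: (T, F) array of T speech frames with F features each.
    kernels: (K, W, F) array of K filters spanning W frames.
    Returns a (T - W + 1, K) feature map."""
    K, W, F = kernels.shape
    T = x.shape[0]
    out = np.empty((T - W + 1, K))
    for k in range(K):
        for t in range(T - W + 1):
            out[t, k] = np.sum(x[t:t + W] * kernels[k]) + bias[k]
    return np.maximum(out, 0.0)  # ReLU non-linearity

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
features = rng.standard_normal((100, 13))    # synthetic stand-in: 100 frames x 13 MFCCs
kernels = rng.standard_normal((8, 5, 13)) * 0.1  # 8 filters, each spanning 5 frames
bias = np.zeros(8)

fmap = conv1d_relu(features, kernels, bias)  # (96, 8) time-by-filter feature map
pooled = fmap.max(axis=0)                    # global max pooling over time
W_out = rng.standard_normal((8, 4)) * 0.1    # hypothetical 4 emotion classes
probs = softmax(pooled @ W_out)              # class probabilities, sums to 1
```

In a real system the filters and output weights would be learned by backpropagation over labeled utterances, and several such convolution-pooling stages would typically be stacked before the classifier.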