A Cognitive HFCC-IDBM Framework for Speech Emotion Recognition

  • J. Umamaheswari, Dr. A. Akila

Abstract

Recognizing a speaker's emotion from the speech signal is an important but challenging task in human-machine interaction. Traditional acoustic features perform poorly because they lack discriminative acoustic attributes. Numerous methodologies have been used to recognize a speaker's emotion from speech. This work designs an enhanced speech emotion recognition system based on an Improved Deep Boltzmann Machine (IDBM) algorithm. Emotions are generally classified into six types: fear, neutral, sadness, anger, happiness, and surprise. The types of speech emotions and their recognition accuracy in existing systems are first studied. A database of emotional speech samples is used in this system. An MMSE filter is applied for noise reduction, and Human Factor Cepstral Coefficients (HFCC) are used for feature extraction. An improved DBM is then used to classify the emotions precisely. The results are compared with those of traditional acoustic systems, and the proposed system is observed to provide improved accuracy.
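The HFCC front end mentioned in the abstract differs from standard MFCC mainly in that filter bandwidths are decoupled from filter spacing: centers are mel-spaced, but each filter's width follows the equivalent rectangular bandwidth (ERB) of human hearing. As a minimal sketch, assuming the standard mel scale and the Glasberg-Moore ERB formula (the filter count and frequency range below are illustrative, not taken from the paper):

```python
import math

def hz_to_mel(f):
    # Standard mel-scale mapping
    return 2595.0 * math.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    # Inverse of hz_to_mel
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def erb(f):
    # Glasberg-Moore equivalent rectangular bandwidth (Hz)
    # of the auditory filter centered at frequency f
    return 24.7 * (4.37 * f / 1000.0 + 1.0)

def hfcc_filterbank_params(n_filters=24, fmin=50.0, fmax=8000.0):
    """Return mel-spaced center frequencies and ERB-scaled
    bandwidths for an HFCC-style filterbank (illustrative values)."""
    m_lo, m_hi = hz_to_mel(fmin), hz_to_mel(fmax)
    centers = [mel_to_hz(m_lo + i * (m_hi - m_lo) / (n_filters + 1))
               for i in range(1, n_filters + 1)]
    bandwidths = [erb(fc) for fc in centers]
    return centers, bandwidths
```

In a full pipeline these filters would be applied to the MMSE-denoised power spectrum, followed by a log and a DCT to obtain the cepstral coefficients fed to the IDBM classifier.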

Published
2020-04-30
How to Cite
J. Umamaheswari, Dr. A. Akila. (2020). A Cognitive HFCC-IDBM Framework for Speech Emotion Recognition. International Journal of Advanced Science and Technology, 29(7), 9014 - 9023. Retrieved from http://sersc.org/journals/index.php/IJAST/article/view/25636
Section
Articles