Computer Can Recognize Emotions in Speech
News Nov 09, 2017 | Original story from the Higher School of Economics
Spectrograms of the phrase 'Kids are talking by the door' pronounced with different emotions. Credit: The Higher School of Economics
Experts from the Faculty of Informatics, Mathematics, and Computer Science at the Higher School of Economics have created an automatic system capable of identifying emotions in the sound of a voice. Their report was presented at Neuroinformatics-2017, a major international conference.
For a long time, computers have successfully converted speech into text. However, the emotional component, which is important for conveying meaning, has been neglected. For example, to the same question 'Is everything okay?', a person can answer 'Of course it is!' with a calm, provocative, or cheerful intonation, and the listener's reaction will differ accordingly.
Neural networks are systems of interconnected processing units capable of learning, analysis, and synthesis. They surpass traditional algorithms in that they make the interaction between a person and a computer more natural.
HSE researchers Anastasia Popova, Alexander Rassadin and Alexander Ponomarenko have trained a neural network to recognize eight different emotions: neutral, calm, happy, sad, angry, scared, disgusted, and surprised. The computer identified the emotion correctly in 70% of cases, the researchers say.
The researchers transformed the sound into images (spectrograms), which allowed them to apply methods developed for image recognition to audio. The research used a deep convolutional neural network with the VGG-16 architecture.
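The article does not publish the team's code, but the first step it describes, turning a waveform into a spectrogram "image", can be sketched with a short-time Fourier transform. The function below is an illustrative reconstruction, not the researchers' pipeline; the name `log_spectrogram` and the frame/hop parameters are assumptions. The resulting 2-D array is the kind of input that could be resized and fed to an image classifier such as VGG-16.

```python
import numpy as np

def log_spectrogram(signal, frame_len=512, hop=256):
    """Compute a log-magnitude spectrogram (time x frequency) of a 1-D signal.

    Frames the signal with a Hann window, takes an FFT per frame, and
    log-compresses the magnitudes -- yielding a 2-D "image" suitable for
    an image-recognition CNN. Parameter values are illustrative.
    """
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    magnitudes = np.abs(np.fft.rfft(frames, axis=1))
    return np.log1p(magnitudes)  # shape: (n_frames, frame_len // 2 + 1)

# Usage: a 1-second 440 Hz tone at a 16 kHz sample rate produces a
# spectrogram whose energy concentrates near frequency bin
# 440 / (16000 / 512) ~= 14.
sr = 16000
t = np.arange(sr) / sr
spec = log_spectrogram(np.sin(2 * np.pi * 440 * t))
```

In practice, each utterance's spectrogram would then be treated as a grayscale or color image and passed through the convolutional layers of a network like VGG-16 for classification into the eight emotion labels.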
The researchers note that the program reliably distinguishes neutral and calm tones, while happiness and surprise are recognized less well: happiness is often mistaken for fear or sadness, and surprise is interpreted as disgust.
This article has been republished from materials provided by the Higher School of Economics. Note: material may have been edited for length and content. For further information, please contact the cited source.