Japan Advanced Institute of Science and Technology
Ishikawa, Japan - Human beings have the ability to recognize emotions in others, but the same cannot be said for robots. Although perfectly capable of communicating with humans through speech, robots and virtual agents are only good at processing logical instructions, which greatly restricts human-robot interaction (HRI). Consequently, a great deal of research in HRI focuses on emotion recognition from speech. But first, how do we describe emotions?
Categorical emotions such as happiness, sadness, and anger are well understood by humans but can be hard for robots to register. Researchers have therefore focused on “dimensional emotions,” which capture the gradual transitions of emotional state in natural speech. “Continuous dimensional emotion can help a robot capture the time dynamics of a speaker’s emotional state and accordingly adjust its manner of interaction and content in real time,” explains Prof. Masashi Unoki from Japan Advanced Institute of Science and Technology (JAIST).
IMAGE: A parallel LSTM network takes in MMCG features at different resolutions and yields outputs that are concatenated and then sent to a merging LSTM layer and a dense layer.
Credit: Masashi Unoki
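For readers who want a concrete picture of the architecture in the image, the sketch below wires up parallel LSTM branches over multi-resolution MMCG features, concatenates their sequence outputs, and passes the result through a merging LSTM and a dense layer. It is a minimal illustration in TensorFlow/Keras, not the authors' implementation: the number of branches, layer sizes, frame count, feature dimensions, and the two-dimensional (valence/arousal) output are all assumptions, as is the premise that every resolution is framed onto a common time axis so the branch outputs can be concatenated.

import tensorflow as tf
from tensorflow.keras import layers, Model

TIME_STEPS = 500                 # assumed number of frames per utterance
FEAT_DIMS = [32, 64, 128, 256]   # assumed feature size for each MMCG resolution
BRANCH_UNITS = 64                # assumed hidden size of each parallel LSTM
MERGE_UNITS = 64                 # assumed hidden size of the merging LSTM

# One input and one LSTM branch per MMCG resolution.
inputs, branch_outputs = [], []
for dim in FEAT_DIMS:
    x_in = layers.Input(shape=(TIME_STEPS, dim))
    # return_sequences=True keeps the full sequence so the merging LSTM
    # can still model temporal dynamics after concatenation.
    x = layers.LSTM(BRANCH_UNITS, return_sequences=True)(x_in)
    inputs.append(x_in)
    branch_outputs.append(x)

# Concatenate the parallel branch outputs along the feature axis.
merged = layers.Concatenate(axis=-1)(branch_outputs)

# Merging LSTM followed by a dense layer; two outputs per frame stand in
# for continuous valence/arousal values (an assumption about the target).
merged = layers.LSTM(MERGE_UNITS, return_sequences=True)(merged)
outputs = layers.Dense(2)(merged)

model = Model(inputs=inputs, outputs=outputs)
model.summary()

Because every branch returns its full output sequence, the model produces a prediction for each frame, which is what lets a continuous dimensional approach track the time dynamics of a speaker's emotional state rather than a single per-utterance label.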