Talking Points: Role of Computer Voice in Future of Speech-Based Human-Computer Interaction
Published: June 9, 2021
In the modern day, our interactions with voice-based devices and services continue to increase. In this light, researchers at Tokyo Institute of Technology and RIKEN, Japan, have performed a meta-synthesis to understand how we perceive and interact with the voice (and the body) of various machines. Their findings have generated insights into human preferences and can be used by engineers and designers to develop future vocal technologies.
As humans, we primarily communicate vocally and aurally. We convey not just linguistic information but also the complexities of our emotional states and personalities. Aspects of our voice, such as tone, rhythm, and pitch, are vital to the way we are perceived. In other words, the way we say things matters.
With advances in technology and the introduction of social robots, conversational agents, and voice assistants into our lives, we are expanding our interactions to include computer agents, interfaces, and environments. Research on these technologies spans the fields of human-agent interaction (HAI), human-robot interaction (HRI), human-computer interaction (HCI), and human-machine communication (HMC), depending on the kind of technology under study. Many studies have analyzed the impact of computer voice on user perception and interaction. However, these studies are spread across different types of technologies and user groups, and they focus on different aspects of voice.