Meta (formerly Facebook) has built three new artificial intelligence (AI) models designed to make sound more realistic in mixed and virtual reality experiences.
The three AI models (Visual-Acoustic Matching, Visually-Informed Dereverberation and VisualVoice) focus on human speech and sounds in video.
Meta is collaborating with researchers from UT Austin to develop this trio of open-source audio "understanding tasks," which will help developers build more immersive AR and VR experiences with more lifelike audio.