MIT researchers developed a machine-learning technique that captures and models the underlying acoustics of a scene from a limited number of sound recordings. The system can then simulate how any sound, such as a song, would be heard as a listener moves to different locations in the scene.
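One way to realize this idea is to learn a function from listener and source positions to an acoustic impulse response, then render a sound at a new location by convolving it with the predicted response. The sketch below illustrates that general pattern only; the network shape, the impulse-response length `IR_LEN`, and the synthetic "measurements" are all hypothetical stand-ins, not the researchers' actual model.

```python
# Minimal sketch: learn f(listener_xy, source_xy) -> impulse response from a
# few measured positions, then render audio at an unseen location.
import numpy as np
import torch
import torch.nn as nn

IR_LEN = 256  # hypothetical impulse-response length in samples

class AcousticField(nn.Module):
    """Maps (listener x, y, source x, y) to an impulse response."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(4, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, IR_LEN),
        )

    def forward(self, pos):
        return self.net(pos)

model = AcousticField()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Toy stand-ins for the limited set of real recordings: random positions
# paired with synthetic impulse responses.
positions = torch.rand(32, 4)
measured_irs = torch.randn(32, IR_LEN)

for step in range(200):  # fit the field to the sparse measurements
    loss = nn.functional.mse_loss(model(positions), measured_irs)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Simulate a song at an unseen listener position via convolution.
song = np.random.randn(16000)  # placeholder waveform
new_pos = torch.tensor([[0.2, 0.7, 0.5, 0.5]])
ir = model(new_pos).detach().numpy().ravel()
rendered = np.convolve(song, ir)
```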
A new technique enables on-device training of machine-learning models on edge devices such as microcontrollers, which have very limited memory. Because the training data never has to leave the device, this could allow such devices to continually learn from new data while preserving privacy and enabling user customization.
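A common way to make training fit in tiny memory budgets is to freeze most of a pretrained model and update only a small subset of parameters, so gradients and optimizer state exist for only a few weights. The sketch below shows that general recipe with a frozen backbone and a trainable head; the layer sizes and data are hypothetical, and this is an illustration of the memory-saving idea rather than the specific MIT technique.

```python
# Minimal sketch: cut training memory by freezing the backbone and
# updating only a tiny head on streaming on-device data.
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 32))
head = nn.Linear(32, 2)  # the only part trained on-device

for p in backbone.parameters():
    p.requires_grad = False  # no gradients or optimizer state stored for these

opt = torch.optim.SGD(head.parameters(), lr=1e-2)  # plain SGD: no extra buffers

# Toy stream of new samples, e.g. user data that never leaves the device.
for _ in range(100):
    x = torch.randn(1, 64)
    y = torch.randint(0, 2, (1,))
    with torch.no_grad():          # frozen features need no autograd graph
        feats = backbone(x)
    loss = nn.functional.cross_entropy(head(feats), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```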
A new two-stage learning system could enable robots to learn abstract ideas about how and when to execute a skill, such as using a rolling pin, during a longer task like making pizza.
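A generic way to split "how" from "when" is to train a low-level skill policy from demonstrations in one stage, then train a separate model that decides when the skill applies in another. The sketch below is only that generic pattern with toy data and hypothetical dimensions, not the paper's system.

```python
# Minimal two-stage sketch: stage 1 learns *how* to execute a skill;
# stage 2 learns *when* to trigger it from the task state.
import torch
import torch.nn as nn

skill_policy = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 3))
trigger = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 1))

# Stage 1: behavior cloning of the skill (e.g., rolling-pin motions).
demo_states, demo_actions = torch.randn(256, 8), torch.randn(256, 3)
opt1 = torch.optim.Adam(skill_policy.parameters(), lr=1e-3)
for _ in range(200):
    loss = nn.functional.mse_loss(skill_policy(demo_states), demo_actions)
    opt1.zero_grad()
    loss.backward()
    opt1.step()

# Stage 2: from labeled task states, classify whether the skill applies now.
states = torch.randn(256, 8)
should_use = torch.randint(0, 2, (256, 1)).float()
opt2 = torch.optim.Adam(trigger.parameters(), lr=1e-3)
for _ in range(200):
    logits = trigger(states)
    loss = nn.functional.binary_cross_entropy_with_logits(logits, should_use)
    opt2.zero_grad()
    loss.backward()
    opt2.step()

# At run time, execute the skill only where the trigger says it applies.
s = torch.randn(1, 8)
if torch.sigmoid(trigger(s)) > 0.5:
    action = skill_policy(s)
```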