Microsoft is adding new machine learning and artificial intelligence-based features to Microsoft Teams that improve the sound quality of meetings and calls: reducing echo, allowing users to speak and listen at the same time, and cutting reverberation for users in rooms with poor acoustics.
These new improvements expand on the company’s AI-enabled background noise reduction features in Teams, which it says have been valuable in helping organizations conduct remote and hybrid meetings.
To reduce the echo that occurs when meeting audio is picked up by a user's microphone, Microsoft used data from thousands of devices to build a dataset of approximately 30,000 hours of clean speech, which it used to train a model that can handle extreme audio conditions in real time.
Microsoft says it accounted for both noise suppression and echo cancellation by combining the two separate models using joint training.
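For context on what the neural model replaces, acoustic echo cancellation has classically been done with an adaptive filter that learns the echo path from the far-end (loudspeaker) signal and subtracts the predicted echo from the microphone signal. The sketch below is a textbook NLMS baseline, not Microsoft's joint model; all names and parameters are illustrative.

```python
# Classical acoustic echo cancellation with an NLMS adaptive filter.
# This is a conceptual baseline, not Teams' neural approach.
import math
import random

def nlms_echo_cancel(far, mic, taps=8, mu=0.5, eps=1e-8):
    """Return the echo-reduced near-end estimate."""
    w = [0.0] * taps          # adaptive estimate of the echo path
    buf = [0.0] * taps        # most recent far-end samples
    out = []
    for x, d in zip(far, mic):
        buf = [x] + buf[:-1]
        y = sum(wi * xi for wi, xi in zip(w, buf))   # predicted echo
        e = d - y                                    # residual = near-end estimate
        norm = sum(xi * xi for xi in buf) + eps
        w = [wi + mu * e * xi / norm for wi, xi in zip(w, buf)]
        out.append(e)
    return out

# Demo: the mic picks up only a delayed, attenuated copy of the far end.
random.seed(0)
far = [random.uniform(-1, 1) for _ in range(4000)]
echo_path = [0.0, 0.6, 0.3]                          # simulated room response
mic = [sum(h * far[n - k] for k, h in enumerate(echo_path) if n - k >= 0)
       for n in range(len(far))]
residual = nlms_echo_cancel(far, mic)
tail = residual[-500:]
rms = math.sqrt(sum(e * e for e in tail) / len(tail))
print(rms)  # residual echo power after the filter converges
```

In practice the filter converges within a few thousand samples and the residual echo power drops near zero; the appeal of the neural approach Microsoft describes is handling conditions (nonlinear distortion, double-talk) where this linear baseline breaks down.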
“Now our all-in-one model runs 10% faster than ‘noise-suppression-only’ without quality trade-offs,” Microsoft says in a blog post.
To address reverberation, Microsoft enabled the machine learning model to convert any captured audio signal to sound similar to “speaking into a close-range microphone.” This is especially useful in rooms with poor acoustics, such as a large room or stairwell.
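The underlying signal model is that reverberant audio is the dry ("close-range") speech convolved with the room's impulse response; dereverberation tries to undo that smearing. The toy sketch below only illustrates the forward model, not Teams' neural dereverberation, and the signals are made up.

```python
# Reverberation modeled as convolution of dry speech with a room
# impulse response (RIR). Dereverberation aims to invert this so the
# result sounds like speech into a close-range microphone.

def convolve(signal, ir):
    out = [0.0] * (len(signal) + len(ir) - 1)
    for n, s in enumerate(signal):
        for k, h in enumerate(ir):
            out[n + k] += s * h
    return out

dry = [1.0, 0.0, 0.0, 0.5, 0.0, 0.0]   # impulsive "dry" speech
rir = [1.0, 0.4, 0.16, 0.064]          # decaying reflections (large room)
wet = convolve(dry, rir)
print(wet)  # each dry sample is smeared across later samples
```

Rooms with poor acoustics, such as stairwells, have long decaying impulse responses, which is why speech captured there sounds distant and washed out.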
Echo cancellation and dereverberation are rolling out for Windows and Mac devices, and will be coming soon to mobile platforms, the company says.
In addition, the new intelligent features let users talk over one another more clearly by enabling “full duplex” sound, which supports interruptions and makes conversation feel more natural and less choppy.
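The difference from half-duplex behavior can be shown with a toy example: half-duplex processing gates the microphone while the far end is talking, so overlapping speech is clipped, whereas full duplex keeps both directions open. The flags and levels below are illustrative, not real audio processing.

```python
# Half-duplex vs. full-duplex, in miniature. Half-duplex mutes the
# near-end signal whenever the far end is active; full duplex passes
# both, preserving interruptions.

def half_duplex(near, far_active):
    return [0.0 if fa else n for n, fa in zip(near, far_active)]

near = [0.2, 0.3, 0.4, 0.3, 0.2]        # near-end speech level
far_active = [False, True, True, False, False]

print(half_duplex(near, far_active))    # interruption samples are lost
print(near)                             # full duplex preserves the overlap
```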