Researchers develop hearing aid algorithm that enables the hearing impaired to understand spoken words 90% of the time

March 7, 2017

Briefing

  • Speech and Noise Separating Algorithm – Researchers from Ohio State University applied machine learning to separate speech from background noise in hearing aids, enabling more accurate recognition of speech amid background noise along with sound amplification and volume adjustment (a brief illustrative sketch of the general technique follows this list)
  • Improved Hearing – Improved the ability to understand spoken words amid babble noise from 10% to 90% for hearing-impaired listeners, and from 42% to 78% for listeners with normal hearing
  • Outperforms Normal-Hearing Listeners – Hearing-impaired listeners using the program understood almost 20% more words than normal-hearing listeners who did not use it
  • Applications – Include better hearing aids, improved speech recognition programs, and other industrial and military applications
  • Global Hearing Aids Market – Forecasted to grow 6.3% annually between 2015 and 2020, from $6.2 billion to $8.4 billion, according to research firm Markets and Markets
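
The first bullet describes the approach only at a high level. As a rough illustration of the general technique (machine-learning-based speech/noise separation through time-frequency masking), the sketch below trains a tiny network to estimate a spectral mask on synthetic signals. It is not the Ohio State researchers' published algorithm; the signals, features, and network size are assumptions made purely for demonstration.

# Minimal, hypothetical sketch of learned speech/noise separation via
# time-frequency masking. NOT the published Ohio State algorithm: the
# synthetic "speech", the noise, the feature choice, and the network size
# are all illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn

def stft_mag(x, n_fft=256, hop=128):
    """Magnitude spectrogram via a simple framed FFT (frames x bins)."""
    frames = []
    for start in range(0, len(x) - n_fft + 1, hop):
        frame = x[start:start + n_fft] * np.hanning(n_fft)
        frames.append(np.abs(np.fft.rfft(frame)))
    return np.array(frames, dtype=np.float32)

# Synthetic "clean speech" (an amplitude-modulated tone) plus noise.
rng = np.random.default_rng(0)
t = np.arange(16000) / 8000.0
clean = np.sin(2 * np.pi * 220 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 3 * t))
noise = rng.normal(scale=0.5, size=clean.shape)
noisy = clean + noise

S, N, Y = stft_mag(clean), stft_mag(noise), stft_mag(noisy)
irm = (S / (S + N + 1e-8)).astype(np.float32)   # ideal ratio mask: training target

# Tiny mask-estimation network: noisy spectrum in, mask in [0, 1] out.
net = nn.Sequential(nn.Linear(Y.shape[1], 64), nn.ReLU(),
                    nn.Linear(64, Y.shape[1]), nn.Sigmoid())
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
x, y = torch.from_numpy(Y), torch.from_numpy(irm)

for step in range(500):                          # brief training loop
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(x), y)
    loss.backward()
    opt.step()

# Apply the learned mask to the noisy spectrum to suppress noise-dominated bins.
with torch.no_grad():
    enhanced = (net(x) * x).numpy()
print("noisy-vs-clean spectral error: ", float(np.mean((Y - S) ** 2)))
print("masked-vs-clean spectral error:", float(np.mean((enhanced - S) ** 2)))

In a real system, the mask would be estimated from large amounts of recorded speech mixed with babble noise, and the masked spectrum would be converted back to audio before amplification; the sketch stops at the spectral comparison to stay short.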

Accelerator

Sector

Healthcare/Health Sciences, Information Technology

Source

Original Publication Date

December 6, 2016
