Noisy environments make conversation tiring and are especially hard for people with hearing loss. To help, a team developed a prototype system called proactive hearing assistants, which uses two AI models to follow turn‑taking rhythm and isolate the voices of conversation partners. One model performs a "who spoke when" analysis and detects the low overlap typical of back‑and‑forth exchanges; the second mutes voices that do not follow that conversational pattern, along with other background noise. The system can identify a partner from two to four seconds of audio and runs on off‑the‑shelf headphones and microphones.
The researchers presented the work at the Conference on Empirical Methods in Natural Language Processing in Suzhou and released the underlying code as open source. Senior author Shyam Gollakota, a professor at the Paul G. Allen School, noted that unlike some previous approaches, which rely on implanted electrodes, this method needs only audio. Lead author Guilin Hu said the system is proactive, inferring listener intent automatically rather than requiring the user to select a speaker or a listening distance.
The team tested the prototype with 11 participants, who rated the filtered audio more than twice as favorably as the unprocessed baseline. Tests used English, Mandarin and Japanese dialogues, and the researchers caution that the turn‑taking rhythms of other languages may need tuning. The current hardware uses commercial over‑the‑ear parts and circuitry, but the team hopes to shrink the system into tiny chips for earbuds or hearing aids. Funding came from the Moore Inventor Fellows program, and concurrent work shown at MobiCom 2025 indicates that similar models can run on very small hearing‑aid devices.
Difficult words
- proactive — acting before a problem happens
- rhythm — regular pattern of sounds or speech
- isolate — separate one sound from other sounds
- overlap — when two sounds or speech happen together
- baseline — standard or comparison used to measure change
- infer — decide or conclude from available information
- prototype — first or early model used for testing
Discussion questions
- What advantages and disadvantages do hearing assistants that need only audio have compared with implanted devices?
- How might differences in language rhythm affect the system's performance and what could researchers do about it?
- What practical challenges should designers consider when making this system smaller for earbuds or hearing aids?