SBIR-STTR Award

SIRCE: A Sensor Image Based Room-Centered Equalization System for Hearing Aids
Award last edited on: 1/6/2017

Sponsored Program
SBIR
Awarding Agency
NIH : NIDCD
Total Award Amount
$198,321
Award Phase
1
Solicitation Topic Code
-----

Principal Investigator
Richard S Goldhor

Company Information

Speech Technology & Applied Research Corp (AKA: STAR Analytical Services)

54 Middlesex Turnpike
Bedford, MA 01730
Location: Single
Congr. District: 06
County: Middlesex

Phase I

Contract Number: ----------
Start Date: ----    Completed: ----
Phase I year
2016
Phase I Amount
$198,321
Reverberant spaces create major problems for hearing-impaired listeners. Reverberation reduces the sound quality and intelligibility of speech for such listeners, especially if they are hearing aid (HA) users. Moreover, reverberation reduces the effectiveness of many otherwise useful signal processing methods, such as speech enhancement algorithms, because it introduces additional virtual sources and background noise that increase the complexity of the acoustic signals those algorithms must handle.

We propose a novel method called SIRCE that “equalizes” (that is, dereverberates) speech and other audio signals in complex real-world acoustic environments containing multiple unknown acoustic sources. SIRCE has been designed for compatibility with, and integration into, NIH’s Open Speech Platform initiative. SIRCE’s design comprises three critical innovative features: it is room-centric, sensor image based, and listener aware.

Room-centric means that important components of the system are embedded in the acoustic space itself rather than residing in the user’s ear (the hearing aid). Placing sensors (microphones) and processing components in rooms rather than ears offers many advantages: reduced cost, increased processing power, relaxed form-factor constraints, practical deployment of more than two microphones, and easy sharing of processing power and computational results among users. A room-centric design makes particular sense for an equalization system, because reverberation itself is room-specific.

Sensor image based means that SIRCE calculates the acoustic image of each active source in each sensor. Sensor image extraction (“SIX”) is our innovative contribution to the active field of blind source separation (“BSS”). Sensor image extraction determines what the response of each microphone would be to each source in isolation, even when multiple sources are active simultaneously. SIX computes multiple independent images of each acoustic source (one per microphone), whereas typical BSS algorithms generate only a single estimate for each source. This matters because the most effective dereverberation methods are multi-channel algorithms that require solo or source-separated inputs from multiple microphones.

Listener aware means that our system employs the signals from the listener’s HA-internal microphones, a listener-specific acuity profile, and the listener-specified source of interest (“target”) to determine whether that target is acoustically audible to the listener, whether other sources are acoustically audible, and the optimal processing strategy and best sensor image to present to the listener. (When a target is far from the listener and close to a room microphone, the HA-internal microphone response is often not the optimal choice!)

In this Phase I project we propose to validate the sensor images SIRCE computes, quantify its ability to equalize reverberant speech, and estimate the overall improvement in speech intelligibility SIRCE delivers. The SIRCE system will help hearing aid users understand speech better in complex reverberant spaces.
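
The abstract does not specify SIRCE’s equalization or source-separation algorithms, so the Python sketch below is purely illustrative, not a description of the proposed system. Under simplified assumptions (a single source, two room microphones with known impulse responses, no noise), it shows one standard multi-channel dereverberation idea: MINT-style least-squares inverse filtering applied to the per-microphone sensor images of the source. The helper names (conv_matrix, mint_inverse), filter lengths, and toy impulse responses are all invented for the example.

    import numpy as np
    from scipy.signal import lfilter

    def conv_matrix(h, n_cols):
        # Toeplitz convolution matrix: conv_matrix(h, n) @ g == np.convolve(h, g)
        H = np.zeros((len(h) + n_cols - 1, n_cols))
        for j in range(n_cols):
            H[j:j + len(h), j] = h
        return H

    def mint_inverse(h1, h2, filter_len):
        # Least-squares inverse filters g1, g2 with h1*g1 + h2*g2 ~= a unit impulse
        H = np.hstack([conv_matrix(h1, filter_len), conv_matrix(h2, filter_len)])
        d = np.zeros(H.shape[0])
        d[0] = 1.0                                # target: an anechoic impulse
        g, *_ = np.linalg.lstsq(H, d, rcond=None)
        return g[:filter_len], g[filter_len:]

    rng = np.random.default_rng(0)

    # Toy "room": two short, exponentially decaying impulse responses standing in
    # for the acoustic paths from one talker to two room microphones.
    L = 48
    h1 = rng.standard_normal(L) * np.exp(-np.arange(L) / 10.0)
    h2 = rng.standard_normal(L) * np.exp(-np.arange(L) / 10.0)

    # Dry-speech stand-in and its sensor images (what each microphone would record
    # from this source alone, i.e. what sensor image extraction aims to recover).
    s = rng.standard_normal(2000)
    x1 = lfilter(h1, [1.0], s)
    x2 = lfilter(h2, [1.0], s)

    # Design the inverse filters and apply them to the two sensor images;
    # the summed output approximates the dry source signal.
    g1, g2 = mint_inverse(h1, h2, filter_len=L - 1)
    s_hat = lfilter(g1, [1.0], x1) + lfilter(g2, [1.0], x2)

    print("relative reconstruction error:",
          np.linalg.norm(s_hat - s) / np.linalg.norm(s))

The point of the toy example is the dependency the abstract highlights: multi-channel inverse filtering operates on per-microphone images of a single source, which is exactly the kind of input sensor image extraction is meant to supply when several sources are active at once.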

Public Health Relevance Statement:
Project Narrative
Hearing aid users find reverberant spaces, such as many classrooms, burdensome because speech is hard to understand in rooms with strong echoes. We propose to improve hearing healthcare by developing a hearing-aid-compatible system that would determine the reverberation patterns of a particular room and deliver an enhanced rendition of speech sounds from which interfering echoes have been “scrubbed”. This product would use multiple microphones permanently placed in a classroom, office, or other reverberant space to separate individual sound sources, identify and remove echoes, and, upon request, deliver an enhanced speech signal, free of obtrusive reverberation, to the hearing aids of listeners in the area.

Project Terms:
abstracting; acoustic imaging; Acoustics; Adult; Algorithms; Architecture; Area; base; blind; Characteristics; Complex; cost; design; Ear structure; Effectiveness; Elements; Employee Strikes; Environment; Healthcare; Hearing; Hearing Aids; hearing impairment; Hearing problem; Image; improved; Individual; innovation; interest; Measures; Methods; Noise; novel; Pattern; Performance; Phase; Process; Reporting; response; sensor; signal processing; Signal Transduction; sound; Source; Specific qualifier value; Speech; Speech Intelligibility; Speech Sound; success; Support System; System; targeted imaging; Time; United States National Institutes of Health; virtual

Phase II

Contract Number: ----------
Start Date: ----    Completed: ----
Phase II year
----
Phase II Amount
----