SBIR-STTR Award

iGlasses: an Appliance for Improving Speech Understanding in Face-to-Face Communication and Classroom Situations
Award last edited on: 12/28/2023

Sponsored Program
SBIR
Awarding Agency
NSF
Total Award Amount
$661,787
Award Phase
2
Solicitation Topic Code
SS
Principal Investigator
Michael Cohen

Company Information

Animated Speech Corporation (AKA: ACS)

2261 Market Street Suite 293
San Francisco, CA 94114
   (800) 701-9025
   N/A
   www.animatedspeech.com
Location: Single
Congr. District: 12
County: San Francisco

Phase I

Contract Number: 0839802
Start Date: 1/1/2009    Completed: 6/30/2009
Phase I year
2008
Phase I Amount
$99,944
This Small Business Innovation Research (SBIR) project will advance the state of the art in human-machine interaction, speech, machine learning, and assistive technologies. The innovation in the proposed research is to develop and test the technology required to design an embellished eyeglass that will perform continuous real-time acoustic analysis of the interlocutor's speech and transform several continuous acoustic features of that speech into continuous visual features displayed on the eyeglasses. Pilot research has demonstrated that it is possible to recognize robust characteristics of isolated auditory words and to transform them into visible features in real time. The proposed research extends this work to sentences, along with tests of different feature detectors and automatic recognition models. The proposed activity will impact society by providing a research and theoretical foundation for a system that would be available to all individuals at very low cost. It does not require literate users, because no written information is presented as would be the case in a captioning system; it is age-independent, in that it might be used by toddlers, adolescents, and throughout the life span; it is functional for all languages, because all languages share the same phonetic features with highly similar corresponding acoustic characteristics; it would provide significant help for people with hearing aids and cochlear implants; and it would be beneficial for many individuals with language challenges and even for children learning to read.
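The abstract describes extracting continuous acoustic features from speech in real time rather than recognizing whole words or phonemes. As a rough illustration only (this is not the project's actual pipeline; the function name, frame size, and choice of features are assumptions), the sketch below computes two classic frame-level features, short-time energy and zero-crossing rate, of the kind such a display driver might track:

```python
# Illustrative sketch: frame-by-frame extraction of simple continuous
# acoustic features. Energy roughly tracks loudness; zero-crossing rate
# (ZCR) crudely separates voiced segments (low ZCR) from fricative or
# unvoiced segments (high ZCR).
import numpy as np

def frame_features(signal, sr=16000, frame_ms=20):
    """Split audio into non-overlapping frames; return (energy, zcr) per frame."""
    n = int(sr * frame_ms / 1000)           # samples per frame
    feats = []
    for i in range(0, len(signal) - n + 1, n):
        f = signal[i:i + n]
        energy = float(np.mean(f ** 2))      # short-time energy
        # Fraction of adjacent sample pairs whose sign flips:
        zcr = float(np.mean(np.abs(np.diff(np.sign(f)))) / 2)
        feats.append((energy, zcr))
    return feats

# Example: a 100 Hz tone behaves like a voiced segment (low ZCR),
# while white noise behaves like an unvoiced segment (high ZCR).
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 100 * t)
noise = np.random.default_rng(0).standard_normal(sr)
tone_zcr = np.mean([z for _, z in frame_features(tone, sr)])
noise_zcr = np.mean([z for _, z in frame_features(noise, sr)])
```

Because features like these vary continuously with the signal, they can drive a continuously varying visual display without any intermediate word- or phoneme-level recognition step.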

Phase II

Contract Number: 0956881
Start Date: 2/1/2010    Completed: 1/31/2012
Phase II year
2010
Phase II Amount
$561,843
This Small Business Innovation Research (SBIR) Phase II project will complete the development of technology to supplement ordinary face-to-face language interaction for the millions of individuals who are deaf or hard of hearing or face other speech/language challenges. The goal of the project is to enable such individuals to participate fully in the spoken language community. The need for language and speech intelligibility aids is pervasive in today's world. Millions of individuals live with language and speech challenges (including 36 million Americans with hearing deficits), and these individuals require additional support for communication and language learning. The Phase I research developed and tested the behavioral science and technology for iGlasses. Building on this research, the proposed research is to complete and bring to market an innovative intervention that can bring spoken language and culture into the lives of individuals who are currently marginalized because of hearing loss or other speech/language challenges. The proposed research will advance the state of the art in human-machine interaction, speech, machine learning, and assistive technologies. The broader/commercial impact of this project will benefit the deaf and hard-of-hearing populations as well as the scientific community by providing a research and theoretical foundation for a speech aid that would be naturally available to almost all individuals at very low cost.
It does not require literate users, because no written information is presented as would be the case in a captioning system; it is age-independent, in that it might be used by toddlers, adolescents, and throughout the lifespan; it is functional for all languages, because all languages share the same phonetic features with highly similar corresponding acoustic characteristics; it would provide significant help for people with hearing aids and cochlear implants; and it would be beneficial for many individuals with language challenges and even for children learning to read. Finally, regardless of advances (or the lack thereof) in speech recognition technology, it will always be more accurate and effective to pick off the fundamental acoustic features of speech than to recognize entire phonemes, which are more complex combinations of these basic properties.