SBIR-STTR Award

An Optimized Spatial Audio System for Virtual Training Simulations
Award last edited on: 4/7/2006

Sponsored Program
STTR
Awarding Agency
DOD : Navy
Total Award Amount
$797,969
Award Phase
2
Solicitation Topic Code
N04-T014
Principal Investigator
Hesham Fouad

Company Information

VRSonic Inc

2533 Wilson Boulevard Suite 200
Arlington, VA 22201
   (703) 248-3200
   contact@vrsonic.com
   www.vrsonic.com

Research Institution

University of Central Florida

Phase I

Contract Number: N00014-04-M-0210
Start Date: 7/1/2004    Completed: 4/30/2005
Phase I year
2004
Phase I Amount
$99,872
The objective of this proposal is to develop the requisite architecture and theoretical framework for incorporating optimized spatial auditory cues in Virtual Reality (VR) military training systems used for training Close Quarters Battle for Military Operations in Urban Terrain (CQB for MOUT). Increasingly, combat operations are occurring in urban environments, where soldiers are fighting non-traditional warfare in unfamiliar territory. Military training systems will have to adapt to this reality by providing effective, deployable, and contextual training to deployed troops. The proposed effort will lay the groundwork for developing such a training system, with a primary focus on finding suitable cues for CQB room clearing tasks. Several steps are needed to fulfill this objective. First, auditory scene analysis will be carried out to determine important auditory cues for room clearing. Second, a framework and architecture will be developed for integrating optimized spatial auditory cues into training simulations. Third, research experiments will be designed and conducted to explore the effect of spatialization fidelity on training a room clearing task. The end result will be a dynamic, multi-modal, and deployable training system for individual combatants in CQB for MOUT environments.

Phase II

Contract Number: N00014-05-C-0339
Start Date: 9/21/2005    Completed: 3/20/2007
Phase II year
2005
Phase II Amount
$698,097
VR based training systems have the advantage of being portable, deployable, and reconfigurable, and can thus provide effective training in the field where it's most needed. Regrettably, these systems have not yet been shown to provide positive training transfer in real world applications. Analysis of such systems suggests that in order to be effective, they must fully integrate the wide range of sensory cues associated with performing complex tasks. The objective of this effort is to develop the requisite science and technology for incorporating effective auditory cues in VR based training systems. Through the application of rigorous experimental research, an auditory display science will be established, providing guidelines for auditory scene design. These guidelines will establish best practices in the use of real and metaphoric auditory cues, and will also establish fidelity requirements to maximize training effectiveness. Technology development will provide the capability to encode these best practices into auditory scene design tools for VR based training systems. The tools will enable personnel in the field to develop training scenarios in an interactive, guided process that ensures best practices in auditory scene design are adhered to, so that training effectiveness is maintained.

Benefit:
Properly designed spatial audio displays are an important component in creating effective training simulation systems. However, there are practical difficulties involved in deploying such systems in operational settings. One important problem is providing the capability for personnel to adapt auditory cues in order to contextualize content while still maintaining good auditory design. This is important because trainers require the capability to modify training scenarios by customizing the content. Unfortunately, good auditory display design requires specialized knowledge that is not readily available in the field. Another problem is that, unlike visual displays, the performance of a spatial audio display system is affected by individual differences among listeners due to variations in physiology and localization ability. Auditory displays must therefore be individualized for each listener. Current techniques for doing this require specialized equipment, anechoic conditions, and a significant amount of time, making them impractical for real-world use. The proposed optimized auditory display technology promises to address these problems and enable the widespread use of spatial audio displays in training simulation systems. The proposed approach will enable the encoding of empirically validated, auditory scene design best practices into an auditory scenario design tool. This will, in effect, create a direct path for knowledge gained from experimental study of training effectiveness to be incorporated directly into training simulation systems. The problem of listener individualization will also be addressed through the use of a best-fit display customization technique. Display performance will be optimized for a specific listener by modifying display parameters. The process will require no specialized equipment and will take 5-10 minutes to complete.
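To make the idea of listener-dependent display parameters concrete: one of the simplest spatial cues, the interaural time difference (ITD), depends directly on a listener-specific quantity (head size). The sketch below uses the classic Woodworth spherical-head approximation, a standard textbook model, not the awarded work's actual customization technique; the head-radius parameter stands in for the kind of per-listener display parameter the abstract describes tuning.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly room temperature


def woodworth_itd(azimuth_rad: float, head_radius_m: float = 0.0875) -> float:
    """Interaural time difference in seconds for a distant source.

    azimuth_rad: source direction, 0 = straight ahead,
                 +pi/2 = directly to the listener's right.
    head_radius_m: listener-specific head radius (0.0875 m is a
                   commonly used average); adjusting this per listener
                   is a minimal example of display-parameter
                   customization.
    """
    # Woodworth model: path-length difference around a rigid sphere,
    # divided by the speed of sound.
    return (head_radius_m / SPEED_OF_SOUND) * (
        azimuth_rad + math.sin(azimuth_rad)
    )


# A source directly ahead produces no ITD; a source at the side yields
# roughly 0.6-0.7 ms for an average adult head.
front = woodworth_itd(0.0)
side = woodworth_itd(math.pi / 2)
```

Rendering a source with this delay applied to the far ear (plus a level difference) is the crudest form of spatialization; full HRTF-based displays capture far richer, more individual cues, which is why per-listener fitting matters.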

Keywords:
Virtual Reality, auditory display, Training, CQB for MOUT