SBIR-STTR Award

Multi-Modal Command Interaction
Award last edited on: 6/3/2008

Sponsored Program
SBIR
Awarding Agency
DOD : DARPA
Total Award Amount
$1,635,991
Award Phase
2
Solicitation Topic Code
SB012-007
Principal Investigator
Philip R Cohen

Company Information

Adapx Inc (AKA: Natural Interaction Systems LLC)

400 Union Avenue SE, Suite 200
Olympia, WA 98501
   (206) 428-0800
   info@adapx.com
   www.adapx.com
Location: Multiple
Congr. District: 10
County: Thurston

Phase I

Contract Number: DAAH01-02-C-R051
Start Date: 10/31/2001    Completed: 9/30/2003
Phase I year
2001
Phase I Amount
$262,540
This project aims to develop an architecture for transitioning multimodal (speech and sketch) technologies to the DoD for a variety of C3I tasks. For example, users will be able to create courses of action, collaborate with other users, and invoke simulators by speaking and sketching on tablet computers, PDAs, and wearable, wall-sized, and paper-based systems. By virtue of a multiagent architecture and interoperation frameworks (e.g., the CoABS Grid), these advanced interface technologies will be able to interoperate with DoD information systems. Phase I will involve analysis and extension of our multimodal architecture, particularly to support "intelligent paper," as well as the design of experiments to assess the strengths and weaknesses of multimodal technology in the field. Phase II would then involve further development of the multimodal architecture and the conduct of those experiments.

If this research and development effort is successful, warfighters will be able to interact with command and control systems using speech and sketch, in a variety of circumstances and with equipment in a variety of form factors, notably tablet computers, PDAs, and intelligent paper. The latter offers ultra-portability and the resolution and well-understood failure modes of paper, together with the benefits of digital systems. Users will save substantial time interacting with existing C3I systems, such as MCS, and will be able to move beyond workflows in which both paper maps and computer systems must be maintained in parallel. Employing "intelligent paper" alone, the user gains both sets of advantages simultaneously, thereby halving the workload. We thus anticipate overcoming warfighters' resistance to adopting digital systems by providing an interface that does not fail and that engenders confidence.
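The abstract does not specify how speech and sketch interpretations are combined, so the following is a minimal, hypothetical Python sketch of the kind of late, unification-based multimodal fusion such an architecture might perform: each recognizer emits time-stamped, scored hypotheses carrying partial command frames, and temporally overlapping speech and sketch hypotheses are merged when their attributes do not conflict. All names here (Hypothesis, overlaps, unify, fuse) are illustrative assumptions, not part of the awarded system.

from dataclasses import dataclass, field

@dataclass
class Hypothesis:
    """One recognizer output: a scored, time-stamped partial interpretation.
    (Hypothetical structure; the award abstract does not define this API.)"""
    modality: str            # "speech" or "sketch"
    start: float             # seconds, onset of the input
    end: float               # seconds, offset of the input
    score: float             # recognizer confidence in [0, 1]
    frame: dict = field(default_factory=dict)  # partial command attributes

def overlaps(a: Hypothesis, b: Hypothesis, slack: float = 1.0) -> bool:
    """Treat two inputs as co-referring if their time spans fall within `slack` seconds."""
    return a.start <= b.end + slack and b.start <= a.end + slack

def unify(a: dict, b: dict) -> dict | None:
    """Merge two partial command frames; fail if any attribute value conflicts."""
    merged = dict(a)
    for key, value in b.items():
        if key in merged and merged[key] != value:
            return None  # conflicting interpretations cannot be fused
        merged[key] = value
    return merged

def fuse(speech: list[Hypothesis], sketch: list[Hypothesis]) -> dict | None:
    """Return the highest-scoring consistent fusion of one speech and one sketch hypothesis."""
    best, best_score = None, 0.0
    for s in speech:
        for k in sketch:
            if not overlaps(s, k):
                continue
            merged = unify(s.frame, k.frame)
            if merged is not None and s.score * k.score > best_score:
                best, best_score = merged, s.score * k.score
    return best

# Example: the user says "medical company here" while drawing a point on the map.
speech_hyps = [Hypothesis("speech", 0.2, 1.4, 0.9,
                          {"action": "create_unit", "unit_type": "medical_company"})]
sketch_hyps = [Hypothesis("sketch", 0.8, 1.0, 0.8,
                          {"location": (47.04, -122.90)})]
print(fuse(speech_hyps, sketch_hyps))
# -> {'action': 'create_unit', 'unit_type': 'medical_company', 'location': (47.04, -122.9)}

In a fielded system the fused command frame would then be routed, for example over an agent grid, to the C3I application that executes it; here the final print simply shows the merged frame.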

Phase II

Contract Number: DAAH01-03-C-R199
Start Date: 7/2/2003    Completed: 4/4/2007
Phase II year
2003
Phase II Amount
$1,373,451
This project aims to develop an architecture for transitioning multimodal (speech and sketch) technologies to the DoD for a variety of C3I tasks. For example, users will be able to create courses of action, collaborate with other users, and invoke simulators by speaking and sketching on tablet computers, PDAs, and wearable, wall-sized, and paper-based systems. By virtue of a multiagent architecture and interoperation frameworks (e.g., the CoABS Grid), these advanced interface technologies will be able to interoperate with DoD information systems. Phase I involved analysis and extension of our multimodal architecture, particularly to support "intelligent paper," as well as the design of experiments to assess the strengths and weaknesses of multimodal technology in the field. Phase II involves further development of the multimodal architecture and the conduct of those experiments.

Keywords:
human-computer interface, multimodal interaction, voice, sketch, maps