This proposal addresses the significant need for supervisory control of sensor networks within a fully immersive synthetic environment using novel Human-Machine Interfaces (HMIs). A methodology and process are detailed for designing a Synthetic Environment Machine Interface System (SEMIS) with multi-modal inference processing based on gestures and speech. The Phase I Work Plan employs the Rational Unified Process and Agile software methodologies to maintain a focus on the supervisory operator's needs and system goals. Examples of innovative proposed functionality include: a collaborative unmanned sensor group, a sensor action ring overlay, HMI sensor toolkits, a gesture table, 3D holograms, a temporal timeline, health and status displays, and data product tools. A cognitive analysis plan guides the analysis of problem-domain functionality with respect to operator workload, varying levels of automation, and the portrayal of information for sensor monitoring and re-tasking. A software testbed is also created for analysis and testing of the proposed algorithms and HMI visualizations. At the end of Phase I, the results of the research are presented, coupled with a limited demonstration of the SEMIS with the AFRL's ICEbox in a supervisory operator mission scenario. Market segments are defined for commercialization, including PED systems, UxV ground control stations, medical imaging, and Cyber Warfare markets.