As the number of sensors on the battlefield grows, there is a pressing need to automate the processing of sensor data from multiple sources in order to reduce cognitive load and response time for manned systems and to enable greater autonomy in unmanned systems. It is anticipated that this unprecedented access to sensor data, in both volume and variety, will lead to reduced false alarm rates and an increased probability of detecting threats and targets. However, current signal and image processing solutions act primarily on the sensor data and its directly associated metadata while ignoring the context of the scene, leading to degraded performance, especially in environments for which the systems were not trained, which places a burden on sensor operators in the field. We propose to develop a machine learning- and semantic reasoning-based system for threat and target detection, classification, and sensor control, under a project termed FLOCK (Fusion of machine Learning with Ontological Classification from Knowledge). FLOCK combines the state-of-the-art image and signal processing capability developed by SwRI with the leading-edge logic-based semantic reasoning technology developed by VIStology. FLOCK will process multimodal streams of data, detect and classify targets, and interpret scenes. It will learn both new targets and new reasoning rules.
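The fusion of ML detections with rule-based scene context described above can be illustrated with a minimal sketch. All names here (the `Detection` record, the specific context rules, and the adjustment factors) are hypothetical assumptions for illustration, not the actual FLOCK design:

```python
# Hypothetical sketch: adjusting ML detector outputs with ontology-style
# rules about scene context. Rules and weights are illustrative only.
from dataclasses import dataclass


@dataclass
class Detection:
    label: str         # class proposed by the ML detector
    confidence: float  # detector confidence in [0, 1]


def apply_context_rules(det: Detection, scene_context: dict) -> Detection:
    """Adjust a detection's confidence using simple scene-context rules."""
    conf = det.confidence
    # Rule: ground vehicles are implausible in water regions, so suppress.
    if det.label == "vehicle" and scene_context.get("terrain") == "water":
        conf *= 0.2
    # Rule: detections inside a known friendly corridor carry less threat weight.
    if scene_context.get("friendly_corridor", False):
        conf *= 0.5
    return Detection(det.label, min(conf, 1.0))


detections = [Detection("vehicle", 0.9), Detection("person", 0.8)]
context = {"terrain": "water"}
fused = [apply_context_rules(d, context) for d in detections]
# The "vehicle" detection is suppressed by the water-terrain rule;
# the "person" detection passes through unchanged.
```

In a full system, the hand-written rules would be replaced by a logic-based reasoner operating over an ontology, but the flow (detector output in, context-adjusted assessment out) is the same.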