To provide complementary data and enable synergistic gains from the fusion of multiple sensing modalities, there have been recent efforts to incorporate multiple sensors on the same platform. This approach may require vastly different detector materials and apertures (optical lenses and antennas) because of the physics involved. Within a limited spectral range (e.g., visible through infrared electro-optical wavelengths), however, one can envision an integrated multi-modality sensor that does not violate the laws of physics and takes advantage of ongoing advances in microelectromechanical systems (MEMS) and nanotechnology. The miniature nature of these devices allows concepts that can be integrated at or near the detector focal plane, leading to compact size and greatly simplified alignment. On-device co-alignment addresses one of the limitations of multiple single-mode sensors: the inability to collect perfectly aligned imagery across modalities, which limits the achievable fusion gain. Beyond the gains possible from integrated multi-modality sensors, there is an opportunity for improved ISR performance from sensors designed to adapt optimally based on feedback from integrated real-time exploitation algorithms in a performance-driven sensing paradigm.
Keywords: Multi-Modal Sensors, MEMS, Fusion, Data Exploitation