Unmanned aircraft systems (UAS) engaged in maritime operations are often equipped with electro-optical (EO) and infrared (IR) sensors for several mission profiles, including intelligence, surveillance, and reconnaissance (ISR), anti-submarine warfare (ASW), and countermine operations. These sensors provide imagery along with metadata describing the position and orientation of the UAS, the orientation of the sensor, the horizontal and vertical fields of view, and other parameters. This metadata enables single images to be roughly geo-registered; however, when observing a distant target, these sensors suffer from a limited field of view and a lack of geographical context. Geographical information systems (GIS) can provide contextual information, but full-motion video (FMV) systems and GIS are currently not well integrated. GIS tools can usually ingest satellite imagery of the region of interest and give operators some degree of awareness of the context of the FMV imagery. However, satellite imagery of the ocean can be hours, days, or weeks old; if the mission is time-critical, it may be outdated to the point of being worthless, or even misleading. Given the availability of FMV-equipped UAS, the Navy wants to use those assets to capture up-to-the-minute imagery and generate broad-area ortho-mosaic maps of the sea surface to aid real-time situational awareness (SA), even in GPS-denied areas.

Existing algorithms and software products can generate broad-area ortho-mosaics, but they suffer from several limitations. First, these algorithms rely on landmarks and other fixed features that are usually not present in a maritime environment; the sea surface is relatively featureless and constantly changing, rendering any landmark-based solution unusable. Second, existing software requires considerable post-processing: by the time ortho-mosaics are available, they are based on data that is several hours old and no longer actionable. Third, in GPS-denied environments, an aircraft's reported position is unreliable, so vision-based approaches using visual odometry are necessary to maintain accurate estimates of the aircraft's position and the sensor's field of view.

Once a system generates a usable ortho-mosaic and updates it in real time as new UAS imagery is received, it must also disseminate the data. Imagery, objects, threats, or other phenomena detected by the aerial survey must be communicable to other systems using standard data formats and protocols, such as Keyhole Markup Language (KML) or shapefiles. Charles River Analytics proposes to design and prototype Real-Time Ortho-Mosaicking for Awareness and Navigation (ROMAN), a platform-agnostic system for real-time, broad-area sea surface ortho-mosaicking and object detection with visual odometry, using EO and IR imagery from small UAS to improve maritime SA.
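To illustrate the kind of dissemination described above, the sketch below wraps a single detected surface contact in a minimal KML placemark using only the Python standard library. The function name, contact label, and coordinates are illustrative assumptions for this example, not part of the ROMAN design; a fielded system would emit richer schemas and metadata.

```python
# Hypothetical sketch: publish one detection as a KML placemark.
# KML is an XML format, so the standard-library ElementTree suffices.
import xml.etree.ElementTree as ET

KML_NS = "http://www.opengis.net/kml/2.2"


def detection_to_kml(name: str, lon: float, lat: float) -> str:
    """Wrap a single detection (decimal-degree lon/lat) in a KML document."""
    ET.register_namespace("", KML_NS)  # emit KML's default namespace
    kml = ET.Element(f"{{{KML_NS}}}kml")
    doc = ET.SubElement(kml, f"{{{KML_NS}}}Document")
    pm = ET.SubElement(doc, f"{{{KML_NS}}}Placemark")
    ET.SubElement(pm, f"{{{KML_NS}}}name").text = name
    point = ET.SubElement(pm, f"{{{KML_NS}}}Point")
    # KML orders coordinates as lon,lat[,alt]
    ET.SubElement(point, f"{{{KML_NS}}}coordinates").text = f"{lon},{lat},0"
    return ET.tostring(kml, encoding="unicode")


print(detection_to_kml("surface-contact-1", -70.95, 42.35))
```

The resulting string can be written to a `.kml` file and ingested directly by common GIS viewers, which is what makes KML (and shapefiles) attractive as an interchange format here.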
Benefit: We expect the full-scope ROMAN system to immediately and tangibly benefit target users across the DoD. In particular, ROMAN will provide real-time situational awareness and navigation capability in GPS-denied areas using a continuously updated ortho-mosaic composed of aerial reconnaissance imagery. Incorporating the innovations developed under ROMAN will improve situational awareness and decision-making for Naval operations in threat detection, anti-submarine warfare, counter-surveillance, and search and rescue. In the private sector, we see a commercially viable market in licensing or selling ROMAN as a real-time maritime or land-based mapping solution to state and local government agencies, including emergency management, police, fire, and environmental monitoring agencies, and to UAS and aerial sensor manufacturers.
Keywords: situational awareness (SA), full-motion video (FMV), ortho-mosaicking, object detection and tracking, unmanned aircraft systems (UAS), visual odometry (VO)