SBIR-STTR Award

Multi-Source Imagery and Geopositional Exploitation (MSIGE)
Award last edited on: 11/9/2018

Sponsored Program
SBIR
Awarding Agency
DOD : Navy
Total Award Amount
$2,369,896
Award Phase
2
Solicitation Topic Code
N101-100
Principal Investigator
John Helewa

Company Information

Kab Laboratories Inc

1110 Rosecrans Street Suite 203
San Diego, CA 92106
   (619) 523-1763
   info@kablab.com
   www.kablab.com
Location: Multiple
Congr. District: 52
County: San Diego

Phase I

Contract Number: N66001-10-M-5103
Start Date: 8/26/2010    Completed: 2/26/2011
Phase I year
2010
Phase I Amount
$99,989
Full Motion Video (FMV), as a subset of Imagery Intelligence (IMINT), is an important accelerator in the Find, Fix, Track, Target, Engage, and Assess (F2T2EA) process. It can provide behavioral cues for a target that are difficult to discern from still images or non-visual sensor reporting alone. However, video in its natural form has several problems: it is cumbersome to review, bandwidth-intensive to distribute, and not smartly integrated with a Common Operational Picture (COP). In addition, when it comes to supporting Time Sensitive Targeting (TST), slow analytical processes result in missed opportunities. This Multi-Source Imagery and Geopositional Exploitation (MSIGE) SBIR response makes near real-time assessment, alerting, and reporting between Intelligence, Surveillance, and Reconnaissance (ISR) and IMINT easier to perform. The proposed solution will provide innovative techniques to rapidly correlate geopositional data with FMV.
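One way to picture the proposed correlation step is to pair timestamped geopositional reports with FMV frames by time offset and great-circle distance. The sketch below is purely illustrative; the class names, thresholds, and matching rule are assumptions, not the MSIGE design.

```python
import math
from dataclasses import dataclass

@dataclass
class GeoReport:
    t: float    # epoch seconds
    lat: float
    lon: float

@dataclass
class FmvFrame:
    t: float
    lat: float  # frame-center coordinates from the video metadata
    lon: float

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def correlate(reports, frames, max_dt=5.0, max_dist_m=500.0):
    """Pair each geopositional report with the closest-in-time frame
    whose center lies within max_dist_m; returns (report, frame) pairs."""
    pairs = []
    for rep in reports:
        best = None
        for f in frames:
            if abs(f.t - rep.t) > max_dt:
                continue
            if haversine_m(rep.lat, rep.lon, f.lat, f.lon) > max_dist_m:
                continue
            if best is None or abs(f.t - rep.t) < abs(best.t - rep.t):
                best = f
        if best is not None:
            pairs.append((rep, best))
    return pairs
```

A real system would work against streaming sensor metadata rather than in-memory lists, but the time-plus-distance gating shown here is the essence of the correlation problem the abstract describes.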

Benefit:
For DoD, the MSIGE solution will fulfill the SIGINT war fighter's need to impact the Find, Fix, Track, Target, Engage, and Assess (F2T2EA) cycle by combining high-resolution imagery, live UAS video streams, and SIGINT or multi-INT geospatial reports. Further, the Department of Homeland Security (DHS) already uses UAS platforms in its border patrol efforts to track people and vehicles along U.S. borders; the MSIGE solution can directly assist that effort. FMV is also used extensively by first responders during an incident. The MSIGE solution can provide a variety of tools to annotate FMV with meaningful overlays and distribute that information in near real time among multiple first responder organizations.

Keywords:
motion imagery, full motion video, targeting, correlate, intelligence, signal

Phase II

Contract Number: N66001-11-C-5223
Start Date: 9/7/2011    Completed: 9/4/2014
Phase II year
2011
Phase II Amount
$2,269,907
Under SBIR Topic N101-100 (Multi-Source Imagery and Geopositional Exploitation [MSIGE]), three Phase I performers developed capability concepts to address different aspects of the MSIGE problem set. In Phase II, we propose to develop a prototype DCGS-N capability for Multi-INT ISR and Targeting Services (MITS) by developing three subsystems and integrating them under separate Phase II contracts. This approach will increase value to the DCGS-N PoR by providing a low-risk, rapidly transitionable, end-to-end capability.

Three proposed subsystems:
- STRIKE LINE (Ticom Geomatics): Sensor Cueing; Data Publish and Subscribe; Wide Area Network (WAN) Distributor; MITS System Engineering and Integration Lead
- VISION (KAB Labs): Presentation Layer; Local Area Network (LAN) Distributor; Video Processing Framework; Video/Multi-INT Indexing/Search
- AFOS (Mosaic ATM): Geolocalization; FMV Metadata Decoder; Metadata Accuracy Enhancement; Feature Projection into Full Motion Video (FMV)

MITS will provide the following high-level capabilities for DCGS-N:
- Cue imagery sensors with geopositional data to collect FMV on targets of interest
- Combine FMV with other target data and provide an integrated display
- Improve geopositional accuracy of objects in analyst-selected FMV
- Index video repositories for rapid searching, near real-time, and post-mission analysis
- Distribute enhanced multi-INT data products
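The "index video repositories for rapid searching" capability can be pictured as a time-sorted index over decoded frame metadata that answers range queries. This is only an illustrative sketch under assumed names and a minimal schema, not the MITS or VISION design.

```python
import bisect

class VideoTimeIndex:
    """Illustrative time index over FMV frame metadata: stores
    (timestamp, clip_id, frame_no) records sorted by timestamp and
    answers 'what was collected between t0 and t1?' queries."""

    def __init__(self):
        self._keys = []     # sorted timestamps
        self._entries = []  # (clip_id, frame_no), parallel to _keys

    def add(self, timestamp, clip_id, frame_no):
        # Insert while keeping both parallel lists sorted by timestamp.
        i = bisect.bisect_left(self._keys, timestamp)
        self._keys.insert(i, timestamp)
        self._entries.insert(i, (clip_id, frame_no))

    def query(self, t0, t1):
        # Return all entries whose timestamp falls in [t0, t1].
        lo = bisect.bisect_left(self._keys, t0)
        hi = bisect.bisect_right(self._keys, t1)
        return self._entries[lo:hi]
```

A production index would shard this across a cluster and key it on space as well as time, but a sorted time axis with range queries is the core operation behind both near real-time and post-mission search.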

Benefit:
The research objective for VISION is to produce a Full Motion Video (FMV) technology that accelerates the Find, Fix, Track, Target, Engage, and Assess (F2T2EA) process beyond today's capabilities. Faster F2T2EA requires several enabling technologies: near real-time cross-cueing between sensors and intelligence data, better situational awareness, and better reporting software. This effort will focus on current areas of weakness, including the ability to display information within FMV and within the Common Operational Picture (COP) in relation to FMV, the ability to process and distribute video effectively, and cross-cueing between information sources and FMV.

Today, ISR sensors have ever-expanding capabilities and information content. SAR, EO, IR, and SIGINT capabilities have increased in quantity and quality, but fusion and correlation between these information sources has been hampered by structural separation between systems. A degree of functional separation between SIGINT and IMINT naturally occurs among ISR sources. Some of it is cross-domain, but some is also due to the differing expertise and training needed to accomplish ISR exploitation. Video and imagery are handled by one software suite while SIGINT/EW/IO is handled by another system stack. Caught in the middle of this divide are analysts trying to fulfill a commander's tasking to gain awareness and protect their own vessel. Decisions and awareness are often time-late, hampered by manual cross-cueing and tedious manual assessment. A comprehensive ISR picture is not being provided, and in the end, categorization and threat assessment are not as strong as they should be within a common operational picture. In addition, the fleet is currently experiencing an increase in non-conventional threats and commercial RF sources in today's littoral environments.
There is a real problem in rapidly assessing all of this information, because post-collection analysis is not automated. Fusion needs to be automated, assessments better conveyed, and identification and geolocation made more accurate, all with minimal operator intervention. Detection, acquisition, identification, feature extraction, tracking, and cross-cueing must occur in a more autonomic manner. This will free analysts to focus on assessment and courses of action in a more time-relevant manner.

While this SBIR is in progress, KAB Laboratories recognizes that technical paradigms will be changing. Our Phase I research identified optimal indexing schemes for distributed search using new cloud technologies such as Hadoop and MapReduce. We also proposed using the same technology that automated brokerage firms use to rapidly assess the market: real-time Complex Event Processing (CEP) for cloud stream processing. The ability to use the same VISION technology to reach forward-deployed forces is important as well. KAB has written VISION so it works well within browsers using OWF and as native applications on tablets and handhelds. We demonstrated this during Phase I and will continue to do so during Phase II. KAB plans to deliver mobile and browser-based versions of its VISION technology with each release, so that no user is disadvantaged by the device they choose.
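The Complex Event Processing idea mentioned above can be illustrated with a toy sliding-window rule that raises an alert when SIGINT and FMV detections of the same track arrive within a short interval of each other. The window size, event shape, and rule below are assumptions for illustration, not the VISION implementation.

```python
from collections import deque

class CrossCueRule:
    """Toy CEP rule: alert when a 'sigint' and an 'fmv' event for the
    same track_id arrive within `window` seconds of each other."""

    def __init__(self, window=10.0):
        self.window = window
        self._recent = deque()  # (t, kind, track_id), in arrival order

    def push(self, t, kind, track_id):
        # Expire events that have fallen out of the sliding window.
        while self._recent and t - self._recent[0][0] > self.window:
            self._recent.popleft()
        # Any retained event of the *other* kind on the same track
        # constitutes a cross-cue hit.
        alerts = [
            tid
            for (_, k, tid) in self._recent
            if tid == track_id and k != kind
        ]
        self._recent.append((t, kind, track_id))
        return alerts  # non-empty list => cross-cue alert for track_id
```

Real CEP engines compile declarative pattern queries over many event streams, but the mechanic is the same: incrementally match arriving events against windowed state instead of re-scanning stored data after the fact.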

Keywords:
multi-INT, full motion video, IMINT, targeting, technology, SIGINT