SBIR-STTR Award

Generative Augmentation by Pose and Style Transfer to Amplify Radiant Imagery (GAP-STAR)
Award last edited on: 5/29/2023

Sponsored Program
SBIR
Awarding Agency
DOD : DARPA
Total Award Amount
$1,599,350
Award Phase
2
Solicitation Topic Code
NGA181-010
Principal Investigator
Christian Bruccoleri

Company Information

Lynntech Inc

2501 Earl Rudder Freeway South
College Station, TX 77845
   (979) 764-2200
   requests@lynntech.com
   www.lynntech.com
Location: Multiple
Congr. District: 10
County: Brazos

Phase I

Contract Number: HM047618C0062
Start Date: 9/17/2018    Completed: 6/15/2019
Phase I year
2018
Phase I Amount
$99,999
The objective of this project is to design a software architecture based on a densely-connected neural network that performs automatic target segmentation and recognition using training datasets of limited size (low-shot). Deep learning architectures have proved extremely effective at object detection and recognition, but that capability comes at the cost of requiring large labeled datasets. Such datasets are not usually available for new threats, which appear continuously in a changing geopolitical landscape. The NGA needs to be able to perform automatic target detection and recognition from aerial images when few prior examples of the target are available. The initial Phase I study will focus on panchromatic electro-optical sensors, but the proposed neural network architecture is applicable to other types of sensors as well; transition to other sensor types is expected in the later stages of the project. The outcome of this project will provide the NGA and the DoD with the much-needed capability to detect and react quickly to new threats observable from a variety of Intelligence, Surveillance and Reconnaissance payloads.
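The abstract names a densely-connected architecture but gives no implementation details. A minimal sketch of the dense-connectivity idea — each layer consumes the concatenation of all earlier feature maps — with purely illustrative toy layers (the layer functions and shapes here are assumptions, not the award's actual design):

```python
# Sketch of DenseNet-style connectivity: every layer receives the
# concatenation of the input and all previously produced features.
# The layers below are toy callables for illustration only.

def dense_block(x, layers):
    """x: initial feature vector (list of floats).
    layers: callables mapping a flat feature list to a new feature vector.
    Each layer sees every feature produced before it."""
    features = [x]
    for layer in layers:
        # Flatten (concatenate) all features produced so far.
        concatenated = [v for f in features for v in f]
        features.append(layer(concatenated))
    return features

# Toy layers: each emits the running sum as a one-element "feature map".
layers = [lambda f: [sum(f)] for _ in range(3)]
out = dense_block([1.0, 2.0], layers)
# out[0] is the input; each later entry was computed from all prior ones.
```

Dense connectivity reuses features aggressively, which is one reason such architectures are attractive when labeled data is scarce: later layers do not have to relearn what earlier layers already encode.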

Phase II

Contract Number: W912CG-21-C-0005
Start Date: 3/4/2021    Completed: 4/3/2022
Phase II year
2021
Phase II Amount
$1,499,351
State-of-the-art machine learning algorithms have demonstrated impressive capabilities in detection, classification, style transfer, and computer vision tasks in general on visible-wavelength imagery. However, the applicability of such methods to other imaging domains has received much less attention from the computer vision community. The lack of high-quality labeled datasets in non-visible domains continues to be a significant factor slowing both commercial and DoD research into applying bleeding-edge machine learning methods to those domains. The DoD has a growing need for robust and reliable autonomous systems, in which computer vision tasks will play an increasingly important role across all armed forces and data domains. From autonomous vehicles, such as UAVs, to battle-space-wide situational awareness for command-and-control tasks, there is a critical need to perceive, detect, understand, classify, and react to dynamic threats by exploiting a range of sensors and imaging modalities that span a large portion of the electromagnetic spectrum. In the proposed work, Lynntech will evaluate the robustness and limitations of transfer learning, style transfer, and related methods across a number of tasks of interest to DARPA. Tasks of particular interest include classifying and determining the pose of objects imaged by non-visible-wavelength sensors. This problem becomes particularly difficult for classes of objects for which only a few images exist (low-shot instances). Lynntech will evaluate strategies such as data augmentation, simulation, and style transfer to generate new labeled datasets in the imaging domains of interest, evaluate the performance of existing classifiers, and foster the development of new, more robust, and less brittle algorithms trainable with limited initial datasets.
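The abstract mentions data augmentation as one strategy for amplifying a handful of labeled images. A minimal, self-contained sketch of pose-style augmentation — rotations and flips of a labeled image chip — is shown below; the specific transforms and the `augment` helper are illustrative assumptions, not the award's actual pipeline:

```python
# Hedged sketch: amplify one labeled image chip into several pose variants.
# Images are 2D lists of pixel values; transforms are rotations and flips.

def rotate90(img):
    """Rotate a 2D list-of-lists 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def hflip(img):
    """Mirror each row left-to-right."""
    return [row[::-1] for row in img]

def augment(chip, label):
    """Yield (image, label) pairs: 4 rotations x {identity, horizontal flip},
    i.e. 8 labeled samples from a single input chip."""
    samples = []
    for base in (chip, hflip(chip)):
        img = base
        for _ in range(4):
            samples.append((img, label))
            img = rotate90(img)
    return samples

chip = [[1, 2],
        [3, 4]]
samples = augment(chip, "target")  # 8 augmented samples, one label
```

Geometric augmentation of this kind is cheap and label-preserving, which is why it is a natural first step before heavier machinery such as simulation or style transfer.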
During the first year of this Phase II effort, Lynntech will partner with other commercial entities to lay the groundwork for generating labeled datasets in non-visible domains using simulations and style transfer. In year two, Lynntech will focus on developing and strengthening classifiers and detectors in those domains. The objective of the project is to create usable tools that bridge the information gap between the visible domain and other imaging domains.