SBIR-STTR Award

AI-assisted Contingency Monitoring Enterprise
Award last edited on: 11/24/21

Sponsored Program
SBIR
Awarding Agency
DOD : AF
Total Award Amount
$50,000
Award Phase
1
Solicitation Topic Code
J201-CSO1
Principal Investigator
Nicolas Borensztein

Company Information

CrowdAI Inc

2300 Jane Lane
Mountain View, CA 94043
(479) 459-2362
info@crowdai.com
www.crowdai.com
Location: Single
Congr. District: 18
County: Santa Clara

Phase I

Contract Number: FA8649-20-P-0784
Start Date: 3/9/20    Completed: 6/8/20
Phase I year
2020
Phase I Amount
$50,000
The proliferation of remote imaging sensors has generated more data than analysts can exploit. The promise of "decision advantage" from satellite imagery has given way to data overload, with much of the imagery left unexploited and sent straight to archive. With so much imagery being collected and so little of it exploited, it is difficult to measure the lost opportunity. Artificial intelligence (AI), however, offers a remedy to make full use of this data. For this project, CrowdAI proposes ACME, the AI-assisted Contingency Monitoring Enterprise, which automates imagery exploitation using deep learning and computer vision.

At CrowdAI, we have devised and built novel approaches to deep learning, producing some of the highest performing computer vision (CV) algorithms on the market today. Our algorithms exploit cutting-edge techniques in image segmentation (Graphic 1 - center), using proprietary convolutional neural networks (CNN). CrowdAI segmentation models achieve superior results by incorporating, and iteratively improving upon, lower-quality training data in tight feedback loops. Thus, CrowdAI can exploit a significantly smaller quantity of annotated images for a given performance requirement. Said differently, not only is the annotation process completed using fewer resources, but it also has the potential to unlock deep learning applications for rare objects and features for which abundant training data is difficult or impossible to obtain.

To maximize flexibility, CrowdAI's general models have been trained on data from multiple sensor platforms (commercial satellite, government airborne, government FMV, etc.) collected at different geometries over more than 125 countries spanning a variety of geographies, biomes, and seasons. As a result, these "universal" models can be used in new locations with little-to-no retraining. Nevertheless, we continuously fine-tune or re-train models to meet customer requirements.

Computer vision offers the means to exploit vast quantities of imagery data with speed and accuracy, twenty-four hours a day. In addition to reducing analytic burden during daily operations, CV models can run in the background against contingency targets, ensuring that facility baselines and their orders of battle (OB) are maintained. With little-to-no human intervention, CV models can detect, classify, and maintain counts of any object for which there is sufficient training data, including military vehicles, weapons platforms, vessels, facilities, and more. Model output can be converted from our "pixel perfect" polygons for each object to centroids or text to automatically populate NGA or DOD databases, such as Cedallion or MARS (MIDB). CV increases the speed and scale of GEOINT analysis. For the Analysis Directorate, CV is the difference between near-real-time automation and untold hours spent searching through petabytes of imagery to maintain OB counts, find deployed equipment, or monitor targets of national security interest to the US.
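To illustrate the polygon-to-centroid conversion step described above, the following Python sketch shows how per-object segmentation polygons might be reduced to centroid points for ingest into a geospatial database. The class labels, polygon coordinates, and the use of the shapely library are assumptions for illustration only; they are not drawn from CrowdAI's actual pipeline or the award documentation.

```python
# Illustrative sketch: reduce "pixel perfect" segmentation polygons to centroids.
# All detections below are hypothetical examples, not real model output.
from shapely.geometry import Polygon

# Hypothetical detections: (class label, polygon vertices as lon/lat pairs)
detections = [
    ("vessel", [(-122.080, 37.420), (-122.070, 37.420),
                (-122.070, 37.430), (-122.080, 37.430)]),
    ("vehicle", [(-122.100, 37.400), (-122.090, 37.400), (-122.090, 37.410)]),
]

records = []
for label, vertices in detections:
    poly = Polygon(vertices)
    centroid = poly.centroid  # geometric center of the object footprint
    records.append({
        "class": label,
        "lon": round(centroid.x, 6),
        "lat": round(centroid.y, 6),
        "area_deg2": poly.area,  # area in squared degrees; reproject for meters
    })

# Each record is now a point-style row that a downstream database could ingest.
for rec in records:
    print(rec)
```

The design choice here is simply that a centroid plus class label is the minimal point representation a facility or OB database needs, while the full polygon can be retained separately for footprint-level analysis.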

Phase II

Contract Number: ----------
Start Date: 00/00/00    Completed: 00/00/00
Phase II year
----
Phase II Amount
----