SBIR-STTR Award

LUCID (Land Use Classification Intelligence Discovery)
Award last edited on: 5/23/2023

Sponsored Program
SBIR
Awarding Agency
DOD : NGA
Total Award Amount
$997,966
Award Phase
2
Solicitation Topic Code
NGA213-001
Principal Investigator
Yuri Levchuk

Company Information

Intelligent Models Plus Inc (AKA: IMP Inc)

250 South Whiting Street Unit 814
Alexandria, VA 22304
Phone: (202) 421-7618
Website: www.intelligentmodelsplus.com
Location: Single
Congr. District: 08
County: Alexandria city

Phase I

Contract Number: N/A
Start Date: 4/13/2022    Completed: 4/18/2024
Phase I year
2022
Phase I Amount
$1
Direct to Phase II

Phase II

Contract Number: HM047622C0016
Start Date: 4/13/2022    Completed: 4/18/2024
Phase II year
2022
Phase II Amount
$997,965
LUCID (Land Use Classification Intelligence Discovery) is a novel artificial intelligence (AI) system that performs on-demand classification of fine-grained Land Use and Land Cover (LULC) categories, producing detailed LULC-annotated geospatial grid maps through seamless fusion of satellite imagery, remote sensing, and non-imagery geospatial data layers. LUCID fuses multiple data sources and modalities (e.g., satellite imagery; content and volume data from social media and cell phones; trajectory and transportation-flow data) to classify a large number of fine-grained LULC categories, while also detecting novel, non-traditional land uses and proposing semantic labels for them to extend the semantic coverage of the derived LULC classification.

LUCID integrates five fundamental pillars that make its performance goals feasible:
1. Automated extraction of modality-agnostic Semantic LULC Knowledge Graphs from all-source data (e.g., satellite imagery, UAV video, unstructured text, and other sources).
2. Automatic acquisition of semantic relations between fine-grained urban LULC classes and heterogeneous data sources/modalities, and delineation of their semantic hierarchy.
3. Relational and Mutual Information Calculus to automate semantic enrichment of imagery data with complementary intelligence from other, non-imagery sources/modalities.
4. Deep sense-making models with Hierarchical Self-Attention and hybrid neuro-symbolic machine reasoning over semantic graphs for fine-grained LULC classification.
5. Efficient training and low-effort human curation of cognitively friendly, interoperable AI/ML/DL pipelines that jointly automate flexible, on-demand fusion of heterogeneous sensors/modalities and facilitate fine-grained urban LULC classification.
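The sketch below is not the LUCID implementation; it is a minimal illustration of the general idea behind attention-based fusion of heterogeneous modalities for fine-grained LULC classification of a grid cell. The modality names (imagery, social_media, mobility), feature dimensions, class count, and the use of a single plain multi-head self-attention layer (rather than the program's Hierarchical Self-Attention and neuro-symbolic reasoning over semantic graphs) are all illustrative assumptions.

# Minimal sketch (assumed architecture, not LUCID itself): encode each
# modality separately, let self-attention weigh modalities against one
# another, then classify the grid cell into a fine-grained LULC category.
import torch
import torch.nn as nn


class AttentionFusionLULC(nn.Module):
    def __init__(self, modality_dims, embed_dim=128, num_heads=4, num_classes=40):
        super().__init__()
        # One small encoder per modality (e.g., imagery embedding,
        # social-media activity statistics, mobility/trajectory features).
        self.encoders = nn.ModuleDict({
            name: nn.Sequential(nn.Linear(dim, embed_dim), nn.ReLU())
            for name, dim in modality_dims.items()
        })
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.classifier = nn.Linear(embed_dim, num_classes)

    def forward(self, inputs):
        # inputs: dict of modality name -> (batch, feature_dim) tensor
        tokens = torch.stack(
            [self.encoders[name](x) for name, x in inputs.items()], dim=1
        )  # (batch, num_modalities, embed_dim)
        fused, _ = self.attn(tokens, tokens, tokens)  # cross-modality attention
        pooled = fused.mean(dim=1)                    # pool modality tokens
        return self.classifier(pooled)                # fine-grained LULC logits


if __name__ == "__main__":
    dims = {"imagery": 512, "social_media": 32, "mobility": 16}  # assumed sizes
    model = AttentionFusionLULC(dims)
    batch = {name: torch.randn(8, d) for name, d in dims.items()}
    logits = model(batch)  # (8, 40) scores over hypothetical LULC classes
    print(logits.shape)

In a sketch like this, the attention weights indicate how strongly each non-imagery layer contributes to a given cell's classification, which loosely parallels the abstract's goal of enriching imagery with complementary intelligence from other sources.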