SBIR-STTR Award

Explainable and Transparent Machine Learning for Autonomous Decision-Making (EXTRA)
Award last edited on: 6/12/2023

Sponsored Program
SBIR
Awarding Agency
DOD : AF
Total Award Amount
$999,997
Award Phase
2
Solicitation Topic Code
AF212-D004
Principal Investigator
Genshe Chen

Company Information

Intelligent Fusion Technology Inc (AKA: IFT)

20410 Century Boulevard Suite 230
Germantown, MD 20874
   (301) 515-7261
   info@intfusiontech.com
   www.intfusiontech.com
Location: Single
Congr. District: 06
County: Montgomery

Phase I

Contract Number: 2022
Start Date: ----    Completed: 1/12/2022
Phase I year
2022
Phase I Amount
$1
Direct to Phase II

Phase II

Contract Number: N/A
Start Date: 7/10/2023    Completed: 1/12/2022
Phase II year
2022
Phase II Amount
$999,996

This effort aims to develop interpretable and reliable machine learning methods that address the challenge of deriving explanations of autonomous decision-making behavior. In particular, the proposed effort focuses on the challenges inherent in interactions between humans and intelligent machines, where transparency and trust are essential to successful human-machine teaming. Built on deep reinforcement learning, this effort will address the fact that an autonomous decision-making agent's current actions affect future states, as well as the challenge of reasoning over the long-term, collaborative human-machine objectives of the underlying mission. The resulting explainable system enables better understanding of learning outcomes and can also help develop more effective machine learning algorithms. The developed systems can be applied to military scenarios by providing human-interpretable behavior explanations in human-in-the-loop decision processes. They may also be deployed in commercial applications such as autonomous driving or energy management, where high-stakes decisions require transparency and traceability. The proposed explainable machine learning techniques can likewise be implemented in heavily regulated domains, such as healthcare or financial systems, where stringent interpretability and accountability are required. The objective of this effort is to conduct a feasibility study and validate prototype concepts for future development and integration.
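To make the abstract's central idea concrete, the sketch below is a hypothetical, minimal illustration (not the EXTRA system itself): a tabular Q-learning agent on a tiny corridor environment whose chosen action is reported together with a plain-language justification derived from its learned action values. The environment, state space, and `explain` helper are all assumptions introduced for illustration; they show one simple way an agent whose current actions affect future states can expose its decision rationale to a human teammate.

```python
import random

N_STATES = 5                 # states 0..4; reaching state 4 ends the episode
ACTIONS = ["left", "right"]
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2   # learning rate, discount, exploration rate

def step(state, action):
    """Toy corridor environment: +1 reward for reaching the rightmost state."""
    nxt = max(0, state - 1) if action == "left" else min(N_STATES - 1, state + 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

def train(episodes=500, seed=0):
    """Standard tabular Q-learning with epsilon-greedy exploration."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            if rng.random() < EPS:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            nxt, r, done = step(s, a)
            best_next = max(q[(nxt, act)] for act in ACTIONS)
            q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
            s = nxt
    return q

def explain(q, state):
    """Return the greedy action plus a human-readable justification
    comparing the expected returns of the available actions."""
    best = max(ACTIONS, key=lambda act: q[(state, act)])
    other = next(a for a in ACTIONS if a != best)
    reason = (f"In state {state}, chose '{best}' because its expected "
              f"return {q[(state, best)]:.2f} exceeds that of '{other}' "
              f"({q[(state, other)]:.2f}).")
    return best, reason

q = train()
action, reason = explain(q, 2)
print(reason)
```

In this toy setting the agent learns that moving right from state 2 has the higher discounted return, and the explanation surfaces that comparison directly. Real explainable-RL systems operate over far richer state and objective spaces, but the principle illustrated (exposing the value estimates behind a decision) is the same.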