SBIR-STTR Award

TREAD
Award last edited on: 6/16/2023

Sponsored Program
SBIR
Awarding Agency
DOD : AF
Total Award Amount
$995,130
Award Phase
2
Solicitation Topic Code
AF212-D004
Principal Investigator
Walt Wood

Company Information

Galois Inc (AKA: Galois Connections Inc)

421 Southwest Sixth Avenue Suite 300
Portland, OR 97204
   (503) 626-6616
   contact@galois.com
   www.galois.com
Location: Single
Congr. District: 03
County: Multnomah

Phase I

Contract Number: N/A
Start Date: 1/18/2022    Completed: 6/29/2023
Phase I year
2022
Phase I Amount
$1
Direct to Phase II

Phase II

Contract Number: FA8750-22-C-1002
Start Date: 1/18/2022    Completed: 6/29/2023
Phase II year
2022
Phase II Amount
$995,129
Machine Learning (ML) has become the dominant approach to data-driven classification and decision making, owing to its superior scores on widely used benchmarks. Reinforcement Learning (RL) systems have been successful in part because they cope well with high-dimensional data, learning to exploit complex sub-signals within a sensory field more precisely than a typical hand-written policy. However, as with other data-driven techniques, such algorithms can fail catastrophically in conditions outside their nominal range, and debugging them, or otherwise producing robust explanations of their decisions, is quite challenging. Common methods for explaining ML-based decisions exist, such as LIME and Grad-CAM, but temporal extensions to these explanation methodologies have several shortcomings: 1) they provide no means of simulating possible futures to investigate what end state the closed-loop policy was driving toward; 2) they provide no way to contextualize actions as furthering specific long-term objectives; and 3) they provide no neighborhood of validity for each decision, preventing operators from reasoning about the confidence and robustness of each decision. We propose the Training and Analysis Environment for Explainable Autonomous Decisions (TREAD), a framework for training RL agents and explaining their decisions with techniques that work with, rather than against, the algorithms’ ability to process high-dimensional data and exploit maximum context for decision making. The framework allows high-definition inspection of the contributing factors behind each decision. It addresses shortcomings in prior work by 1) leveraging world models to allow direct simulation of the agent's perceived dynamics; 2) using reward redistribution to directly associate decisions or decision sequences with the specific long-term objectives they advance; and 3) leveraging adversarial explanations to give structure and robustness to decision boundaries, allowing counterfactual investigation of the complete, temporal, non-linear decision process. These algorithms have not previously been combined; TREAD demonstrates how they work together to resolve the shortcomings of state-of-the-art explanations. The proposed framework will significantly advance the state of the art in explainable RL and provide a prototype suitable for transition and commercialization. We predict that this framework will greatly improve the ability of human operators to understand, and subsequently trust, ML-based decision systems in both fully autonomous decision systems and human-machine teaming scenarios.
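
To make the first idea concrete, the sketch below illustrates, in plain Python with NumPy, how a learned one-step world model could be used to roll out the possible futures a trained policy would steer toward, so an operator can inspect the end state a decision is working towards. This is a hypothetical illustration under stated assumptions, not TREAD's actual implementation: rollout_futures, toy_world_model, and toy_policy are stand-in names invented here for demonstration.

import numpy as np

def rollout_futures(state, policy, world_model, horizon=20, n_samples=8, rng=None):
    """Simulate n_samples possible futures of length horizon by repeatedly
    querying the policy for an action and the learned world model for the
    next state. Returns an array of shape (n_samples, horizon + 1, state_dim)."""
    rng = np.random.default_rng() if rng is None else rng
    trajectories = []
    for _ in range(n_samples):
        s = np.asarray(state, dtype=float)
        traj = [s]
        for _ in range(horizon):
            a = policy(s, rng)           # action the closed-loop policy would take
            s = world_model(s, a, rng)   # model's prediction of the next state
            traj.append(s)
        trajectories.append(np.stack(traj))
    return np.stack(trajectories)

# Toy stand-ins so the sketch runs end to end: a noisy 2-D point-mass
# "world model" and a policy that pushes the state toward the origin.
def toy_world_model(s, a, rng):
    return s + 0.1 * a + 0.01 * rng.standard_normal(s.shape)

def toy_policy(s, rng):
    return -s / (np.linalg.norm(s) + 1e-8)

futures = rollout_futures(np.array([1.0, -2.0]), toy_policy, toy_world_model)
print("mean predicted end state:", futures[:, -1].mean(axis=0))

In a real system the toy functions would be replaced by the learned dynamics model and the trained RL policy; the point of the sketch is only that sampling several model rollouts gives the operator a distribution over predicted end states to reason about, rather than a single opaque action.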