SBIR-STTR Award

TRUST'M: Trust Resilience in User-System Team Modeling
Award last edited on: 9/18/2023

Sponsored Program
SBIR
Awarding Agency
DOD : AF
Total Award Amount
$977,238
Award Phase
2
Solicitation Topic Code
AF212-D005
Principal Investigator
Mary Freiman

Company Information

Aptima Inc

12 Gill Street Suite 1400
Woburn, MA 01801
   (781) 935-3966
   aptima_info@aptima.com
   www.aptima.com
Location: Multiple
Congr. District: 05
County: Middlesex

Phase I

Contract Number: 2022
Start Date: ----    Completed: 2/23/2022
Phase I year
2022
Phase I Amount
$1
Direct to Phase II

Phase II

Contract Number: N/A
Start Date: 8/23/2023    Completed: 2/23/2022
Phase II year
2022
Phase II Amount
$977,237

When human teammates have not properly calibrated trust toward the capabilities of their machine partner, they can exhibit all-or-nothing behavior. With too much trust, human teammates neglect to review their machine teammate’s work; with too little trust, they ignore machine suggestions and feedback. Machine teammates also need to measure the human teammate’s trust levels to help the human delineate task responsibilities, maintain awareness of the machine teammate’s capabilities, and maintain a competent sight picture of the operational space. Maintaining trust within human-machine teams is challenging when risk is high or the perceived competence of a teammate changes, leading to team misalignment over time. The ability to establish, maintain, and repair trust is essential to long-term teaming efficacy.

To promote strong intra-team collaboration it is necessary to (1) set and maintain human trust in the intent of machine partners; (2) establish and reinforce the machine’s trust in the human’s assessment of competence; and (3) drive interventions to repair and re-align trust after acute changes in the team’s perceived competence. In response to this problem, Aptima will deliver Trust Resilience in User-System Team Modeling (TRUST’M), a system that models a human’s trust in a machine teammate, assesses the machine’s actual competence, and adjusts the machine’s behavior to calibrate the human’s trust.

Aptima, with partners at Carnegie Mellon University (led by Dr. Cleotilde Gonzalez), has selected a task, an AI teammate, and an approach to model co-training with dynamic trust adjustment, and will develop a system for maintaining and repairing trust in human-machine teams. The task will be based on intelligence analyst use cases. The teammate will be ALFRED, an AI cognitive assistant developed by Aptima for Army analysts that recommends information for an analyst to review based on priority information requests (PIRs). The human teammate saves items that support their conclusions to reports associated with each PIR and rejects items and keywords that are irrelevant. The modeling approach for TRUST’M will include an instance-based learning theory (IBLT) model of human trust in ALFRED, based on models of trust developed by Dr. Gonzalez and colleagues. TRUST’M will track the user’s behavior and feedback, determine the discrepancy between the human’s apparent assessment and TRUST’M’s own assessment of ALFRED’s competence, and predict the experiences (e.g., behaviors in ALFRED) that will calibrate the human’s trust in ALFRED to the correct range. TRUST’M will dynamically adjust trust by changing ALFRED’s behavior to maintain optimal trust levels and repair trust when over- or under-trust occurs.
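To make the instance-based learning approach concrete, below is a minimal sketch in Python of how an IBL model might track a user's trust from accept/reject feedback: interaction outcomes are stored as timestamped instances, weighted by an ACT-R-style activation (power-law recency plus Gaussian noise), and blended into a trust estimate. The class, parameter values, and the accept/reject encoding are illustrative assumptions for this sketch, not the TRUST'M implementation.

```python
import math
import random


class IBLTrustModel:
    """Sketch of an instance-based learning (IBL) model of a user's
    trust in an AI assistant: stores outcome instances and blends
    them by activation-based retrieval weights."""

    def __init__(self, decay=0.5, noise=0.25, default_utility=0.5):
        self.decay = decay                # d: power-law memory decay rate
        self.noise = noise                # sigma: activation noise
        self.tau = noise * math.sqrt(2)   # Boltzmann temperature
        self.default_utility = default_utility
        self.instances = []               # list of (outcome, [timestamps])
        self.t = 0                        # discrete interaction clock

    def record(self, outcome):
        """Store one interaction outcome, e.g. 1.0 if the user accepted
        a recommendation, 0.0 if they rejected it."""
        self.t += 1
        for stored_outcome, times in self.instances:
            if stored_outcome == outcome:
                times.append(self.t)
                return
        self.instances.append((outcome, [self.t]))

    def _activation(self, times):
        """ACT-R-style activation: log of summed power-law recency
        terms across an instance's occurrences, plus Gaussian noise."""
        now = self.t + 1
        recency = sum((now - tj) ** (-self.decay) for tj in times)
        return math.log(recency) + random.gauss(0.0, self.noise)

    def trust_estimate(self):
        """Blended value across instances: the model's estimate of the
        user's current trust, in [0, 1]."""
        if not self.instances:
            return self.default_utility
        activations = [self._activation(times) for _, times in self.instances]
        weights = [math.exp(a / self.tau) for a in activations]
        total = sum(weights)
        return sum((w / total) * outcome
                   for w, (outcome, _) in zip(weights, self.instances))


if __name__ == "__main__":
    model = IBLTrustModel()
    for outcome in [1.0, 1.0, 1.0, 0.0, 0.0]:   # three accepts, two rejects
        model.record(outcome)
    # Recent rejections carry higher activation, so the estimate
    # typically drifts below the raw 0.6 acceptance rate.
    print(model.trust_estimate())
```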
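The abstract also describes adjusting ALFRED's behavior when the human's apparent trust diverges from TRUST'M's assessment of ALFRED's competence. The following is a hedged sketch of such a discrepancy-driven calibration rule; the tolerance threshold and intervention names are hypothetical, not TRUST'M's actual interventions.

```python
def calibration_action(estimated_trust, assessed_competence, tolerance=0.15):
    """Map the gap between the user's estimated trust and the machine's
    assessed competence to an illustrative intervention."""
    gap = estimated_trust - assessed_competence
    if gap > tolerance:
        # Over-trust: the user accepts recommendations too readily, so
        # surface uncertainty (e.g. show confidence, prompt for review).
        return "surface_uncertainty"
    if gap < -tolerance:
        # Under-trust: the user dismisses useful suggestions, so explain
        # and demonstrate competence (e.g. justify recommendations).
        return "explain_and_demonstrate"
    return "no_change"  # trust sits within the calibrated band


print(calibration_action(0.9, 0.6))   # -> "surface_uncertainty"
print(calibration_action(0.3, 0.7))   # -> "explain_and_demonstrate"
```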