SBIR-STTR Award

AI/ML Trust Analysis (AITRUST)
Award last edited on: 9/26/2022

Sponsored Program
SBIR
Awarding Agency
DOD : AF
Total Award Amount
$999,980
Award Phase
2
Solicitation Topic Code
AF212-D002
Principal Investigator
Ulrich Lang

Company Information

ObjectSecurity LLC

1855 First Avenue Suite 103
San Diego, CA 92101
(650) 515-3391
info@objectsecurity.com
www.objectsecurity.com
Location: Single
Congr. District: 52
County: San Diego

Phase I

Contract Number: N/A
Start Date: 12/20/2021    Completed: 6/20/2023
Phase I year
2022
Phase I Amount
$1
Direct to Phase II

Phase II

Contract Number: FA875022C0075
Start Date: 12/20/2021    Completed: 6/20/2023
Phase II year
2022
Phase II Amount
$999,979
Trust and assurance of Artificial Intelligence/Machine Learning (AI/ML) based systems remains, to a large degree, a research topic. The state of the art in trusted AI/ML is far from being able to prove that a non-trivial system behaves exactly as expected, or from AI/ML systems that can explain in detail how they reach a decision and can therefore be fully trusted. The goal of the proposed work is to achieve a level of trust comparable to that of conventional algorithmic and programmatic systems, using methods, techniques, and tools that work in practically relevant systems and embed into modern development approaches. Realistically, we can expect AI/ML based systems to meet well-defined requirements and provide specific functionality within a given error rate. As a short-term solution, the proposed work aims to improve trust in these pattern-matching aspects of our system, which is already a very challenging undertaking: it covers not only AI/ML-specific aspects but also the system architecture as a whole, including its security and safety.

We propose to build on and extend our prior work by bringing together two threads, trust analysis and risk management in complex systems, and AI/ML in cybersecurity, into an integrated solution for trust and assurance analysis and management in complex AI/ML based systems. Our method and tool will be fully integrated into a model-based CI/CD and DevSecOps methodology and process that we already use internally for the development of our own systems.

The AITRUST solution has to be platform- and system-agnostic. This requires a highly flexible, adaptive risk management platform that can integrate with different application platforms and AI/ML systems and cover cybersecurity and AI/ML trust and risk aspects in an integrated, uniform way. Instead of building a monolithic trust analysis tool, we therefore propose to apply the DevSecOps/CI/CD concepts to the AITRUST solution itself: its functionality will be implemented as reusable, containerized microservices that reuse existing cybersecurity and AI/ML functionality as much as possible, both during development and at runtime, whether already deployed in legacy systems or part of application platforms, containers, and systems. This includes exploration, testing, and scanning functionality; analysis of explainability and interpretability; and an agile graphical user interface that supports developers at different skill levels. A specific focus of the proposed work is trusted training data and baselines for anomaly detection. Our integrated AITRUST solution and tool will greatly improve the development of trusted AI/ML systems.
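The abstract's requirement that a system "provide specific functionality within a given error rate" is the kind of check that can be automated as a CI/CD acceptance gate. The following Python sketch illustrates the idea only; it is not part of AITRUST, and the threshold, evaluation data, and interfaces are all assumptions made for the example.

    # Hypothetical CI/CD gate: fail the pipeline when the model's
    # measured error rate on a held-out evaluation set exceeds the
    # agreed requirement. Names and thresholds are illustrative.
    import sys
    import numpy as np

    MAX_ERROR_RATE = 0.05  # requirement threshold (assumed value)

    def error_rate(y_true: np.ndarray, y_pred: np.ndarray) -> float:
        """Fraction of misclassified samples."""
        return float(np.mean(y_true != y_pred))

    def main() -> int:
        # In a real pipeline these would come from earlier stages;
        # tiny fabricated arrays keep the sketch self-contained.
        y_true = np.array([0, 1, 1, 0, 1, 0, 0, 1])
        y_pred = np.array([0, 1, 1, 0, 1, 0, 1, 1])  # one mistake

        rate = error_rate(y_true, y_pred)
        print(f"measured error rate: {rate:.3f} (limit {MAX_ERROR_RATE})")

        # A non-zero exit code makes the CI job, and thus the deploy, fail.
        return 0 if rate <= MAX_ERROR_RATE else 1

    if __name__ == "__main__":
        sys.exit(main())

Wired into a DevSecOps pipeline, a stage like this turns a trust requirement into an enforced gate rather than a documented intention.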
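The stated focus on trusted training data and baselines for anomaly detection can be illustrated the same way. The sketch below shows two generic checks, a dataset integrity fingerprint and a per-feature statistical baseline for flagging drifted inputs; the abstract does not specify AITRUST's actual mechanisms, so every name and threshold here is an assumption.

    # Hypothetical sketch of two "trusted training data" checks:
    # (1) integrity: the dataset must match a previously recorded hash;
    # (2) baseline: per-feature mean/std recorded at training time,
    #     later used to flag anomalous (drifted) inputs.
    import hashlib
    import numpy as np

    def dataset_fingerprint(data: np.ndarray) -> str:
        """SHA-256 over the raw bytes of the training set."""
        return hashlib.sha256(data.tobytes()).hexdigest()

    def record_baseline(data: np.ndarray) -> dict:
        """Per-feature statistics captured when the model is trained."""
        return {
            "fingerprint": dataset_fingerprint(data),
            "mean": data.mean(axis=0).tolist(),
            "std": data.std(axis=0).tolist(),
        }

    def anomalous(sample: np.ndarray, baseline: dict, z_max: float = 3.0) -> bool:
        """Flag inputs more than z_max standard deviations from baseline."""
        mean = np.array(baseline["mean"])
        std = np.array(baseline["std"]) + 1e-12  # avoid division by zero
        return bool(np.any(np.abs(sample - mean) / std > z_max))

    train = np.random.default_rng(0).normal(size=(1000, 4))
    baseline = record_baseline(train)
    print("fingerprint:", baseline["fingerprint"][:16], "...")
    print("in-distribution sample flagged:", anomalous(train[0], baseline))
    print("outlier flagged:", anomalous(np.full(4, 10.0), baseline))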
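Finally, the explainability and interpretability analysis the abstract mentions covers a broad family of techniques. One common model-agnostic example is permutation importance: if randomly permuting a feature degrades accuracy, the model depends on that feature. The sketch below uses a toy stand-in model for self-containment; nothing here is claimed to be AITRUST's actual method.

    # Minimal permutation-importance sketch (a generic, model-agnostic
    # interpretability technique; named here as an example, not as the
    # project's approach). A feature matters if shuffling it hurts accuracy.
    import numpy as np

    rng = np.random.default_rng(42)

    # Toy data: the label depends on feature 0 only; feature 1 is noise.
    X = rng.normal(size=(500, 2))
    y = (X[:, 0] > 0).astype(int)

    def predict(X: np.ndarray) -> np.ndarray:
        """Stand-in 'model': thresholds feature 0 (assumed for the demo)."""
        return (X[:, 0] > 0).astype(int)

    def accuracy(y_true: np.ndarray, y_pred: np.ndarray) -> float:
        return float(np.mean(y_true == y_pred))

    base_acc = accuracy(y, predict(X))
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])  # break the feature/label link
        drop = base_acc - accuracy(y, predict(Xp))
        print(f"feature {j}: importance (accuracy drop) = {drop:.3f}")

Running this shows a large accuracy drop for feature 0 and roughly zero for feature 1, which is the kind of evidence an explainability analysis can surface to developers at different skill levels.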