SBIR-STTR Award

Scene Registration Augmented Reality as an Educational Tool to Identify Underlying Anatomy during Medical Simulation Training
Award last edited on: 3/8/2024

Sponsored Program
SBIR
Awarding Agency
DOD : DHA
Total Award Amount
$5,156,985
Award Phase
2
Solicitation Topic Code
DHP163-002
Principal Investigator
Win Liu

Company Information

Sharp Vision Software LLC (AKA: SVS)

11767 Katy Freeway Suite 215
Houston, TX 77079
Location: Single
Congr. District: 07
County: Harris

Phase I

Contract Number: W81XWH-17-C-0071
Start Date: 4/12/2017    Completed: 11/11/2017
Phase I year
2017
Phase I Amount
$149,919
A scene-registration augmented reality proof of concept for medical education is proposed, with the goal of making medical education, and anatomical learning in particular, intuitive, effective, accurate, and productive by leveraging the latest augmented reality technologies. To achieve this goal, the following capabilities will be investigated and developed:
1) recognize fiducial markers on a mannequin or standardized patient using the built-in camera on mobile devices such as the iPad, or on AR devices such as the HoloLens;
2) bring up the corresponding 3D AR anatomical model in the proper orientation relative to the mannequin or standardized patient;
3) allow the user to interact with the 3D anatomical model by zooming in and out, moving, rotating, and manipulating multiple layers;
4) allow the user to lock the 3D AR anatomical model so it can be viewed from all angles even when the markers are out of the camera's sight;
5) provide access to relevant educational materials such as articles, images, and videos;
6) allow the instructor to share scenes during instruction.
In addition, 3D scene-registration technologies will be closely monitored and tested to explore the possibility of bringing up anatomical models from the registered 3D scenes of a mannequin or standardized patient without fiducial markers.
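Items 2 and 4 above amount to composing the marker's camera-space pose with a fixed marker-to-model offset, and caching the last known pose when the marker leaves view. A minimal sketch of that logic (hypothetical names and a plain 4x4-matrix representation; the award's actual implementation on iOS/HoloLens is not described at this level):

```python
# Hypothetical sketch, NOT the award's actual code: anchor a 3D anatomical
# model to a fiducial marker's pose, and "lock" to the last known pose when
# the marker is out of the camera's sight.

def mat_mul(a, b):
    """Multiply two 4x4 row-major transform matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

class AnchoredModel:
    def __init__(self, model_offset):
        # model_offset: fixed 4x4 transform from the marker frame to the
        # anatomical model's frame (e.g. aligning the model to the mannequin).
        self.model_offset = model_offset
        self.locked_pose = None  # last known world pose of the model

    def update(self, marker_pose):
        """marker_pose: 4x4 world-space pose of the detected marker, or
        None when the marker is not visible. Returns the model pose to
        render: freshly composed while tracking, last known pose otherwise."""
        if marker_pose is not None:
            self.locked_pose = mat_mul(marker_pose, self.model_offset)
        return self.locked_pose  # None until the marker is first seen
```

With an identity offset, the model simply follows the marker; once tracking drops, `update(None)` keeps returning the cached pose, which is the "lock and look around" behavior described above.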

Phase II

Contract Number: W81XWH-18-C-0327
Start Date: 00/00/00    Completed: 00/00/00
Phase II year
2018
(last award dollars: 2019)
Phase II Amount
$5,007,066

Based on the Phase I findings on the latest scene-detection technologies, including Vuforia, Apple's ARKit on iOS 11, Microsoft Spatial Mapping on HoloLens, and Unity Multiplayer, a prototyping effort using a mixed approach to scene detection is proposed for Phase II to take advantage of the available technologies and create the best possible user experience for anatomical training. Dynamic 3D models of the neck and leg, with layers and detail, will be created and decimated so that they run smoothly and responsively on AR devices. A mixed method using the latest multicast and multiplayer technologies will be prototyped in Phase II to create an effective and productive teaching and learning experience in the classroom or over the Internet at remote sites. The usefulness of the proposed approach and of the models created will be demonstrated in the use case of cricothyroidotomy and fasciotomy procedures. Upcoming AR hardware devices and software packages will also be closely monitored and tested during Phase II to take advantage of the latest technologies, such as 3D object recognition without the need for markers, lighter and ruggedized devices for military use, and better screen-sharing technologies.
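Sharing a scene over multicast, as described above, implies a compact wire format that the instructor's device can broadcast and student devices can decode to mirror the model's state. The field layout below (model id, visible-layer bitmask, position, rotation quaternion) is purely an illustrative assumption, not the award's actual protocol:

```python
# Hypothetical sketch of a scene-update message for multicast scene sharing.
# The field layout is an assumption for illustration only.
import struct

# Little-endian: uint16 model id, uint16 visible-layer bitmask,
# 3 x float32 position, 4 x float32 rotation quaternion (x, y, z, w).
_FMT = "<HH3f4f"

def pack_scene_update(model_id, layer_mask, position, rotation):
    """Serialize one scene update into a fixed-size 32-byte payload."""
    return struct.pack(_FMT, model_id, layer_mask, *position, *rotation)

def unpack_scene_update(payload):
    """Decode a payload produced by pack_scene_update."""
    vals = struct.unpack(_FMT, payload)
    return {"model_id": vals[0],
            "layer_mask": vals[1],
            "position": vals[2:5],
            "rotation": vals[5:9]}
```

A fixed-size binary layout like this keeps each update well under a single UDP datagram, which matters when the same state is multicast to a classroom of devices many times per second.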