SBIR-STTR Award

Web-Based Infrastructure for Comparison and Validation of Image Computing Methods
Award last edited on: 4/9/2019

Sponsored Program
STTR
Awarding Agency
NIH : NIMH
Total Award Amount
$1,214,487
Award Phase
2
Solicitation Topic Code
286
Principal Investigator
Stephen R Aylward

Company Information

Kitware Inc

1712 Route 9 Suite 300
Clifton Park, NY 12065
(518) 371-3971
kitware@kitware.com
www.kitware.com

Research Institution

University of Utah

Phase I

Contract Number: 1R41EB011796-01A1
Start Date: 8/1/2010    Completed: 7/31/2011
Phase I year
2010
Phase I Amount
$237,264
Validation of computing algorithms has been a challenging topic over the last few years. Several international workshops in the medical imaging field have begun to involve the community through grand challenges. A grand challenge involves selecting a driving biological or scientific problem and asking experts to submit their best results and methods for solving it. Grand challenges often use blind verification in order to provide unbiased validation. Validation is critical to science because it imposes a rigorous protocol on scientists before they claim the validity of their algorithms. Validation also ensures that algorithms are clinically viable and will perform with the same robustness and accuracy in the clinic. There is a clear consensus among the scientific community that careful validation is needed. However, validation remains a challenge and can become a laborious task for several reasons. First, the overall design of the validation experiment must follow strict rules in order to be consistent with scientific reasoning. For instance, if a registration algorithm uses landmarks as the basis for registration, those same landmarks should not be used during the validation process. Second, the testing and training datasets must be clearly identified and kept separate. The testing datasets should be used only for testing, never to tune the algorithm. Third, the metrics used to measure an algorithm's error must be relevant to the scientific goal of the research. For instance, reporting only the mean segmentation error could have critical consequences in the clinic if the maximum error is very high.

Several tools have emerged to help scientists with validation tasks. The open-source Insight Toolkit and Visualization Toolkit provide off-the-shelf algorithms for medical imaging, making comparison with other methods easier. Grand challenges for segmentation and registration, such as those hosted at the Medical Image Computing and Computer Assisted Intervention (MICCAI) conference, invite researchers to test their algorithms against each other, providing a level of validation. However, no complete infrastructure currently exists for collecting and hosting validation tools.

The aim of this proposal is to develop an infrastructure that helps scientists perform validation tasks. While such a system is an important element on the path toward full clinical validation, it does not aim to perform full clinical validation itself; rather, it helps researchers choose the best tools for their clinical applications. The proposed system, named COVALIC, provides an online repository of testing and training datasets, an open-source framework for validation metrics, and an infrastructure for hosting grand challenges and publishing validation results. Through the online system, researchers can perform validation tasks from the convenience of a web browser. Furthermore, COVALIC is built upon open access and open source, engaging the community in the effort and encouraging researchers to share their data, algorithms, metrics, and results. We propose to develop and test the system with the help of six experts in the field, including clinical researchers, a surgeon, a computer scientist, and scientific researchers, thus creating a system designed by its end-user community.
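
To make the point about metric choice concrete, the following minimal sketch (not part of COVALIC; the masks and numbers are purely illustrative) shows how a segmentation can score a high Dice overlap and a small mean boundary error while hiding a large worst-case error. It uses only numpy and scipy.

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def dice(seg, ref):
        """Dice overlap between two binary masks."""
        inter = np.logical_and(seg, ref).sum()
        return 2.0 * inter / (seg.sum() + ref.sum())

    def boundary_errors(seg, ref):
        """One-sided distances (in voxels) from each voxel segmented as
        foreground to the nearest reference foreground voxel."""
        dist_to_ref = distance_transform_edt(ref == 0)  # zero inside the reference
        return dist_to_ref[seg.astype(bool)]

    # Illustrative 2D masks: the segmentation matches the reference except
    # for one distant false-positive blob.
    ref = np.zeros((64, 64), dtype=bool)
    ref[20:40, 20:40] = True
    seg = ref.copy()
    seg[2:5, 2:5] = True

    errs = boundary_errors(seg, ref)
    print(f"Dice:       {dice(seg, ref):.3f}")   # ~0.989, looks excellent
    print(f"mean error: {errs.mean():.2f} vox")  # well under one voxel
    print(f"max error:  {errs.max():.2f} vox")   # ~25 voxels: flags the stray blob

A validation framework of the kind proposed would report such worst-case statistics alongside the mean, so that a clinically dangerous outlier is not masked by an otherwise good average.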

Public Health Relevance:
Validation is a critical component of the development of computing methods and often presents major challenges. The main difficulty in comparing the performance of algorithms is defining a common reference for the training and testing datasets, as well as common validation metrics. Another challenge is accessing other researchers' results and algorithms. We propose to develop an intuitive web-based system for collecting, distributing, and processing validation algorithms. Additionally, we propose to develop an open-source framework for the validation of image processing algorithms.

Thesaurus Terms:
"algorithms; Automobile Driving; Back; Biological; Clinic; Clinical; Code; Coding System; Collection; Communities; Computer Assisted; Computers; Consensus; Data; Data Set; Data Sources; Dataset; Development; Dorsum; Drivings, Automobile; E-Mail; Educational Workshop; Electronic Mail; Elements; Email; Ensure; Evaluation; Goals; Image; Imagery; Infrastructure; International; Internet; Intervention; Intervention Strategies; Investigators; Laboratories; Lateral Ventricle Of Brain; Lateral Ventricles; Lateral Ventricle Structure; Manuals; Measures; Medical Imaging; Methods; Metric; Modeling; Names; On-Line Systems; Online Systems; Outcome; Performance; Procedures; Process; Protocol; Protocols Documentation; Publishing; Quality Control; Reporting; Research; Research Infrastructure; Research Personnel; Researchers; Retrieval; Rights; Running; Science; Scientist; Source Code; Speed; Speed (Motion); Supervision; Surgeon; System; System, Loinc Axis 4; Testing; Training; Validation; Visualization; Www; Workshop; Base; Blind; Clinical Applicability; Clinical Application; Computer Aided; Design; Designing; Driving; Experiment; Experimental Research; Experimental Study; Image Processing; Imaging; Insight; Interventional Strategy; Lateral Ventricle; Online Computer; Open Source; Prototype; Public Health Relevance; Repository; Research Study; Tool; Validation Studies; Web; Web Based; Web Interface; World Wide Web"

Phase II

Contract Number: 9R42MH106302-02
Start Date: 4/1/2010    Completed: 7/31/2016
Phase II year
2014
(last award dollars: 2015)
Phase II Amount
$977,223

We propose to develop the infrastructure for and deploy a commercial installation of an Algorithm Evaluation Service (AES). The service will help bridge the gap between algorithm researchers and commercial product developers. It will (1) assist commercial companies in determining which medical image analysis and informatics algorithms they should integrate into their products, and (2) provide algorithm researchers with better access to clinically relevant amounts of data and with a better understanding of clinical and commercial needs. Specifically, the AES will provide a service whereby commercial organizations can post clinical data analysis challenges (data, metrics, and awards tied to performance milestones related to their intended products) and researchers can easily incorporate those challenges into their algorithm development workflows.

Our successful Phase I grant culminated with our prototype system maturing and serving as the online infrastructure for the Multimodal Brain Tumor Segmentation (BRATS) Grand Challenge at MICCAI 2012 and the Prostate Segmentation Grand Challenge at ISBI 2013.

Herein, we propose to (Aim 1) extend our system to support a novel mechanism for algorithm submission based on virtual machine technology that addresses clinical integration (i.e., multi-step data processing, including human interaction), security, and computational resource scalability to support extensive testing. We will (Aim 2) extend existing software development tools (i.e., our popular CMake build system) to make the submission of algorithms to AES challenges an inherent and effortless part of algorithm development for researchers. We will (Aim 3) validate the resulting system using additional grand challenges, and we will deliver it to and receive feedback from our first commercial customer as part of the proposed work. Specifically, two academic groups (Ohio State University and The University of Utah) have agreed to conduct grand challenges using our systems. Additionally, a commercial group (Imaging Endpoints) has agreed to serve as our first commercial customer. They are an imaging core lab that provides algorithmic solutions to pharmaceutical companies and clinical research organizations. They will use our AES to post a client's data and metrics, offer a prize, and thereby attract algorithm developers to generate solutions to their client's problem.

It is generally accepted that a chasm exists between algorithm researchers, the capabilities of medical devices, and the needs of clinical practice. The proposed work will help bridge that chasm and will operate as a viable business model.
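
As a rough illustration of the Aim 1 submission mechanism, the sketch below shows what a researcher-side client for such a service might look like: it uploads a packaged algorithm image together with a manifest describing a multi-step pipeline. The endpoint URL, field names, and manifest schema are hypothetical assumptions for illustration only; the abstract does not specify the actual AES interfaces.

    import json
    import pathlib
    import requests  # third-party HTTP client

    # All names below are hypothetical illustrations, not a real AES API.
    AES_URL = "https://aes.example.com/api/v1"

    def submit_algorithm(challenge_id: str, vm_image: pathlib.Path, token: str) -> str:
        """Upload a packaged algorithm (e.g., a virtual machine image) to a
        challenge, with a manifest describing how the service should run it."""
        manifest = {
            "challenge": challenge_id,
            # Multi-step pipelines (including interactive steps) would be
            # declared here so the service can schedule and secure them.
            "entrypoint": "/opt/algo/run.sh",
            "steps": ["preprocess", "segment", "postprocess"],
        }
        with vm_image.open("rb") as image:
            resp = requests.post(
                f"{AES_URL}/challenges/{challenge_id}/submissions",
                headers={"Authorization": f"Bearer {token}"},
                files={"image": image},
                data={"manifest": json.dumps(manifest)},
            )
        resp.raise_for_status()
        return resp.json()["submission_id"]

Under Aim 2, a step like this would presumably be generated by the researcher's CMake build, so that producing a challenge submission becomes a side effect of the normal development workflow.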

Thesaurus Terms:
Address; Algorithms; Award; Base; Bioinformatics; Brain Neoplasms; Businesses; Client; Clinical; Clinical Data; Clinical Practice; Clinical Research; Clinically Relevant; Clinically Significant; Computerized Data Processing; Computing Resources; Data; Data Analyses; Development; Evaluation; Feedback; Grant; Healthcare; High Performance Computing; Human; Image; Image Analysis; Image Processing; Image Registration; Imaging Informatics; Innovation; Manufacturer Name; Medical; Medical Device; Medical Imaging; Meetings; Methods; Metric; Modeling; Novel; Ohio; Online Systems; Performance; Pharmacologic Substance; Phase; Prize; Process; Product Development; Prostate; Prototype; Public Health Relevance; Publications; Research; Research Clinical Testing; Research Infrastructure; Research Personnel; Research Study; Resources; Running; Security; Services; Software Development; Solutions; Success; System; Techniques; Technology; Testing; Time; Tool; Universities; Utah; Validation; Virtual; Work