This proposal is to develop a methodology and an Artificial Intelligence (AI)-based system for calibrating operator trust in the ethical behavior of lethal robots. The purpose is to enable the human-robot team to function at its highest level of effectiveness from the standpoint of ethical behavior as well as technical capability. We have done extensive work on the calibration of operator trust in the technical capabilities of automated systems. A paper coauthored by our former employee and present consultant Dr. Ewart de Visser, "Measurement of Trust in Human-Robot Collaboration" (de Visser et al., 2007), is a seminal work in the field and is frequently referenced by other researchers and developers. The present proposal builds on our previous work, as described in Section 3. In Phase I of the present project, we will thoroughly review the recent literature on the ethics of lethal robots while identifying an Air Force customer to assist in refining our technical approach for Phase II. In Phase II, we will develop a prototype system for operator ethical trust calibration and validate it in a proof-of-concept demonstration. In Phase III, we will adapt our system to selected Air Force and other service applications and begin our commercialization activities.