The Video National Imagery Interpretability Rating Scale (VNIIRS) defines levels of interpretability based on the types of tasks an analyst can perform with video of a given VNIIRS rating. DoD users of motion imagery rely on NGA to rate the interpretability of motion imagery clips and to understand the factors affecting the VNIIRS of operational imagery. Because VNIIRS is a subjective rating scale, developing, validating, and verifying an automated system for video quality assessment against the VNIIRS standard requires comparing the automated results with ground-truth ratings from human analysts. In this project, Intelligent Fusion Technologies, Inc., proposes a VNIIRS ground-truth experiment that crowd-sources assessments from human analysts with minimal training. The proposed experiment is assisted by a video tagging and interpretability rating (VTIR) toolkit designed to reduce the variation in VNIIRS assessments across human analysts. With the VTIR toolkit, analysts assign non-integer VNIIRS levels to the given test videos, facilitating the establishment of ground-truth VNIIRS levels. Because not all human analysts have the same visual skill level, the proposed experiment also considers each analyst's proficiency level (APL) and incorporates it into the determination of the ground-truth VNIIRS values of the test video segments.
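
One way the APL-weighted determination of ground truth could work is as a proficiency-weighted average of the crowd-sourced ratings. The sketch below is purely illustrative, assuming APL is expressed as a per-analyst weight; the source does not specify the actual aggregation method, and the function name and weighting scheme are hypothetical.

```python
# Hypothetical sketch: aggregate crowd-sourced non-integer VNIIRS ratings
# into a ground-truth value, weighting each analyst by an assumed analyst
# proficiency level (APL) expressed as a nonnegative weight. The weighting
# scheme is an assumption, not the method defined by the proposed experiment.

def ground_truth_vniirs(ratings, apls):
    """Proficiency-weighted mean of non-integer VNIIRS ratings.

    ratings: VNIIRS levels (e.g., 5.3) assigned by analysts to one clip
    apls:    proficiency weights, one per analyst
    """
    if len(ratings) != len(apls) or not ratings:
        raise ValueError("need exactly one APL weight per rating")
    total_weight = sum(apls)
    if total_weight <= 0:
        raise ValueError("APL weights must sum to a positive value")
    return sum(r * w for r, w in zip(ratings, apls)) / total_weight

# Three analysts rate the same clip; higher-APL analysts pull the
# ground-truth value toward their ratings.
print(round(ground_truth_vniirs([5.2, 5.6, 4.9], [0.9, 0.7, 0.4]), 2))  # 5.28
```

Under this scheme, ratings from low-proficiency analysts still contribute but are discounted, which is one plausible way to reduce the variance of crowd-sourced assessments.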