Advances in location-acquisition and mobile-computing techniques have generated massive spatiotemporal trajectory data representing the mobility of a wide variety of moving objects, such as people, vehicles, and animals. Moreover, recent research has identified learning to automatically explain and anticipate observed trajectories as one of the likely keys to building next-generation artificial intelligence. Producing a consolidated taxonomy of human-interpretable labels (thumbnails) and learning to automatically label trajectory data and convey the semantic meaning of observed movement patterns would greatly assist human users in visualizing trajectories and performing trajectory-mining tasks. To support these objectives, we propose to develop TULPA (Trajectory-patterns elucidated via Unsupervised-and-semi-supervised Labeling by Prerequisites-free Autonomy), a machine learning system for data-efficient semi-supervised training of deep learning algorithms that automatically perform pattern-based classification and human-interpretable labeling of complex trajectories autonomously extracted from heterogeneous ISR feeds.
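The semi-supervised labeling setting described above can be illustrated with a minimal sketch. This is not the TULPA system itself, but a toy example under stated assumptions: two hypothetical movement patterns ("transit" vs. "loiter") are simulated, each trajectory is reduced to simple summary features, and scikit-learn's `SelfTrainingClassifier` propagates a small number of human-provided labels to the unlabeled majority. All feature choices and pattern names here are illustrative assumptions, not part of the original proposal.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

rng = np.random.default_rng(0)

def featurize(traj):
    # Summary features for a trajectory of (x, y) points:
    # mean step length, net displacement, and tortuosity (path / net).
    steps = np.diff(traj, axis=0)
    step_len = np.linalg.norm(steps, axis=1)
    net = np.linalg.norm(traj[-1] - traj[0])
    return [step_len.mean(), net, step_len.sum() / max(net, 1e-9)]

def make_traj(kind, n=50):
    # kind 0: "transit" -- roughly straight motion in a random direction.
    # kind 1: "loiter"  -- a small random walk around the start point.
    if kind == 0:
        direction = rng.normal(size=2)
        direction /= np.linalg.norm(direction)
        steps = direction + 0.1 * rng.normal(size=(n, 2))
    else:
        steps = 0.3 * rng.normal(size=(n, 2))
    return np.cumsum(steps, axis=0)

# Simulate 200 trajectories with alternating (hidden) pattern labels.
X = np.array([featurize(make_traj(k % 2)) for k in range(200)])
y_true = np.array([k % 2 for k in range(200)])

# Keep only ~10% of the labels; -1 marks an unlabeled trajectory,
# which is scikit-learn's convention for semi-supervised fitting.
y = y_true.copy()
unlabeled = rng.random(len(y)) > 0.1
y[unlabeled] = -1

clf = SelfTrainingClassifier(LogisticRegression())
clf.fit(X, y)
acc = (clf.predict(X) == y_true).mean()
print(f"labeled fraction: {(~unlabeled).mean():.2f}, accuracy: {acc:.2f}")
```

The design point this sketch makes concrete is data efficiency: self-training lets a handful of labeled exemplars per pattern class bootstrap labels for the bulk of the data, which is the role the proposal assigns to semi-supervised learning (with deep models and real ISR-derived trajectories in place of the toy features used here).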