Machine Learning (ML) has become the dominant approach to data-driven classification and decision making, owing to its superior scores on widely used benchmarks. Reinforcement Learning (RL) systems have been successful in part because they cope with high-dimensional data by learning to leverage complex sub-signals within a sensory field more precisely than a typical hand-written policy. However, as with other data-driven techniques, such algorithms can fail catastrophically outside their nominal operating range, and debugging or otherwise ascertaining robust explanations for their decisions is quite challenging. Common methods for explaining ML-based decisions exist, such as LIME and Grad-CAM, but temporal extensions to these explanation methodologies have several shortcomings: 1) they provide no means of simulating possible futures to investigate what end state the closed-loop policy was pursuing; 2) they provide no ability to contextualize actions as furthering specific long-term objectives; and 3) they provide no neighborhood of validity for each decision, preventing operators from reasoning about the confidence and robustness of each decision. We propose the Training and Analysis Environment for Explainable Autonomous Decisions (TREAD), a framework for training RL agents and explaining their decisions via techniques that work with, rather than against, the algorithm's ability to process high-dimensional data and exploit maximum context for decision making. The framework allows high-definition inspection of the contributing factors behind each decision.
TREAD addresses these shortcomings in prior work by 1) leveraging world models to allow direct simulation of the agent's perceived dynamics; 2) using reward redistribution to directly associate decisions or decision sequences with the specific long-term objectives they advance; and 3) leveraging adversarial explanations to give structure and robustness to decision boundaries, allowing counterfactual investigation of the complete, temporal, non-linear decision process. These algorithms have not been combined before; TREAD demonstrates how they work together to resolve the shortcomings of state-of-the-art explanations. The proposed framework will significantly advance the state of the art in explainable RL and provide a prototype suitable for transition and commercialization. We predict that this framework will greatly improve the ability of human operators to understand, and subsequently trust, ML-based decision systems in both fully autonomous decision systems and human-machine teaming scenarios.