Assured Information Security, Inc. proposes CODEX, an explainability research effort to enable the practical application of Reinforcement Learning (RL)-based Artificial Intelligence (AI) for widescale use in both military and commercial systems. CODEX will investigate and develop novel, effective Explainable Reinforcement Learning (XRL) techniques using world models that show a user both what the RL agent expects will happen after it makes a decision and also generate counterfactual examples (i.e., "what ifs") that show what the agent expects would have happened had it made a different decision. Counterfactuals enable a user to better understand the agent's limitations and why it chose a given path. Comparing these counterfactual, or hypothetical, future trajectories enables global post-hoc explanation of the RL agent, intended to build trust in the agent overall rather than to explain a single decision.
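The core idea, comparing the trajectory the agent expects under its chosen actions against the trajectory it expects under an alternative plan, can be sketched in a few lines. This is a minimal illustration only: the one-step `world_model` below is a hypothetical toy dynamics function standing in for a learned model, and all function names are assumptions, not CODEX's actual implementation.

```python
def world_model(state, action):
    """Hypothetical one-step dynamics model: predicts the next state.

    A toy placeholder for a learned transition model; here an action
    simply shifts a scalar state.
    """
    return state + action

def rollout(state, actions):
    """Unroll the world model to predict the trajectory the agent
    expects under a given action sequence."""
    trajectory = [state]
    for action in actions:
        state = world_model(state, action)
        trajectory.append(state)
    return trajectory

def counterfactual_explanation(state, chosen_actions, alternative_actions):
    """Pair the expected future under the agent's chosen actions with
    the expected future under a counterfactual ("what if") plan, so a
    user can compare the two side by side."""
    return {
        "factual": rollout(state, chosen_actions),
        "counterfactual": rollout(state, alternative_actions),
    }

# From state 0, compare moving right three times vs. moving left three times.
explanation = counterfactual_explanation(0, [1, 1, 1], [-1, -1, -1])
# explanation["factual"]        → [0, 1, 2, 3]
# explanation["counterfactual"] → [0, -1, -2, -3]
```

Presenting both rollouts side by side is what lets a user reason globally about the agent's behavior, rather than seeing only the single action it took.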