Current 3D reconstruction techniques for both SAR and EO sensors require observing a region of interest over a full 360-degree orbit; SAR reconstruction additionally requires data collected over multiple passes at different altitudes. These requirements are not feasible in a military operational scenario, so methods are needed to generate 3D models of both scenes and individual targets from data collected over a limited range of viewpoints. Kitware and Ohio State University propose to develop novel deep-learning algorithms that fuse EO and SAR data from limited viewpoints, leveraging the complementary strengths of the two sensor modalities and using prior knowledge to reconstruct portions of the scene that were not observed. These new algorithms will be incorporated into the open-source TeleSculptor photogrammetry application, providing a rapid transition of the new capabilities to operational use.
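To make the fusion idea concrete, the sketch below shows one common pattern for combining two sensor modalities: encode each modality into a fixed-size feature vector, then concatenate and project the joint vector into a shared embedding (late fusion). This is a minimal NumPy illustration, not the proposed algorithms; every function name, dimension, and the use of random (untrained) weights are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(image, dim=32):
    """Stand-in for a per-modality encoder (e.g., a CNN branch):
    project the flattened image chip to a fixed-size feature vector.
    Weights are random here; a real system would learn them."""
    flat = image.ravel()
    w = rng.standard_normal((dim, flat.size)) / np.sqrt(flat.size)
    return np.maximum(w @ flat, 0.0)  # ReLU nonlinearity

def fuse(eo_feat, sar_feat, out_dim=16):
    """Late fusion: concatenate the EO and SAR feature vectors and
    apply a single (untrained) linear projection to a joint embedding."""
    joint = np.concatenate([eo_feat, sar_feat])
    w = rng.standard_normal((out_dim, joint.size)) / np.sqrt(joint.size)
    return w @ joint

# Synthetic stand-ins for co-registered EO and SAR image chips.
eo_image = rng.random((64, 64))
sar_image = rng.random((64, 64))

fused = fuse(extract_features(eo_image), extract_features(sar_image))
print(fused.shape)  # → (16,)
```

In practice the joint embedding would feed a decoder that predicts scene geometry, with prior knowledge (e.g., learned shape statistics) filling in the unobserved portions; those components are beyond the scope of this sketch.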