Producers of imagery intelligence must contend with the distortions and defects in available images. One approach to recovering some of the lost spatiotemporal video content during single-frame analysis is to use processing techniques that improve the spatial quality and resolution of individual frames by exploiting inter-frame correlations. However, the assumptions, enhancement capabilities, and computational speeds of many existing techniques are inadequate for accurate real-time reconstruction of still frames from general distressed video. Multiframe blind super-resolution methods have been demonstrated to produce high-quality reconstructions while requiring little or no a priori information about the scene or the optical system. Some of these methods have been combined with blind deconvolution methods to simultaneously increase image resolution, compensate for atmospheric and motion effects, and mitigate noise. Nanohmics, Inc. proposes to develop a comprehensive real-time video enhancement system for multi-GPU architectures by combining some of the best features of spatially adaptive and super-resolution extensions to Online Blind Deconvolution methods. The reconstruction quality and speed of prototype implementations will be established through numerical experiments and extrapolated to identify optimization strategies for achieving the program goals. Nanohmics, Inc. plans to leverage its existing real-time, multi-GPU framework for Online Blind Deconvolution turbulence compensation to accelerate prototype development.
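To make the Online Blind Deconvolution idea concrete, the sketch below shows the core alternating update that such methods apply as each new frame arrives: jointly refining a latent sharp-image estimate and a blur-kernel (PSF) estimate against a quadratic data-fidelity term. This is a minimal NumPy illustration under stated assumptions, not Nanohmics' implementation; the function names, learning rates, non-negativity/normalization constraints, and periodic-boundary FFT convolution are all illustrative choices, and a real system would add super-resolution upsampling, spatial adaptivity, and GPU execution.

```python
import numpy as np

def conv2_circ(img, psf):
    """Circular 2-D convolution via FFT (periodic boundary assumption)."""
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf)))

def corr2_circ(a, b):
    """Circular 2-D cross-correlation via FFT (adjoint of convolution)."""
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))))

def obd_update(x, k, frame, lr_x=0.5, lr_k=0.01):
    """One online blind deconvolution step for a single incoming frame.

    x     : current latent (sharp) image estimate
    k     : current blur-kernel (PSF) estimate, same shape as x here
    frame : newly observed degraded frame

    Takes alternating projected-gradient steps on the data-fidelity
    term 0.5 * ||k * x - frame||^2 (* = circular convolution); the
    gradients are cross-correlations of the residual with k and x.
    """
    # gradient step on the latent image, projected to non-negative values
    resid = conv2_circ(x, k) - frame
    x = np.clip(x - lr_x * corr2_circ(resid, k), 0.0, None)
    # gradient step on the PSF, projected to non-negative, unit-sum kernels
    resid = conv2_circ(x, k) - frame
    k = np.clip(k - lr_k * corr2_circ(resid, x), 0.0, None)
    k /= k.sum() + 1e-12
    return x, k
```

In an online setting this update would be called once (or a few times) per incoming video frame, so the latent-image estimate sharpens as evidence accumulates, which is what makes the approach amenable to real-time, streaming operation.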