My primary research interest is improving spatial resolution in long-range images. Ultimately, my goal is to build an imaging system that can resolve features on the order of microns from a kilometer away. Spatial resolution is limited by factors intrinsic to the camera system, such as diffraction, finite pixel sampling, readout noise, and aberrations, as well as by extrinsic factors such as atmospheric turbulence, scattering, shot noise, and object motion. Achieving high-resolution images at long distances will require overcoming research challenges in each of these areas. I am currently involved in ongoing projects to beat diffraction blur and to characterize scattering media. My additional interests in computational photography include time-of-flight imaging, multi-modal image fusion, hyperspectral imaging, and flexible camera arrays.
Toward Long Distance, Sub-diffraction Imaging Using Coherent Camera Arrays (project page)
J. Holloway, M.S. Asif, M.K. Sharma, N. Matsuda, R. Horstmeyer, O. Cossairt, and A. Veeraraghavan
IEEE Transactions on Computational Imaging, September 2016
Generalized Assorted Camera Arrays: Robust Cross-Channel Registration and Applications (project page)
J. Holloway, K. Mitra, S.J. Koppal, A. Veeraraghavan
IEEE Transactions on Image Processing, March 2015
Flutter Shutter Video Camera for Compressive Sensing of Videos (project page)
J. Holloway, A.C. Sankaranarayanan, A. Veeraraghavan, S. Tambe
International Conference on Computational Photography (ICCP), 2012
SocialSync: Sub-frame Synchronization in a Smartphone Camera Network (project page)
R. Latimer, J. Holloway, A. Veeraraghavan, and A. Sabharwal
ECCV Workshop: Light Fields for Computer Vision, 2014
Styrofoam: A Tightly Packed Coding Scheme for Camera-Based Visible Light Communication (project page)
R. LiKamWa, D. Ramirez, and J. Holloway
MobiCom Workshop: Visible Light Communication Systems, 2014
Image Classification in Natural Scenes: Are a Few Selective Spectral Channels Sufficient? (project page)
J. Holloway, T. Priya, A. Veeraraghavan, and S. Prasad
International Conference on Image Processing (ICIP), 2014
Synthetic Apertures for Visible Imaging Using Fourier Ptychography
In long-range imaging, spatial resolution is predominantly limited by diffraction blur, a fundamental limit determined by the diameter of the lens used in the imaging system. In principle, the diameter of a lens can be increased to circumvent diffraction; in practice, cost and manufacturing constraints cap the maximum diameter that can be achieved. Computational methods are therefore required to super-resolve the observed, blurry image and recover the spatial resolution lost to diffraction.
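The scale of the problem can be estimated with the Rayleigh criterion: the smallest resolvable feature at the object plane is roughly 1.22 λz/D for wavelength λ, standoff distance z, and aperture diameter D. A minimal sketch (the wavelength and aperture values below are illustrative assumptions, not figures from the text):

```python
# Rayleigh-criterion estimate of the diffraction-limited feature size at the
# object plane for a lens of diameter D imaging at distance z.
def diffraction_blur(wavelength_m, distance_m, aperture_m):
    """Smallest resolvable feature (meters): ~1.22 * lambda * z / D."""
    return 1.22 * wavelength_m * distance_m / aperture_m

lam = 550e-9  # green light (assumed)
z = 1000.0    # 1 km standoff
for D in (0.025, 0.1, 0.5):  # 25 mm, 100 mm, 500 mm apertures (assumed)
    feature_mm = diffraction_blur(lam, z, D) * 1e3
    print(f"D = {D * 100:5.1f} cm -> ~{feature_mm:.2f} mm features resolvable")
```

Even a half-meter lens only reaches millimeter-scale features at a kilometer, which is why micron-scale resolution at that range calls for a synthetic aperture rather than a larger physical one.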
Macroscopic Fourier ptychography is proposed as a practical means of creating a synthetic aperture for visible imaging, achieving spatial resolution below the diffraction limit. In this thesis, two principal barriers to implementing Fourier ptychography are addressed and resolved. First, a prototype imaging system is introduced that recovers high-resolution, long-distance images in a reflection imaging geometry. Second, an image-space regularization technique is developed to reconstruct optically rough surfaces that exhibit speckle. Experimental results demonstrate, for the first time, a macroscopic Fourier ptychography imaging system that achieves sub-diffraction resolution of optically rough objects in a reflection geometry, increasing spatial resolution six-fold over any single captured image.
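The synthetic-aperture idea behind Fourier ptychography can be sketched in its simplest form: if the complex field at each camera position were known, building a larger aperture would reduce to stitching together the corresponding regions of the object's Fourier spectrum (the hard part, which the thesis addresses, is recovering phase from intensity-only captures). The sketch below, with illustrative sizes and pupil positions of my own choosing, shows only the stitching step and why a union of small pupils out-resolves any single one:

```python
import numpy as np

n = 64
obj = np.zeros((n, n))
obj[24:40, 24:40] = 1.0                       # toy amplitude object
F = np.fft.fftshift(np.fft.fft2(obj))         # its Fourier spectrum

yy, xx = np.mgrid[:n, :n]
def pupil(cy, cx, r=8):
    """Circular pupil: the band a small lens passes in the Fourier plane."""
    return (yy - cy) ** 2 + (xx - cx) ** 2 <= r * r

# Single small-aperture capture: one centered pupil.
single = pupil(n // 2, n // 2)

# Synthetic aperture: union of overlapping pupils scanned across the plane.
centers = [(n // 2 + dy, n // 2 + dx) for dy in (-6, 0, 6) for dx in (-6, 0, 6)]
union = np.zeros((n, n), dtype=bool)
for cy, cx in centers:
    union |= pupil(cy, cx)

def image_through(mask):
    """Image formed when the spectrum is limited to the masked band."""
    return np.real(np.fft.ifft2(np.fft.ifftshift(mask * F)))

err_single = np.linalg.norm(obj - image_through(single))
err_union = np.linalg.norm(obj - image_through(union))
```

Because the union mask passes strictly more of the object's spectrum, the stitched image is sharper (`err_union < err_single`); the full algorithm achieves this from intensity measurements alone via iterative phase retrieval.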
Increasing Temporal, Structural, and Spectral Resolution in Images Using Exemplar-Based Priors
Abstract: In the past decade, camera manufacturers have offered smaller form factors, smaller pixel sizes (leading to higher-resolution images), and faster processing chips to increase the performance of consumer cameras. However, these conventional approaches have neither capitalized on the spatio-temporal redundancy inherent in images nor adequately solved the problem of finding 3D point correspondences for cameras sampling different bands of the visible spectrum. We pose the following question: Given the repetitious nature of image patches, and appropriate camera architectures, can statistical models be used to increase temporal, structural, or spectral resolution?
We propose a two-stage solution to facilitate image reconstruction: 1) design a linear camera system that optically encodes scene information, and 2) recover full scene information using prior models learned from the statistics of natural images. By leveraging the tendency of small regions to repeat throughout an image or video, we are able to learn prior models from patches pulled from exemplar images. The quality of this approach is demonstrated in two application domains: using low-speed video cameras for high-speed video acquisition, and multi-spectral fusion using an array of cameras. We also investigate a conventional approach for finding 3D correspondences that enables a generalized assorted array of cameras to operate in multiple modalities, including multi-spectral, high dynamic range, and polarization imaging of dynamic scenes.
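The patch-recurrence idea the abstract relies on can be caricatured in a deliberately tiny 1-D sketch: a database of clean exemplar patches acts as the prior, and a noisy signal is reconstructed by snapping each overlapping patch to its nearest exemplar and averaging the overlaps. This is only an illustration of the principle, not the thesis's learned statistical models; the signal construction below (and the choice to draw the test scene from the same source as the exemplars, which keeps the sanity check deterministic) is my own assumption:

```python
import numpy as np

rng = np.random.default_rng(1)
p = 8  # patch length (1-D patches for brevity)

# Exemplar "image": a piecewise-constant training signal whose overlapping
# patches form the prior database (values are illustrative assumptions).
train = np.repeat(rng.integers(0, 4, size=32).astype(float), 8)
exemplars = np.stack([train[i:i + p] for i in range(len(train) - p + 1)])

# Test scene drawn from the same source, so its clean patches appear in the
# database; the observation is corrupted by additive Gaussian noise.
signal = train[:64].copy()
noisy = signal + 0.3 * rng.standard_normal(signal.shape)

# Reconstruct: replace each overlapping patch with its nearest exemplar,
# then average the overlapping estimates at every sample.
recon = np.zeros_like(noisy)
weight = np.zeros_like(noisy)
for i in range(len(noisy) - p + 1):
    patch = noisy[i:i + p]
    best = exemplars[np.argmin(((exemplars - patch) ** 2).sum(axis=1))]
    recon[i:i + p] += best
    weight[i:i + p] += 1
recon /= weight

err_prior = np.abs(recon - signal).mean()  # exemplar-prior reconstruction error
err_noisy = np.abs(noisy - signal).mean()  # raw observation error
```

Because each noisy patch snaps to a clean exemplar, the reconstruction error drops well below the raw observation error; the thesis pursues the same intuition with richer priors and optically coded measurements.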
Light, Palo Alto, CA
One-month consultancy on the early computational photography stages of the flagship product
Adobe Research, San Jose, CA
Summer intern working in the Imagination Lab of Adobe Research under the guidance of Sunil Hadap
Texas Instruments, Dallas, TX
Summer intern working in the Imaging Branch of the R&D center under the guidance of Umit Batur