Current projects

Image deblurring

Camera shake and object motion generate motion blur in images. When the motion blur is perfectly known, restoring the sharp image is possible. However, when the blur is known only up to some error, the restored image may contain artifacts. The same happens when the blur is not known at all, since one then typically refines the blur estimate gradually via alternating minimization. We drastically reduce artifacts by introducing a novel image prior that is robust to blur errors.
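
The sketch below illustrates the alternating-minimization loop mentioned above, using simple Tikhonov (L2) steps for both the image and the kernel; the names, priors, and parameters are illustrative and not the project's actual robust prior.

```python
import numpy as np

def tikhonov_deconv(Y, K, lam):
    # Frequency-domain solution of min_x ||k * x - y||^2 + lam ||x||^2.
    return np.conj(K) * Y / (np.abs(K) ** 2 + lam)

def blind_deblur(y, iters=20, lam_x=1e-2, lam_k=1e-1):
    Y = np.fft.fft2(y)
    k = np.zeros_like(y); k[0, 0] = 1.0            # start from a delta kernel
    K = np.fft.fft2(k)
    for _ in range(iters):
        X = tikhonov_deconv(Y, K, lam_x)           # image step, kernel fixed
        K = tikhonov_deconv(Y, X, lam_k)           # kernel step, image fixed
        k = np.real(np.fft.ifft2(K))
        k = np.clip(k, 0, None); k /= k.sum()      # kernel: nonnegative, sums to 1
        K = np.fft.fft2(k)
    return np.real(np.fft.ifft2(X)), k
```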

Light field superresolution

In a single snapshot, a light field camera samples both the position and the direction of light. When objects are Lambertian, we show how light field images can be used to recover the surfaces of the scene and their texture at approximately the sensor resolution, enabling extended depth of field and digital refocusing.
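
As a concrete illustration of digital refocusing, here is a minimal shift-and-add sketch: each sub-aperture view is translated in proportion to its angular offset and the views are averaged. The 4D array layout and the parameter alpha are assumptions, not the project's actual pipeline.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def refocus(L, alpha):
    # L[u, v, y, x]: sub-aperture views of the light field; alpha selects the
    # synthetic focal plane (alpha = 0 keeps the original focus).
    U, V, H, W = L.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy, dx = alpha * (u - cu), alpha * (v - cv)
            out += nd_shift(L[u, v], (dy, dx), order=1)
    return out / (U * V)
```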

Coded photography

Out-of-focus blur increases with the distance from the focal plane. Thus, by measuring the extent of the blur one can infer the depth of an object. This task is ambiguous, as one cannot distinguish between a sharp image of a blurry object and a blurry image of a sharp object. However, the ambiguity is reduced if the shape of the blur is not a disk. We explore methods for depth estimation from defocus by testing different aperture masks and achieve state-of-the-art results.
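
A hedged sketch of the underlying idea: for each depth hypothesis, deconvolve the image with the correspondingly scaled aperture mask and keep the hypothesis that best explains the observation. The masks dictionary (one PSF per depth) and the regularization weight are illustrative assumptions.

```python
import numpy as np

def depth_from_defocus(y, masks, lam=1e-2):
    # masks: {depth: psf}; a coded (non-disk) psf makes the residuals below
    # discriminate far more sharply between depth hypotheses.
    Y = np.fft.fft2(y)
    best_depth, best_err = None, np.inf
    for depth, psf in masks.items():
        K = np.fft.fft2(psf, s=y.shape)
        X = np.conj(K) * Y / (np.abs(K) ** 2 + lam)  # Tikhonov deconvolution
        err = np.sum(np.abs(Y - K * X) ** 2)         # data-fit residual
        if err < best_err:
            best_depth, best_err = depth, err
    return best_depth
```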

Uncalibrated photometric stereo

Photometric stereo recovers 3D geometry from images taken under different illumination conditions. When the illumination is unknown, the problem suffers from the so-called generalized bas-relief ambiguity. Prior work resolves the ambiguity with heuristics that depend on the ambiguity itself, and thus yields non-unique, inconsistent answers. We solve the problem by exploiting Lambertian reflectance maxima and achieve state-of-the-art results with the highest computational efficiency.
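
For context, here is the classic calibrated Lambertian pipeline in a few lines; in the uncalibrated setting the same factorization is only determined up to a generalized bas-relief transform, which the reflectance-maxima approach resolves. Variable names are illustrative.

```python
import numpy as np

def photometric_stereo(I, L):
    # I: (m, p) intensities, one row per light; L: (m, 3) known light directions.
    B, _, _, _ = np.linalg.lstsq(L, I, rcond=None)  # B: albedo-scaled normals (3, p)
    albedo = np.linalg.norm(B, axis=0)
    N = B / np.maximum(albedo, 1e-12)               # unit surface normals
    return N, albedo
```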

Subspace clustering

Principal component analysis is perhaps the most ubiquitous method for analyzing data and reducing its dimensionality. It is designed for data that lives in a single subspace. When data lives in multiple subspaces, a novel approach is needed. We model data in multiple subspaces via the self-expressive model A = AC, where A is the matrix whose columns are the data points and C is a matrix of linear-combination coefficients. Despite the nonlinearity of the model, we show how to exactly recover both A and C in a number of optimization problems.
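
The following sketch uses a least-squares relaxation of the self-expressive model (the optimization problems we study differ): solve min_C ||A - AC||_F^2 + lam ||C||_F^2 in closed form, then cluster the affinity |C| + |C|^T spectrally. Zeroing the diagonal after the solve is a simplification of the usual diag(C) = 0 constraint.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def self_expressive_clustering(A, n_clusters, lam=1e-2):
    # A: (d, n) data matrix, one point per column.
    n = A.shape[1]
    G = A.T @ A
    C = np.linalg.solve(G + lam * np.eye(n), G)  # ridge solution of A ~ AC
    np.fill_diagonal(C, 0.0)                     # discourage trivial self-representation
    W = np.abs(C) + np.abs(C).T                  # symmetric affinity
    return SpectralClustering(n_clusters, affinity="precomputed").fit_predict(W)
```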

Shape learning

Given a few views of an object, humans are surprisingly good at predicting what new views will look like, even when they see that object for the first time. Whether they do it implicitly or explicitly, they somehow know how to deal with the 3D shape of objects. Does this ability come from a lot of visual experience? Or does all the information lie in the few examples we have observed? We explore the latter hypothesis. Currently, we are focusing on the first image analysis step: how can one perform segmentation/grouping regardless of the object texture?

Segmentation of dynamic textures

Natural events such as smoke, flames, or water waves exhibit motion patterns whose complexity can be described by the stochastic models of dynamic textures. We show how stochastically homogeneous regions can be found in video sequences and automatically segmented with invariance to viewpoint.
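
A minimal sketch of the standard dynamic-texture model behind this: each region is fit with a linear dynamical system x_{t+1} = A x_t, y_t = C x_t, estimated via an SVD; segmentation then groups pixels whose model parameters agree. The fitting below is the classic suboptimal procedure, shown only for illustration.

```python
import numpy as np

def fit_dynamic_texture(Y, n_states=10):
    # Y: (d, T) matrix with one vectorized frame per column.
    U, S, Vt = np.linalg.svd(Y, full_matrices=False)
    C = U[:, :n_states]                            # observation matrix
    X = np.diag(S[:n_states]) @ Vt[:n_states]      # state trajectory (n_states, T)
    A = X[:, 1:] @ np.linalg.pinv(X[:, :-1])       # least-squares transition
    return A, C, X
```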



Past projects

Multiview stereo

Depth estimation from multiple views is known to be sensitive to occlusions and clutter. We propose a globally optimal solution that is robust to these challenges by studying how depth maps are integrated into a single 3D representation (a level set function).
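
One simple way to integrate depth maps into an implicit surface is a truncated signed distance function, whose zero level set gives the fused geometry. The sketch below (with a shared per-ray parameterization, an illustrative simplification) conveys the idea, not our globally optimal formulation.

```python
import numpy as np

def fuse_depth_maps(sample_z, depth_maps, trunc=0.05):
    # sample_z: (n,) depths of volume samples along camera rays;
    # depth_maps: list of (n,) observed depths, one value per ray.
    tsdf = np.zeros_like(sample_z)
    weight = np.zeros_like(sample_z)
    for d in depth_maps:
        sd = np.clip(d - sample_z, -trunc, trunc)  # truncated signed distance
        valid = d > 0                              # ignore missing measurements
        tsdf[valid] += sd[valid]
        weight[valid] += 1.0
    return np.where(weight > 0, tsdf / np.maximum(weight, 1.0), trunc)
```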

Real-time structure from motion and virtual object insertion

Given a monocular video sequence portraying a rigidly moving scene, one can recover both the camera motion and the 3D position of points or surfaces in the scene. We demonstrated the first real-time system based on Kalman filtering at the leading computer vision conferences. The system incorporates a point tracker and can deal with outliers and changes in illumination.
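
At the core of such a system sits a predict-update Kalman recursion over the camera and point states; the generic step below is a sketch with illustrative names (the real filter is nonlinear and handles outliers in the measurement step).

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    # Predict state and covariance forward one frame.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the new measurement z (e.g., tracked point positions).
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```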

Retinal imaging

The 3D shape of the optic nerve is used to diagnose glaucoma from fundus images. We show the difference between images captured by monocular and stereo fundus cameras, and characterize when one can truly recover 3D structure from these images.
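
The key quantitative difference is that a true stereo fundus camera has a baseline, so depth follows from disparity, while a monocular camera does not. A toy version with placeholder numbers:

```python
def depth_from_disparity(disparity_px, focal_px=1200.0, baseline_mm=4.0):
    # Z = f * B / d: depth in mm from pixel disparity; the focal length and
    # baseline here are illustrative, not actual fundus-camera values.
    return focal_px * baseline_mm / disparity_px
```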