Lecture: Monday 12:15 - 14:00 in room H-C 7326, Wednesday 10:15 - 12:00 in room H-F 104/05. Exercises: Wednesday 12:15 - 14:00 in room H-A 7116
SWS/LP:
4+2 SWS / 10 LP
Recommended for:
Master's students in informatics who are interested in optimization, mathematics, and computer vision
Being able to efficiently determine the argument that minimizes a (possibly nonsmooth) convex cost function is of great practical relevance. For example, convex variational methods are among the most powerful techniques for many computer vision and image processing problems, e.g. denoising, deblurring, inpainting, stereo matching, optical flow computation, segmentation, or super resolution. In this lecture we will discuss first-order convex optimization methods for implementing and solving the aforementioned problems efficiently. Particular attention will be paid to problems involving constraints and non-differentiable terms, giving rise to methods that exploit the concept of duality, such as the primal-dual hybrid gradient method or the alternating direction method of multipliers. This lecture will cover the mathematical background for proving why the investigated methods converge, as well as their efficient practical implementation.
We will cover the following topics:
Mathematical background
Convex sets and functions
Existence and uniqueness of minimizers
Subdifferentials
Convex conjugates
Saddle point problems and duality
Numerical methods
(Sub-)Gradient descent schemes
Proximal point algorithm
Primal-dual hybrid gradient method
Augmented Lagrangian methods
Acceleration schemes, adaptive step sizes, and heavy ball methods
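As a small taste of the methods listed above, here is a minimal sketch of a proximal gradient scheme (ISTA) applied to the nonsmooth problem min_x ½‖Ax − b‖² + λ‖x‖₁. The data, parameter values, and function name are illustrative assumptions, not material from the course:

```python
import numpy as np

def ista(A, b, lam, step, n_iter=500):
    """Proximal gradient (ISTA) for min_x 0.5*||Ax - b||^2 + lam*||x||_1.

    A gradient step handles the smooth quadratic term; the proximal map
    of the l1 norm is the elementwise soft-thresholding operator.
    """
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)           # gradient of the smooth part
        z = x - step * grad                # forward (gradient) step
        # backward (proximal) step: soft thresholding with threshold step*lam
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return x

# Tiny illustrative example: recover a sparse vector from noisy measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10)
x_true[[2, 7]] = [1.5, -2.0]
b = A @ x_true + 0.01 * rng.standard_normal(30)
step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1/L, with L the Lipschitz constant of the gradient
x_hat = ista(A, b, lam=0.1, step=step)
```

The step size 1/L guarantees convergence for the smooth part; accelerated variants (e.g. FISTA) and adaptive step sizes, as mentioned in the list above, improve on this basic scheme.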
Example applications in computer vision and signal processing
The exercise sheets consist of two parts: theoretical and programming exercises. The sheets will be handed out in the lecture on Monday, and you have one week to solve them. The solutions will be discussed in the exercise session on the Wednesday two days later.
Fast Optimization Challenge
During the course of the lecture, we will pose a challenge to solve an optimization problem as quickly as possible. The challenge ends on Monday, 17.07, at 23:59. The best solution will receive a prize. The challenge will be good preparation for the final exam!
Master's students in informatics interested in computer vision and mathematics
Conditions:
Interest in computer vision and mathematics
In the winter semester 2016/17 we will be offering a master-level lecture on Variational Methods for Computer Vision. This lecture will give an introduction to, and an overview of, how to tackle different image and video processing problems with energy minimization methods.
The course will include lectures on Monday and Thursday from 16:00 - 17:30 in room H-C 7327, as well as weekly homework to be discussed in the exercise session on Wednesday 14:15 - 15:45 in room H-C 7326.
Slides from previous lectures and the exercise sheets can be found on the Exercise and Materials page. You will receive a password during the first lecture.
A brief description of the content to be discussed
The general idea of energy minimization methods is to define an energy E that maps an image to a real number in such a way that a low energy corresponds to an image with the desired properties. As a simple example, consider the task of image denoising. Assume you are given the following image
and someone asks you to remove the noise from it. The noise-free image you are looking for should look similar to the noisy image you are given (after all, you still want to see motorcycles). You could come up with the idea of requiring ||u-f|| to be small, where u denotes your desired denoised image and f the noisy input image. On the other hand, you want the new image to contain fewer oscillations, so one could argue that the gradient of u should be small, too. This leads to an energy minimization approach
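Written out, the two requirements combine into an energy of the following form (a sketch: the squared norms and the weighting parameter λ > 0 are one standard choice, the classical quadratic/Tikhonov model, and are assumptions rather than a formula given in the text):

```latex
E(u) \;=\; \int_\Omega \big(u(x) - f(x)\big)^2 \,\mathrm{d}x
\;+\; \lambda \int_\Omega |\nabla u(x)|^2 \,\mathrm{d}x,
\qquad \lambda > 0.
```

The first term keeps u close to the noisy input f, the second penalizes oscillations; λ balances the two.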
The argument (= image) u that minimizes the above energy already yields quite nice results:
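A minimal numerical sketch of minimizing such an energy by plain gradient descent on a discrete image. The weight, step size, iteration count, and periodic boundary handling are illustrative assumptions, not the course's implementation:

```python
import numpy as np

def denoise(f, lam=1.0, step=0.05, n_iter=200):
    """Gradient descent on the discrete quadratic denoising energy
    E(u) = sum (u - f)^2 + lam * sum |grad u|^2."""
    u = f.copy()
    for _ in range(n_iter):
        # Discrete Laplacian (periodic boundary via np.roll, for simplicity).
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
               + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
        grad_E = 2 * (u - f) - 2 * lam * lap   # dE/du
        u = u - step * grad_E
    return u

# Illustrative example: denoise a synthetic noisy box image.
rng = np.random.default_rng(1)
clean = np.zeros((32, 32))
clean[8:24, 8:24] = 1.0
f = clean + 0.3 * rng.standard_normal((32, 32))
u = denoise(f)
```

Quadratic regularization like this blurs edges; the lecture's more general regularization concept (e.g. total variation) addresses exactly that shortcoming.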
In the lecture we will discuss the idea and several applications of energy minimization methods in more detail. If one understands an image as a function, the corresponding energy is a functional, i.e., a function that maps functions to real numbers, giving rise to the name variational method. We will cover these aspects and extend the gradient penalization term from the above example to a more general concept called regularization. In the course of the lecture we will discuss how to formulate solutions to the following imaging and computer vision problems as variational methods:
Denoising (as seen above)
Deblurring, e.g. correcting for camera movement during image capture. As an example, one can recover the lower right image from the lower left image:
Super resolution, i.e. increasing the resolution of one or several images
Inpainting, i.e. filling in missing, lost or damaged parts as illustrated below
Demosaicing - a particular type of inpainting required in every standard digital camera (where standard means the camera has a color filter array)
Segmentation, i.e., the automatic detection of certain areas of interest. Below you find an example of automatically detected green contours, which are supposed to enclose cells in phase-contrast microscopy images.
Stereo Vision, i.e., the estimation of a depth map (right image) from two (rectified) views of the same scene (left and middle images):
Optical Flow, i.e., the estimation of motion between two images or successive frames of a video. Optical flow results are often color-coded, where the color indicates the direction of motion and the brightness its magnitude. This can lead to quite amusing illustrations of the results; see for instance this YouTube video by the group of Thomas Pock.
3D Reconstruction, i.e., the estimation of the 3D geometry of a scene from several images from different (known) viewpoints.
Hyperspectral image unmixing, i.e. using spectral information about the reflectance at more than 100 different light frequencies to determine the amounts of certain materials visible in the image:
Possibly more - depending on the time we have. :-)