Courses

Semester: 
Winter Term 2019/2020
Lecturer: 
Place/Time: 
Lecture: Mondays 10:00-12:00 in H-F 001, Exercises: Thursdays 14:00-16:00 in AR-A 1009
SWS/LP: 
2+2SWS, 5 Credits
News: 

The lecture on 4.11.2019 is cancelled.

Literature on the 3D geometry of cameras, images, and their relationship:
Book: "An Invitation to 3-D Vision" by Ma, Soatto, Kosecka, Sastry
Book: "Multiple View Geometry in Computer Vision" by Hartley and Zisserman

Segmentation

Digital images are ubiquitous these days and often contain a wealth of information for the viewer. This lecture deals with the formation, processing, and representation of digital images. Among other things, it covers the reconstruction of digital images, which includes, for example, the denoising of noisy images, as well as the automatic analysis of digital images, e.g. image segmentation. Specific topics covered in this course include:

  • Representation of digital images
  • Image formation
  • Image interpolation
  • Filters
  • Color images and color transformations
  • Segmentation methods
  • Clustering methods

This course includes weekly theoretical and practical assignments, whose solutions are discussed in the exercise sessions. For questions regarding the exercises or the lecture, please contact Hannah Dröge by e-mail: hannah.droege@uni-siegen.de
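
As a small taste of the practical assignments, here is a minimal NumPy sketch of a 3x3 box filter, one of the simplest linear filters for denoising (an illustration only, not course material; noisy_image is a hypothetical grayscale array):

    import numpy as np

    def box_filter(f, k=3):
        """Denoise a grayscale image f by averaging over a k x k neighborhood."""
        pad = k // 2
        fp = np.pad(f, pad, mode='edge')  # replicate border pixels
        out = np.zeros(f.shape, dtype=float)
        for dy in range(k):
            for dx in range(k):
                out += fp[dy:dy + f.shape[0], dx:dx + f.shape[1]]
        return out / (k * k)

    # Hypothetical usage: f_denoised = box_filter(noisy_image, k=3)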

Practice Manager: 
Semester: 
Winter Term 2019/2020
Lecturer: 
Place/Time: 
Lecture: Thursdays 12-14 in room AR-HB 030, Exercises: Mondays 14-16 in room PB-A 119
SWS/LP: 
2+2SWS, 5 Credits
Recommended for: 
Master students in informatics, master students in mechatronics
News: 
  • There will be a final lecture on 27.01.2020, 14:15, in the usual exercise room PB-A 119. There is no lecture on 30.01.2020.
  • The last lecture before Christmas will be on Deep Learning in Industry. It will be on Wednesday, Dec. 18th. This is the invitation!
  • The Christmas challenge has been launched! For details, go to the exercise and materials page.

Deep learning is one of the most successful recent techniques in computer vision and automated data processing in general. The basic idea of supervised machine learning is to define a parameterized function, called a network, and optimize the parameters in such a way that the resulting function maps given inputs x to desired outputs y on a training set of pairs (x,y) -- a process referred to as training the network. The term deep learning describes the design of network architectures as a deeply nested function of simple building blocks. The ultimate goal of machine learning is to approximate the true underlying but unknown relation between the input and the output, such that the trained network is able to make good predictions even on examples outside of the training set -- an aspect referred to as generalization.
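
To make the idea of training concrete, here is a minimal toy sketch (an illustration under simplifying assumptions, not course material) of fitting a two-parameter model to pairs (x, y) by gradient descent in NumPy:

    import numpy as np

    # Training pairs (x, y), here generated from the "unknown" relation y = 2x + 1.
    x = np.linspace(-1.0, 1.0, 50)
    y = 2.0 * x + 1.0

    w, b = 0.0, 0.0              # the parameters of a very shallow "network"
    lr = 0.1                     # learning rate
    for _ in range(500):         # training loop
        pred = w * x + b         # forward pass
        grad_w = 2.0 * np.mean((pred - y) * x)  # gradient of the mean squared error
        grad_b = 2.0 * np.mean(pred - y)
        w -= lr * grad_w         # gradient descent step
        b -= lr * grad_b
    # (w, b) now approximate the true relation, so the model also makes
    # good predictions for inputs outside of the training set.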

The quickly evolving (if not exploding) field of deep learning has led to amazing applications. Here is a fascinating example of combining computer graphics, audio, and video processing with deep learning: https://www.youtube.com/watch?v=9Yq67CjDqvw

This lecture will give an introduction to deep learning, describe common building blocks in the network architectures, introduce optimization algorithms for their training, and discuss strategies that improve the generalization. In particular, we will cover the following topics:

- Supervised machine learning as an interpolation problem
- Simple network architectures: Fully connected layers, rectified linear units, sigmoids, softmax
- Gradient descent for nested functions: The chain rule and its implementation via backpropagation
- Stochastic gradient descent on large data sets, acceleration via momentum and ADAM
- Capacity, overfitting and underfitting of neural networks
- Training, testing, and validation data sets
- Improving generalization: data augmentation, dropout, early stopping
- Working with images: Convolutions and pooling layers. Computing derivatives and adjoint linear operators
- Getting the network to train: Data preprocessing, weight initialization schemes, and batch normalization
- Applications and state-of-the-art architectures for image classification, segmentation, and denoising
- Architecture designs: Encoder-decoder idea, unrolled algorithms, skip connections + residual learning, recurrent neural networks
- Implementations in NumPy and PyTorch: Hands-on practical experience by implementing gradient descent on a fully connected network in NumPy. Introduction to the deep learning framework PyTorch for training complex models on GPUs.
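
In the spirit of the last item, here is a minimal sketch of what such a NumPy implementation might look like: a one-hidden-layer fully connected network with ReLU and sigmoid, trained by backpropagation (the toy data and all parameter choices are illustrative assumptions):

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 2))             # toy training inputs
    Y = (X[:, :1] * X[:, 1:2] > 0).astype(float)  # toy binary targets

    W1 = 0.5 * rng.standard_normal((2, 16)); b1 = np.zeros(16)   # layer 1
    W2 = 0.5 * rng.standard_normal((16, 1)); b2 = np.zeros(1)    # layer 2

    lr = 0.5
    for _ in range(2000):
        # forward pass: fully connected layer, ReLU, fully connected layer, sigmoid
        h = np.maximum(X @ W1 + b1, 0)
        p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
        # backward pass (chain rule / backpropagation) for the cross-entropy loss
        dz2 = (p - Y) / len(X)
        dW2, db2 = h.T @ dz2, dz2.sum(0)
        dz1 = (dz2 @ W2.T) * (h > 0)
        dW1, db1 = X.T @ dz1, dz1.sum(0)
        # gradient descent step
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2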

Besides the lecture notes, the relevant literature for this course includes:

- "Deep Learning" by Ian Goodfellow, Yoshua Bengio and Aaron Courville (freely available online at http://www.deeplearningbook.org/)
- Coursera course "Machine Learning" by Andrew Ng
Further references to recent literature will be given in the lecture.

The discussion on the number of kinks (respectively, the number of linear regions) in network architectures can be found here: https://arxiv.org/pdf/1402.1869.pdf

 
Practice Manager: 
Exercise operational: 

Please refer to  Exercise and Materials. If you need a password, please contact hartmut.bauermeister@uni-siegen.de.

Semester: 
Summer Term 2019
Place/Time: 
Thursdays, 14:15 - 15:45 in H-C 7326, Fridays 12:15 - 13:45 in H-F 112
SWS/LP: 
4 SWS, 5 LPs
Recommended for: 
Master students in informatics and mechatronics interested in Machine Learning
Conditions: 
Prior knowledge in programming, basic mathematics, and machine learning, where the latter can be gained from different modules including statistical learning theory, artificial intelligence, or deep learning
News: 

Course material is now ready to be downloaded here.

IMPORTANT: On Thursday, April 18th, the lecture will be in room H-A 7118! It will be an introduction to PyTorch given by Hartmut Bauermeister. Please make sure to familiarize yourself with the basics of NumPy in preparation for this class.

 
Turning Siegen into a painting - generated with https://deepart.io - based on the 2015 deep learning publication https://arxiv.org/abs/1508.06576

 

This module will present recent advances in machine learning in different fields of data sciences including imaging, vision, graphics, mechatronics, and sensorics. It addresses advanced techniques in the fields of machine learning, deep learning and artificial intelligence, with a particular focus on recent research papers, novel application areas and open questions in the aforementioned fields. Based on basic prior knowledge gained in other courses, this module specifically focuses on the state-of-the-art in machine learning by introducing recent publications from the leading international conferences on machine learning (e.g. NeurIPS, ICML, ICLR), computer vision (e.g. CVPR, ICCV, ECCV), or their application in fields like computer graphics, 3D reconstruction, robotics, navigation, medicine, or body-worn sensorics. After covering the theory of such works in the first half of the semester, a project phase will ask every student to implement and apply one of the discussed techniques on their own in one of the leading machine learning frameworks in the second half of the semester. The results of the project phase need to be presented to the class, and a short final report on the project will serve as the course's examination.

This will be an exciting interdisciplinary lecture for advanced master students from mechatronics and computer science! We have six different lecturers from the Center for Sensor Systems (ZESS), each of whom will present a specific recent machine-learning-related research topic from their field and offer a related project. Our lecturers are

  • Prof. Hubert Roth, Control Engineering, who will present deep learning in robotics - applications, challenges and potentials!
  • Prof. Kristof Van Laerhoven, Ubiquitous Computing, who will present Activity Recognition and Time Series Analysis with Convolutional Neural Networks!
Abstract: Nowadays, deep learning methods are not only used for image or text data. In recent years in particular, several exciting papers have been published that focus on applying neural networks to classify time series data from inertial measurement units. These inertial sensors are typically embedded in wearable devices (watches, fitness devices, smart glasses, etc.) and include 3D accelerometer, gyroscope, and magnetometer sensors.
 
In this lecture we would like to demonstrate the handling of time series data and convolutional neural networks in particular. Here is an example from our research (a small illustrative sketch of such a network follows after this list):

  • Prof. Volker Blanz, Media Computer Science, who will present deep learning techniques with applications in visual computing and perception!
  • Prof. Otmar Loffeld and Dr. Miguel Heredia Conde, Communications Engineering and Signal Processing, will present machine learning approaches for SAR imaging and Compressed Sensing.
  • Dr. Paramanand Chandramouli, Computer Graphics, who will present Deep Learning techniques for Computational Photography!
  • Prof. Michael Möller, Visual Scene Analysis, who will present approaches for fusing learning and model based reconstruction techniques.
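
The following minimal PyTorch sketch illustrates the time-series topic above: a small convolutional network classifying windows of 3-axis accelerometer data (all shapes, layer sizes, and the four activity classes are illustrative assumptions, not an architecture from the lecture):

    import torch
    import torch.nn as nn

    # Toy batch: 8 windows of 100 time steps from a 3-axis accelerometer.
    x = torch.randn(8, 3, 100)

    # 1D CNN for activity recognition: convolutions slide over the time axis.
    model = nn.Sequential(
        nn.Conv1d(3, 16, kernel_size=5, padding=2),  # 3 input channels (x, y, z)
        nn.ReLU(),
        nn.MaxPool1d(2),
        nn.Conv1d(16, 32, kernel_size=5, padding=2),
        nn.ReLU(),
        nn.AdaptiveAvgPool1d(1),  # pool over the remaining time axis
        nn.Flatten(),
        nn.Linear(32, 4),         # four hypothetical activity classes
    )
    logits = model(x)             # shape: (8, 4)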

More details are to follow! 


 
Let's make sensing smart!      
 
Semester: 
Summer Term 2019
Lecturer: 
Place/Time: 
Lecture: Monday 10:00 - 11:45 in room H-A 7106, Friday 8:30 - 10:00 in room H-A 7106. Exercises: Thursday 8:30-10:00 in room H-A 7116
SWS/LP: 
4+2 SWS / 10LP
Recommended for: 
Master students in informatics, interested in optimization, mathematics and computer vision
News: 

If you are interested in attending this lecture, write an email to Jonas Geiping or Michael Möller.

Being able to efficiently determine the argument that minimizes a (possibly nonsmooth) convex cost function is of great practical relevance. For example, convex variational methods are among the most powerful techniques for many computer vision and image processing problems, e.g. denoising, deblurring, inpainting, stereo matching, optical flow computation, segmentation, or super resolution. Furthermore, a clear understanding of convex optimization provides a baseline for the further study of advanced non-convex or stochastic optimization techniques as encountered in deep learning, design, or control problems.

In this lecture we will discuss first order convex optimization methods to implement and solve the aforementioned problems efficiently. Particular attention will be paid to problems including constraints and non-differentiable terms, giving rise to methods that exploit the concept of duality such as the primal-dual hybrid gradient method or the alternating directions methods of multipliers. This lecture will cover both the mathematical background, proving why the investigated methods converge, as well as their efficient practical implementation.

Convex Optimization

We will cover the following topics:

Mathematical background

  • Convex sets and functions
  • Existence and uniqueness of minimizers
  • Subdifferentials
  • Convex conjugates
  • Saddle point problems and duality

Numerical methods

  • (Sub-)Gradient descent schemes
  • Proximal point algorithm
  • Primal-dual hybrid gradient method
  • Augmented Lagrangian methods
  • Acceleration schemes, adaptive step sizes, and heavy ball methods

Example applications in computer vision and signal processing problems, including

  • Image denoising, deblurring, inpainting, segmentation
  • (Multinomial) logistic regression
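
To give a flavor of these methods, here is a minimal sketch of proximal gradient descent (ISTA) for the l1-regularized least-squares problem min_x 0.5*||Ax - b||^2 + lam*||x||_1, illustrating how a non-differentiable term is handled via its proximal operator (an illustration only, not course code):

    import numpy as np

    def ista(A, b, lam, steps=500):
        """Proximal gradient descent for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
        L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the smooth part
        x = np.zeros(A.shape[1])
        for _ in range(steps):
            z = x - A.T @ (A @ x - b) / L                        # gradient step
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0)  # soft-thresholding
        return x

    # Hypothetical usage: x = ista(A, b, lam=0.1) for given A and b.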

Lecture

Location:  Room H-F 115, Hölderlinstraße 3

Time and Date: Monday 12:15 - 14:00, Tuesday 12:15 - 14:00

Start: April 1st, 2019, 12:15

Unisono Lecture: Unisono Link

Unisono Exercise: Unisono Link

The lecture is held in English. 

Exercises

Location: Room H-F 115  Hölderlinstraße 3

Time and Date: Monday 14:15 - 16:00

Start: April 8th, 2019
Exercise Webpage: Link
The lecture is accompanied by weekly exercises to solidify understanding of the material. The exercise sheets consist of two parts, theoretical and programming exercises.
The exercise sheets will be put online on the exercise page each Monday, and you have one week to solve them. The submission deadline is the following Friday at 18:00, either in the letterbox in front of H-A 7116 or via email. The solutions will be discussed in the exercises on the next Monday.

Fast Optimization Challenge

During the course of the lecture, we will pose a challenge to solve an optimization problem as quickly as possible. The challenge ends on Friday, 21.06., at 23:59. The best solution will receive a prize. The challenge will be good preparation for the final exam!

Submission instructions: The source code should be sent via e-mail to michael.moeller@uni-siegen.de
 

Challenge: To be announced in the lecture

Leaderboard

Name             Runtime  Method
Michael Moeller  604 s    Gradient descent (fixed step size)
Exam

The exam will be oral.

Practice Manager: 
Exercise operational: 

Please refer to the exercise webpage. For a password, please contact jonas.geiping@uni-siegen.de.

Semester: 
Winter Term 2018/2019
Lecturer: 
Place/Time: 
Lecture: Tuesdays 10:15-12:00 in H-F 104/05, Exercises: Friday 10:15-12:00 in H-C 6336/37
SWS/LP: 
2+2 / 5
Recommended for: 
Master students in informatics being interested in visual computing
News: 

Note that you need to register for this lecture's exam in UNISONO by 24.01.2019; otherwise it cannot be graded and no credit can be given!

This course will give an introduction to basic numerical methods that you will need in the field of visual computing and well beyond.  Topics we will cover in the course include

  • Error analysis and the condition of a problem: How accurately can I expect to determine a solution?
  • Linear equations: How to solve them efficiently?
  • Linear regression: How do I fit a (linear) parametric model to some measured data?
  • Nonlinear equations: Using Newton's method to solve nonlinear equations (see the sketch after this list).
  • Nonlinear optimization: How do I apply Newton's method to smooth optimization problems?
  • Computation of eigenvalues: Which algorithm allows me to compute the eigenvalues of a matrix?
  • Interpolation: How can I interpolate given data points with polynomials?
  • Integration: How do quadrature rules for numerical integration work?
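
As a small preview of the programming part of the homework, here is a minimal sketch of Newton's method for a scalar nonlinear equation f(x) = 0 (a generic illustration, not an assignment solution):

    def newton(f, df, x0, tol=1e-10, max_iter=50):
        """Solve f(x) = 0 via Newton's method: x_{k+1} = x_k - f(x_k)/f'(x_k)."""
        x = x0
        for _ in range(max_iter):
            step = f(x) / df(x)
            x -= step
            if abs(step) < tol:
                break
        return x

    # Example: the positive root of f(x) = x^2 - 2, i.e. sqrt(2) ~ 1.41421356
    root = newton(lambda x: x**2 - 2, lambda x: 2 * x, x0=1.0)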

The course will have weekly homework on the theory as well as the implementation of the methods we discuss. We will discuss the solution to the homework in the weekly exercises. The exercise webpage can be found here. For any questions regarding the exercise or the lecture (including the password for the exercise page), please email Jonas Geiping at jonas.geiping@uni-siegen.de.

Besides the lecture slides I recommend the following sources for further readings

(English):

  • Press, W. H., Teukolsky, S. A., Vetterling, W. T., & Flannery, B. P. (2007). Numerical recipes 3rd edition: The art of scientific computing. Cambridge university press 
    (Very detailed introduction to numerical methods from a quite applied  perspective, available in the university library)
  • Kincaid, D., Kincaid, D. R., & Cheney, E. W. (2009). Numerical analysis: mathematics of scientific computing (Vol. 2). American Mathematical Soc.  
    (Also a very detailed reference manual, but with a mathematical perspective)
  • Turner, P. R., Arildsen, T., & Kavanagh, K. (2018). Applied Scientific Computing: With Python. Springer 
    (Introduction to numerical topics with examples in Python, electronically available via the university library)
  • Lecture Notes on Numerical Analysis (Peter J. Olver, University of Minnesota, 2008), available online 

     

(German):

Practice Manager: 
Exercise operational: 

All information about the  exercise can be found at the exercise webpage.

Semester: 
Winter Term 2018/2019
Lecturer: 
Place/Time: 
Lecture: Tuesdays 14:15-16:00 in H-C 6336/37, Wednesdays 10:15-12:00 in PB-H 0103, Exercise: Mondays 14:15-16:00 in H-A 7118. The class starts on Tuesday October 9th.
SWS/LP: 
2+2SWS, 5 Credits
Recommended for: 
Master students in informatics, master students in mechatronics
News: 

EXAM:
Date: Monday, March 4th, 9:30 - 11:00
Location: H-C 3305

Q&A session in the exercise class on Monday, 28.01., 14:15 - 16:00: please prepare questions and send them to hartmut.bauermeister@uni-siegen.de.

NEW EVENT: Deep Learning in Industry

On Tuesday, December 18th, we will replace the usual lecture with talks by external speakers from industry, who will tell you how and where deep learning can be used in industrial applications. Our speakers are

  • Dr. Rachel Hegemann, Data Scientist at Deutsche Bahn AG, speaking on Digital Image Processing and Machine Learning in Applications.
  • Boris Lütke Schelhove, Senior Product Owner Business Innovations at OBI Group Holding, giving a talk entitled "Beyond Product Recommendations - Deep Learning in Retail"
  • Dr. Christopher Pinke, Artificial Intelligence & Robotics Labs, Continental, giving a talk entitled "Mobility made easy"

Each speaker will talk for 25 minutes, with 5-10 minutes for questions afterwards. After the three talks, we'll have a coffee break in which you will have the chance to meet and talk to the speakers in person. I am very happy to have three excellent industrial researchers give you some insights into image processing and machine learning in practice!

The event will be at the Haardter-Berg-Schule (https://www.uni-siegen.de/start/news/bau/aktuelles/755374.html) in room AR-HB 0204, which is in the basement (two floors down). The talks will start at 14:10! Please make sure to be on time!

The Wednesday exercise classes are canceled without substitution; please attend the classes on Mondays!

Due to the tremendous interest in the class (with >40 students), we cannot all fit in the original classroom H-C 6336 on Tuesdays! Therefore, I have decided to change the format of the class to a flipped classroom! I will record (relatively condensed) videos covering the theory of the course's material beforehand, and will do exemplary computations, show proofs, have discussions about the video content, and answer questions in our meetings.

The meetings are now extended: You can either come on

  • Tuesdays 14:15-16:00 to Room H-C 6336, or on
  • Wednesdays 10:15-12:00 to Room PB-H 0103.

If you are free at both times, please attend the Wednesday meeting. I will cover the same examples on Tuesday and Wednesday - there is no reason to attend both classes.

I will upload videos with new lecture content on Fridays and expect you to have watched them by Tuesday/Wednesday! The links to the video lectures can be found in the password-protected area.

Similarly, we cannot all fit into the exercise room H-A 7118 on Monday. Therefore, Hartmut will offer an additional exercise on Wednesdays from 14:15 - 16:00 in the same room H-A 7118! Please attend the latter if this is easily possible for you.

Please make sure you are registered for the class in UNISONO! Otherwise you cannot get credits for this course! The final exam will be written! It will take 90 minutes!


Deep learning is one of the most successful recent techniques in computer vision and automated data processing in general. The basic idea of supervised machine learning is to define a parameterized function, called a network, and optimize the parameters in such a way that the resulting function maps given inputs x to desired outputs y on a training set of pairs (x,y) -- a process referred to as training the network. The term deep learning describes the design of network architectures as a deeply nested function of simple building blocks. The ultimate goal of machine learning is to approximate the true underlying but unknown relation between the input and the output, such that the trained network is able to make good predictions even on examples outside of the training set -- an aspect referred to as generalization.

The quickly evolving (if not exploding) field of deep learning has led to amazing applications. Here is a fascinating example of combining computer graphics, audio, and video processing with deep learning: https://www.youtube.com/watch?v=9Yq67CjDqvw

This lecture will give an introduction to deep learning, describe common building blocks in the network architectures, introduce optimization algorithms for their training, and discuss strategies that improve the generalization. In particular, we will cover the following topics:

- Supervised machine learning as an interpolation problem
- Simple network architectures: Fully connected layers, rectified linear units, sigmoids, softmax
- Gradient descent for nested functions: The chain rule and its implementation via backpropagation
- Stochastic gradient descent on large data sets, acceleration via momentum and ADAM
- Capacity, overfitting and underfitting of neural networks
- Training, testing, and validation data sets
- Improving generalization: data augmentation, dropout, early stopping
- Working with images: Convolutions and pooling layers. Computing derivatives and adjoint linear operators
- Getting the network to train: Data preprocessing, weight initialization schemes, and batch normalization
- Applications and state-of-the-art architectures for image classification, segmentation, and denoising
- Architecture designs: Encoder-decoder idea, unrolled algorithms, skip connections + residual learning, recurrent neural networks
- Implementations in NumPy and PyTorch: Hands-on practical experience by implementing gradient descent on a fully connected network in NumPy. Introduction to the deep learning framework PyTorch for training complex models on GPUs.

Besides the lecture notes, the relevant literature for this course includes:

- "Deep Learning" by Ian Goodfellow, Yoshua Bengio and Aaron Courville (freely available online at http://www.deeplearningbook.org/)
- Coursera course "Machine Learning" by Andrew Ng
Further references to recent literature will be given in the lecture.

The discussion on the number of kinks (respectively, the number of linear regions) in network architectures can be found here: https://arxiv.org/pdf/1402.1869.pdf

 
Practice Manager: 
Exercise operational: 

Please refer to  Exercise and Materials. If you need a password, please contact hartmut.bauermeister@uni-siegen.de.

Semester: 
Summer Term 2018
Lecturer: 
Place/Time: 
Lecture: Monday 12:15 - 14:00 o'clock in room H-F 115, Tuesday 12:15 - 14:00 o'clock in room H-F 115. Exercises: Thursday 16:00 - 17:30 o'clock in room H-F 104/05
SWS/LP: 
4+2SWS/10LP
Recommended for: 
Master students in informatics, interested in optimization, mathematics and computer vision
News: 
  • Time and place of the exercise class has been changed to Thursday, 16:00 - 17:30 in room H-F 104/05!

  • The submission deadline for the exercise sheets has been postponed to Tuesday.

Being able to determine the argument that minimizes a (possibly nonsmooth) convex cost function efficiently is of great practical relevance. For example, convex variational methods are one of the most powerful techniques for many computer vision and image processing problems, e.g. denoising, deblurring, inpainting, stereo matching, optical flow computation, segmentation, or super resolution. In this lecture we will discuss first order convex optimization methods to implement and solve the aforementioned problems efficiently. Particular attention will be paid to problems including constraints and non-differentiable terms, giving rise to methods that exploit the concept of duality such as the primal-dual hybrid gradient method or the alternating directions methods of multipliers. This lecture will cover the mathematical background for proving why the investigated methods converge as well as their efficient practical implementation.

Convex Optimization

We will cover the following topics:

Mathematical background

  • Convex sets and functions
  • Existence and uniqueness of minimizers
  • Subdifferentials
  • Convex conjugates
  • Saddle point problems and duality

Numerical methods

  • (Sub-)Gradient descent schemes
  • Proximal point algorithm
  • Primal-dual hybrid gradient method
  • Augmented Lagrangian methods
  • Acceleration schemes, adaptive step sizes, and heavy ball methods

Example applications in computer vision and signal processing problems, including

  • Image denoising, deblurring, inpainting, segmentation
  • Implementation in MATLAB

Lecture

Location:  Room H-F 115, Hölderlinstraße 3

Time and Date: Monday 12:15 - 14:00, Tuesday 12:15 - 14:00

Start: April 9th, 2018, 12:15

Unisono: Unisono Link

The lecture is held in English. 

Exercises

Location: Room H-F 104/05  Hölderlinstraße 3

Time and Date: Thursday 16:00 - 17:30

Start: April 9th, 2018
The lecture is accompanied by weekly exercises to solidify understanding of the material. The exercise sheets consist of two parts, theoretical and programming exercises.
The exercise sheets will be put online on the exercise page each Tuesday, and you have one week to solve them. The submission deadline is usually the following Tuesday at 18:00, either in the letterbox in front of H-A 7116 or via email. The solutions will be discussed in the exercises on the next Thursday.

Fast Optimization Challenge

During the course of the lecture, we will pose a challenge to solve an optimization problem as quickly as possible. The challenge ends on Monday, 18.06., at 23:59. The best solution will receive a prize. The challenge will be good preparation for the final exam!

Submission instructions: The source code should be sent via e-mail to michael.moeller@uni-siegen.de
 

Challenge: To be announced

Leaderboard

Name             Runtime  Method
Michael Moeller  604 s    Gradient descent (fixed step size)
Exam

The exam will be oral.

Exercise operational: 

Please refer to  Exercise and Materials. If you need a password, please contact hartmut.bauermeister@uni-siegen.de.

Semester: 
Winter Term 2017/2018
Lecturer: 
Place/Time: 
Tuesdays 10:15-12:00 in room HF 104/105, exercise Fridays 10:15-12:00 in room H-C 6336
SWS/LP: 
2+2 / 5
Recommended for: 
Master students in informatics being interested in visual computing

In the winter semester 2017/18 we will be offering a course on numerical methods for visual computing. This course will give an introduction to basic numerical methods that you will need in the field of visual computing and well beyond.  Topics we will cover in the course include

  • Error analysis and the condition of a problem: How accurately can I expect to determine a solution?
  • Linear equations: How to solve them efficiently?
  • Linear regression: How do I fit a (linear) parametric model to some measured data?
  • Nonlinear equations: Using Newton's method to solve nonlinear equations.
  • Nonlinear optimization: How do I apply Newton's method to smooth optimization problems?
  • Computation of eigenvalues: Which algorithm allows me to compute the eigenvalues of a matrix?
  • Interpolation: How can I interpolate given data points with polynomials?
  • Integration: How do quadrature rules for numerical integration work?

The course will have weekly homework on the theory as well as the implementation of the methods we discuss. We will discuss the solution to the homework in the weekly exercises.

Here you can find the exercise sheets and slides from the course.

Besides the lecture slides I recommend the following sources for further readings (in German):

Semester: 
Winter Term 2017/2018
Lecturer: 
Place/Time: 
Mondays 8:15 - 10:00 and Tuesdays 14:15 - 16:00 in room H-C 6336
SWS/LP: 
4+2 / 10
Recommended for: 
Master students in informatics being interested in computer vision and math
Conditions: 
interest in computer vision and math

In the winter semester 2017/18 we are offering a master student lecture on Variational Methods for Computer Vision. This lecture will give an introduction and overview of how to tackle different image and video processing problems with energy minimization methods. 

 
The course will include two lectures per week, as well as weekly homework to be discussed in the weekly exercises.
 

Here you can find the material for the course, exercises, slides, etc. 

 
A brief description of the content to be discussed
The general idea of energy minimization methods is to define an energy E, mapping an image to a real number, in such a way that a low energy corresponds to an image with desired properties. As a simple example, let us consider the task of image denoising. Assume you are given the following image
 
 
and someone asks you to remove the noise from this image. The noise-free image you are looking for should look similar to the noisy image you are given (after all, you still want to see motorcycles). You could come up with the idea of requiring ||u - f|| to be small, where u denotes your desired denoised image and f the noisy input image. On the other hand, you want the new image to contain fewer oscillations, so one could argue that the gradient of u should be small, too. This leads to the energy minimization approach

E(u) = ||u - f||^2 + λ ||∇u||^2,   λ > 0,

in which the first term keeps u close to f and the second penalizes oscillations. The argument (= image) u that minimizes the above energy already yields quite nice results.
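
A minimal NumPy sketch of minimizing this quadratic energy by gradient descent (the weight lam and the step size tau are illustrative choices; noisy_image is a hypothetical 2D array):

    import numpy as np

    def denoise(f, lam=1.0, tau=0.05, steps=300):
        """Gradient descent on E(u) = ||u - f||^2 + lam * ||grad u||^2."""
        u = f.astype(float).copy()
        for _ in range(steps):
            up = np.pad(u, 1, mode='edge')  # Neumann boundary conditions
            lap = (up[:-2, 1:-1] + up[2:, 1:-1]
                   + up[1:-1, :-2] + up[1:-1, 2:] - 4 * u)  # discrete Laplacian
            grad_E = 2 * (u - f) - 2 * lam * lap  # gradient of the energy
            u -= tau * grad_E                     # descent step
        return u

    # Hypothetical usage: u = denoise(noisy_image)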

 

In the lecture we will discuss the idea and several applications of energy minimization methods in more detail. If one understands an image as a function, the corresponding energy is a functional, i.e., a function which maps functions to real numbers, giving rise to the name variational method. We will cover these aspects and extend the gradient penalization term from the above example to a more general concept called regularization. In the course of the lecture we will discuss how to formulate solutions to the following imaging and computer vision problems as variational methods:
  • Denoising (as seen above)
  • Deblurring, e.g. correcting camera movements during image recordings. As an example, one can recover the lower right image from the lower left image:
         

 

  • Super resolution, i.e. increasing the resolution of one or several images
  • Inpainting, i.e. filling in missing, lost or damaged parts as illustrated below
 
  
 
  • Demosaicing - a particular type of inpainting required in every standard digital camera (where standard means the camera has a color filter array)
  • Segmentation, i.e., the automatic detection of certain areas of interest. Below you find an example image for automatically detecting the green contours, which are supposed to enclose cells from phase-contrast microscopy imaging.
  • Stereo Vision, i.e., the estimation of a depth map (right image) from two (rectified) views of the same scene (middle and left images):
     
  • Optical Flow, i.e., the estimation of motion between two images or successive frames of a video. Optical flow results are often color-coded, where the color indicates the direction and the brightness represents the magnitude. This can lead to quite funny illustrations of the results; see for instance this YouTube video by the group of Thomas Pock.
  • 3D Reconstruction, i.e., the estimation of the 3D geometry of a scene from several images from different (known) viewpoints. 
  • Hyperspectral image unmixing, i.e. using the spectral information about the reflectance in more than 100 different frequencies of light to determine the amounts of certain materials visible in the image:
     
  • Possibly more - depending on the time we have. :-)
 
 
Image credits: The first motorcycle image is part of the Kodak image database. The peppers image is a built-in image in Matlab with copyrights by MathWorks. The phase-contrast microscopy image was provided by Prof. Dr. Albrecht Schwab and his research group. The motorcycle used for the stereo reconstruction is part of the Middlebury stereo data set. The hyperspectral unmixing example is part of the IEEE Transactions on Image Processing publication A convex model for nonnegative matrix factorization and dimensionality reduction on physical space.
 

Semester: 
Summer Term 2017

What, where, when? Summer academy of the Studienstiftung des Deutschen Volkes, Roggenburg, 27.08.2017 - 02.09.2017

Title of the working group: "Konvexe Optimierung in der Datenanalyse" (Convex Optimization in Data Analysis)

Lecturers: Daniel Cremers, Michael Möller

 
 

 

 

Description:
The enormous advances in the development of new sensor and measurement technologies mean that, in almost all areas of science, data acquisition is no longer the limiting factor for gaining new insights. For example, more than 1.8 billion images are shared via social media on the internet every day. Making use of the information contained in the vast amounts of available data poses ever greater challenges for the numerical algorithms that analyze them.

A large number of interesting analysis methods, in particular in the areas of image processing and computer vision, boil down to convex optimization problems. In the course, participants will learn the fundamentals of convex optimization and implement them numerically on the computer using practical examples. Basic notions such as convex sets and convex functions will be introduced. We will cover optimization under constraints, the concept of duality, and a number of numerical optimization methods, in particular projected gradient descent and primal-dual algorithms. In project work, the presented algorithms will be implemented numerically and applied to problems in mathematical image processing. Examples will include, among others, enhancing image data by denoising as well as the automatic extraction of certain objects in images, known as image segmentation.

The course is aimed at students and doctoral candidates with a sufficient mathematical background, in particular mathematicians, physicists, computer scientists, and electrical engineers. Some familiarity with the programming language Matlab is desirable but not necessary.

We will shortly describe the projects for the group work on this page.
