Universität Wien

250085 VO Tensor methods for data science and scientific computing (2022W)

6.00 ECTS (4.00 SWS), SPL 25 - Mathematik
ON-SITE

Registration/Deregistration

Note: The time of your registration within the registration period has no effect on the allocation of places (no first come, first served).

Details

max. 25 participants
Language: English

Examination dates

Lecturers

Classes

Tuesday 04.10. 15:00 - 16:30 Seminarraum 7 Oskar-Morgenstern-Platz 1 2nd floor
Wednesday 05.10. 11:30 - 13:00 Seminarraum 7 Oskar-Morgenstern-Platz 1 2nd floor
Tuesday 11.10. 15:00 - 16:30 Seminarraum 7 Oskar-Morgenstern-Platz 1 2nd floor
Wednesday 12.10. 11:30 - 13:00 Seminarraum 7 Oskar-Morgenstern-Platz 1 2nd floor
Tuesday 18.10. 15:00 - 16:30 Seminarraum 7 Oskar-Morgenstern-Platz 1 2nd floor
Wednesday 19.10. 11:30 - 13:00 Seminarraum 7 Oskar-Morgenstern-Platz 1 2nd floor
Tuesday 25.10. 15:00 - 16:30 Seminarraum 7 Oskar-Morgenstern-Platz 1 2nd floor
Tuesday 08.11. 15:00 - 16:30 Seminarraum 7 Oskar-Morgenstern-Platz 1 2nd floor
Wednesday 09.11. 11:30 - 13:00 Seminarraum 7 Oskar-Morgenstern-Platz 1 2nd floor
Tuesday 15.11. 15:00 - 16:30 Seminarraum 7 Oskar-Morgenstern-Platz 1 2nd floor
Wednesday 16.11. 11:30 - 13:00 Seminarraum 7 Oskar-Morgenstern-Platz 1 2nd floor
Tuesday 22.11. 15:00 - 16:30 Seminarraum 7 Oskar-Morgenstern-Platz 1 2nd floor
Wednesday 23.11. 11:30 - 13:00 Seminarraum 7 Oskar-Morgenstern-Platz 1 2nd floor
Tuesday 29.11. 15:00 - 16:30 Seminarraum 7 Oskar-Morgenstern-Platz 1 2nd floor
Wednesday 30.11. 11:30 - 13:00 Seminarraum 7 Oskar-Morgenstern-Platz 1 2nd floor
Tuesday 06.12. 15:00 - 16:30 Seminarraum 7 Oskar-Morgenstern-Platz 1 2nd floor
Wednesday 07.12. 11:30 - 13:00 Seminarraum 7 Oskar-Morgenstern-Platz 1 2nd floor
Tuesday 13.12. 15:00 - 16:30 Seminarraum 7 Oskar-Morgenstern-Platz 1 2nd floor
Wednesday 14.12. 11:30 - 13:00 Seminarraum 7 Oskar-Morgenstern-Platz 1 2nd floor
Tuesday 10.01. 15:00 - 16:30 Seminarraum 7 Oskar-Morgenstern-Platz 1 2nd floor
Wednesday 11.01. 11:30 - 13:00 Seminarraum 7 Oskar-Morgenstern-Platz 1 2nd floor
Tuesday 17.01. 15:00 - 16:30 Seminarraum 7 Oskar-Morgenstern-Platz 1 2nd floor
Wednesday 18.01. 11:30 - 13:00 Seminarraum 7 Oskar-Morgenstern-Platz 1 2nd floor
Tuesday 24.01. 15:00 - 16:30 Seminarraum 7 Oskar-Morgenstern-Platz 1 2nd floor
Wednesday 25.01. 11:30 - 13:00 Seminarraum 7 Oskar-Morgenstern-Platz 1 2nd floor
Tuesday 31.01. 15:00 - 16:30 Seminarraum 7 Oskar-Morgenstern-Platz 1 2nd floor

Information

Aims, contents and method of the course

This course will cover the basics of low-rank tensor decompositions, a modern computational tool for large-scale problems. Applications to be discussed in the course come from areas such as data science, quantitative neuroscience, spectroscopy, psychometrics, arithmetic complexity and data compression; some of the most illustrative applications, however, belong to the field of scientific computing. The course will first cover the canonical polyadic, Tucker, block-term and tensor-train decompositions from a linear-algebraic perspective and then turn to the use of low-rank tensor decompositions in computational mathematics. In this second part, the course will focus on the tensor-train (TT) decomposition, originally developed under the name of matrix product states (MPS) in computational quantum physics. This tensor decomposition arises naturally from low-rank refinement in the construction of finite-element approximations of functions and will be presented in this way in the course. In particular, in the context of second-order linear elliptic problems, the low-rank approximability of functions will be analyzed in dependence on their regularity, and state-of-the-art methods for preconditioning and solving the resulting optimality equations (linear systems) will be covered, including the construction, implementation and numerical analysis of such methods.
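To fix ideas (an illustrative definition added to this summary, not part of the official course description): in the TT format, a d-dimensional array A is represented entrywise by a product of matrices,

    A(i_1, ..., i_d) = G_1(i_1) G_2(i_2) ... G_d(i_d),

where each G_k(i_k) is a matrix of size r_{k-1} x r_k with r_0 = r_d = 1, so that the product is a scalar. The storage cost is then governed by the «TT ranks» r_k rather than by the exponentially large number of entries of A.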

***

This course spotlights the intersection of two areas of modern applied mathematics:
* low-rank approximation and analysis of abstract data represented by multi-dimensional arrays, and
* adaptive numerical methods for solving PDE problems.
However disjoint these two areas may seem, the idea of exactly representing or approximating «data» in a suitable low-dimensional subspace of a large (possibly infinite-dimensional) space is equally natural in both. The notions of matrix rank and of low-rank matrix approximation, familiar from basic courses in linear algebra, are central to one of the many possible expressions of this idea.
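To make the matrix case concrete, here is a minimal sketch in Python with NumPy (an illustration added to this summary, not taken from the course materials; the helper name truncated_svd and the test function f(x, y) = exp(-xy) are illustrative choices): by the Eckart–Young theorem, truncating the singular value decomposition yields the best rank-r approximation of a matrix in both the spectral and Frobenius norms.

    import numpy as np

    def truncated_svd(A, r):
        # Best rank-r approximation of A: keep the r leading singular
        # triplets (optimal in the spectral and Frobenius norms by the
        # Eckart-Young theorem).
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        return U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

    # A 100 x 100 sampling of the smooth function f(x, y) = exp(-x*y)
    # is well approximated already at very small rank.
    x = np.linspace(0.0, 1.0, 100)
    A = np.exp(-np.outer(x, x))
    for r in (1, 2, 4, 8):
        err = np.linalg.norm(A - truncated_svd(A, r)) / np.linalg.norm(A)
        print(f"rank {r}: relative error {err:.2e}")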

In psychometrics, signal processing, image processing and (vaguely defined) data mining, low-rank tensor decompositions have been studied as a way of formally generalizing the notion of rank from matrices to higher-dimensional arrays (tensors). Several such generalizations have been proposed, including the canonical polyadic (CP) and Tucker decompositions and the tensor SVD, with the primary motivation of analyzing, interpreting and compressing datasets. In this context, data are often thought of as parametrizations of images, video, social networks or collections of interconnected texts; data representing functions (which often occur in computational mathematics), on the other hand, are remarkable in that they admit precise analysis.
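As an illustration of the CP decomposition (a minimal sketch, assuming the standard alternating-least-squares fitting procedure rather than anything specific to this course; the name cp_als and the random test tensor are hypothetical): a rank-R CP approximation writes a 3-way array as a sum of R outer products of vectors, and ALS updates one factor matrix at a time by solving a linear least-squares problem with the other two fixed.

    import numpy as np

    def cp_als(T, R, n_iter=100, seed=0):
        # Alternating least squares for T[i,j,k] ~ sum_r A[i,r]*B[j,r]*C[k,r].
        # Each update solves the normal equations of a linear least-squares
        # problem; the Gram matrix of a Khatri-Rao product is the Hadamard
        # product of the factors' Gram matrices.
        rng = np.random.default_rng(seed)
        I, J, K = T.shape
        A = rng.standard_normal((I, R))
        B = rng.standard_normal((J, R))
        C = rng.standard_normal((K, R))
        for _ in range(n_iter):
            A = np.einsum('ijk,jr,kr->ir', T, B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
            B = np.einsum('ijk,ir,kr->jr', T, A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
            C = np.einsum('ijk,ir,jr->kr', T, A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
        return A, B, C

    # Recover an exactly rank-2 tensor built from random factors.
    rng = np.random.default_rng(1)
    A0, B0, C0 = (rng.standard_normal((n, 2)) for n in (6, 7, 8))
    T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
    A, B, C = cp_als(T, R=2)
    print("residual:", np.linalg.norm(T - np.einsum('ir,jr,kr->ijk', A, B, C)))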

The tensor-train (TT) and the more general hierarchical Tucker decompositions were developed more recently, in the numerical-mathematics community and with particular attention to PDE problems. In fact, the very same and closely related representations had long been used for the numerical simulation of many-body quantum systems by computational chemists and physicists, under the names of «matrix product states» (MPS) and «multilayer multi-configuration time-dependent Hartree». These low-rank tensor decompositions are based on subspace approximation, which can be performed adaptively and iteratively, in a multilevel fashion. In the broader context of PDE problems, this leads to numerical methods that are formally based on generic discretizations but effectively operate on adaptive, data-driven discretizations constructed «online», in the course of computation. In several settings, such methods achieve the accuracy of sophisticated problem-specific methods.
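As a sketch of how such a representation can be computed in practice (an illustration assuming the standard TT-SVD algorithm of Oseledets, not a method prescribed by the course; the names tt_svd and tt_reconstruct and the 4-way test tensor are illustrative): a d-way array is swept through once, with a truncated SVD at each step separating one mode at a time, and the target accuracy eps is split evenly over the d-1 truncation steps.

    import numpy as np

    def tt_svd(T, eps=1e-10):
        # Decompose an ndarray into a chain of 3-way TT cores G[k] of
        # shape (r[k-1], n[k], r[k]) by successive reshapes and truncated
        # SVDs; the overall relative error is bounded by eps.
        dims = T.shape
        d = len(dims)
        delta = eps * np.linalg.norm(T) / np.sqrt(max(d - 1, 1))
        cores, r_prev = [], 1
        M = T.reshape(dims[0], -1)
        for k in range(d - 1):
            U, s, Vt = np.linalg.svd(M, full_matrices=False)
            tail = np.sqrt(np.cumsum(s[::-1] ** 2))[::-1]  # tail[i] = ||s[i:]||
            r = max(1, int(np.sum(tail > delta)))  # smallest rank meeting delta
            cores.append(U[:, :r].reshape(r_prev, dims[k], r))
            M = (s[:r, None] * Vt[:r, :]).reshape(r * dims[k + 1], -1)
            r_prev = r
        cores.append(M.reshape(r_prev, dims[d - 1], 1))
        return cores

    def tt_reconstruct(cores):
        # Contract the cores back into a full array.
        G = cores[0]
        for C in cores[1:]:
            G = np.tensordot(G, C, axes=([G.ndim - 1], [0]))
        return G.reshape([c.shape[1] for c in cores])

    # Example: a smooth 4-way tensor compresses to small TT ranks.
    n = 16
    g = np.arange(n) / n
    T = 1.0 / (1.0 + g[:, None, None, None] + g[None, :, None, None]
               + g[None, None, :, None] + g[None, None, None, :])
    cores = tt_svd(T, eps=1e-8)
    print("TT ranks:", [c.shape[2] for c in cores[:-1]])
    print("error:", np.linalg.norm(T - tt_reconstruct(cores)) / np.linalg.norm(T))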

***

The goal of the course is to introduce students to the foundations of modern low-rank tensor methods.
The course also aims to provide students with ample opportunities to start their own research.

Assessment and permitted materials

Oral examination with no aids («closed book»). Bonus points may be awarded for active participation and for work on optional projects and assignments.

Minimum requirements and assessment criteria

Examination topics

The theory and practice of the techniques covered in the course, as presented in the lectures.

Reading list


Association in the course directory

MAMV

Last modified: Mon 17.04.2023 11:49