October 7, 2020, 4 pm
Meeting ID: 807 949 5175
After October 7, 2020, lectures will be held on Wednesdays, 4-6 pm.
Meeting ID: 848 6765 0183
The goal of the course is to introduce tensor numerical methods designed for the solution of multidimensional problems in scientific computing and data analysis. These methods are based on the rank-structured approximation of multivariate functions and operators using appropriate tensor decompositions (formats). The established and more recent rank-structured tensor formats are presented: canonical, Tucker, hierarchical, tensor train, and quantized tensor train formats, together with their generalizations, which lead to tensor networks, i.e. representations of high-dimensional tensors as low-dimensional tensors interconnected in a variety of ways. Under suitable conditions these formats allow a stable representation and a reduction of the data size from exponential complexity (with respect to the dimension of the space) to linear complexity, thus overcoming the curse of dimensionality. Another goal of the course is to present a variety of relatively novel unsupervised machine learning methods using matrix and tensor decompositions, such as latent variable analysis based on (1) statistical independence and (2) sparsity assumptions, kernel and generalized principal component analysis, and non-negative tensor decomposition, all of which reveal hidden patterns in data.
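The storage reduction mentioned above can be sketched with a minimal illustrative example (not part of the course materials): a rank-1 canonical (CP) representation of a d-dimensional tensor stores d factor vectors of length n, i.e. d*n numbers, instead of the n**d entries of the full tensor.

```python
import numpy as np

# Illustrative sketch: a rank-1 canonical tensor in d dimensions is the outer
# product of d vectors, so its storage is d*n numbers instead of n**d entries.
d, n = 6, 10
factors = [np.random.rand(n) for _ in range(d)]  # d*n = 60 numbers

# Reconstruct the full tensor by successive outer products (n**d = 10**6 entries).
full = factors[0]
for f in factors[1:]:
    full = np.multiply.outer(full, f)

print(full.size)                     # 1000000 entries in the full format
print(sum(f.size for f in factors))  # 60 numbers in the canonical format
```

For higher canonical ranks the count grows only linearly, as r*d*n, which is the linear-in-dimension complexity referred to above.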
The objective of this course is to present the theory of tensor methods and decompositions combined with algorithms for the practical analysis of big data. The course will help students solve high-dimensional problems arising in data mining, image and signal processing, computational biomedicine, etc.
List of Course Outcomes:
By the end of the semester the students will be familiar with the various tensor formats: canonical, Tucker, hierarchical, tensor train, and quantized tensor train formats, their generalizations, and tensor networks. They will be able to apply algorithms that decompose high-dimensional tensors. The students will learn the basic theory and some applications of kernel and generalized principal component analysis, independent component analysis, sparse component analysis, and non-negative tensor decomposition, and will be in a position to apply the corresponding algorithms to analyze practical data.
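As a taste of such decomposition algorithms, the following sketch (an illustration assumed here, not a course algorithm) computes a Tucker decomposition of a 3-way array via the higher-order SVD (HOSVD): each factor matrix comes from the SVD of a mode unfolding, and the core is obtained by multiplying the tensor with the transposed factors along each mode.

```python
import numpy as np

def hosvd(X):
    """Full-rank HOSVD: return core tensor G and factor matrices U so that
    X equals G multiplied by each U[k] along mode k."""
    U, G = [], X
    for mode in range(X.ndim):
        # Unfold X along `mode` and take the left singular vectors.
        unfolding = np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)
        u, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        U.append(u)
        # Mode-k product of the core with u.T (contract u.T with mode `mode`).
        G = np.moveaxis(np.tensordot(u.T, np.moveaxis(G, mode, 0), axes=1), 0, mode)
    return G, U

X = np.random.rand(4, 5, 6)
G, U = hosvd(X)

# Reconstruct by multiplying the core with each factor along its mode; with
# full-rank orthogonal factors the decomposition is exact.
Y = G
for mode, u in enumerate(U):
    Y = np.moveaxis(np.tensordot(u, np.moveaxis(Y, mode, 0), axes=1), 0, mode)
print(np.allclose(X, Y))  # True
```

Truncating the factor matrices to the leading singular vectors turns the same scheme into a low-rank Tucker approximation.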
Prerequisites: The students are expected to have knowledge of calculus, matrix and linear algebra, and optimization.
Literature:
- Boris N. Khoromskij, Tensor Numerical Methods in Scientific Computing (Radon Series on Computational and Applied Mathematics), De Gruyter, 2018.
- Wolfgang Hackbusch, Tensor Spaces and Numerical Tensor Calculus, Springer-Verlag, 2012.
- Andrzej Cichocki, Rafal Zdunek, Anh Huy Phan, Shun-ichi Amari, Nonnegative Matrix and Tensor Factorizations, John Wiley & Sons, 2009.