Lecturers and abstracts
PhD Kallol Roy "Learning Disentangled Representations: from Methods to Applications"
University of Tartu, Faculty of Science and Technology, Institute of Computer Science
Deep neural networks are very successful at automatically extracting meaningful features from data. Manual feature engineering is often not required, as features are learned through end-to-end training; the focus instead shifts to designing the architecture of the network. But because of the complexity of deep neural networks, the extracted features are highly complex and not interpretable by humans. The model is treated as a black box, and the emphasis is put on external evaluation metrics such as training and test error. For critical applications (autonomous driving, cybersecurity), it would be highly beneficial to understand what kinds of hidden (latent) representations the model has actually learned. Several methods exist in the literature for learning meaningful hidden representations. In this talk, I will mainly look at the Variational Autoencoder (VAE), a deep generative model. VAEs with adjustable hyperparameters have been shown to disentangle simple data-generating factors from a highly complex input space. For example, when trained on images of faces, a VAE can learn to encode the direction of the lighting in a single hidden variable. At the end of the talk I will discuss how VAEs can be used to uncover bias in a dataset.
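The "adjustable hyperparameter" idea mentioned above is commonly realized as the β-VAE objective, where a weight β on the KL term pressures the encoder toward a factorized, disentangled latent code. A minimal sketch of that objective (not the speaker's specific model; the Gaussian reconstruction term and β = 4 are illustrative assumptions):

```python
import numpy as np

def kl_diag_gaussian(mu, logvar):
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dims."""
    return 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar)

def beta_vae_loss(x, x_recon, mu, logvar, beta=4.0):
    """beta-VAE objective: reconstruction error + beta * KL penalty.

    beta = 1 recovers the standard VAE; beta > 1 strengthens the pressure
    toward a disentangled latent representation."""
    recon = np.sum((x - x_recon) ** 2)  # Gaussian reconstruction term
    return recon + beta * kl_diag_gaussian(mu, logvar)

# Perfect reconstruction with a posterior exactly matching the prior
# (mu = 0, logvar = 0) incurs zero loss.
x = np.ones(8)
loss = beta_vae_loss(x, x, mu=np.zeros(2), logvar=np.zeros(2))
# loss == 0.0
```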
Assoc. Prof. Arun Singh "Gradient Descent is Good but Not Great: Exploiting Mathematical Structures for Faster Optimization in Robot Motion Planning and Control Problems"
University of Tartu, Faculty of Science and Technology, Institute of Technology
Algorithms like Gradient Descent (and its variants), Gauss-Newton, or Sequential Quadratic Programming are the workhorses of mathematical optimization and form a key foundation of many fields, ranging from signal processing to machine learning, computer vision, and robotics. These algorithms are very general and can be used to solve a large class of optimization problems. However, on the downside, they are not equipped to exploit niche structures, such as bi-convexity, in the underlying optimization problem.
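The generality described above is easy to see in code: plain gradient descent needs nothing from the problem but a gradient oracle. A minimal sketch (the quadratic objective and step size are illustrative choices, not from the talk):

```python
import numpy as np

def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Plain gradient descent: x_{k+1} = x_k - lr * grad(x_k).

    Works for any differentiable objective, but ignores any special
    structure (e.g. bi-convexity) the problem might have."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# Minimize f(x) = (x - 3)^2; its gradient is 2*(x - 3), minimizer at x = 3.
x_star = gradient_descent(lambda x: 2.0 * (x - 3.0), x0=[0.0])
```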
In this talk, I will present some of my group’s recent work, in which we show that some problems in robot motion planning and control that were considered generic nonlinear problems in fact have hidden structures, such as bi-convexity, that can be exposed through careful reformulation. This, in turn, motivates going beyond Gradient Descent and developing optimizers that can leverage these structures; to this end, concepts from the Alternating Direction Method of Multipliers (ADMM) offer attractive possibilities.
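The structural idea can be illustrated on a toy bi-convex problem: a function that is non-convex jointly but convex in each block of variables with the other fixed, so each block update has an exact closed-form minimizer. The sketch below uses plain alternating minimization (the core mechanism ADMM-style splitting builds on, not the group's actual formulation); the objective and constants are invented for illustration:

```python
def alternating_minimization(c=2.0, lam=0.1, iters=200):
    """Toy bi-convex problem: f(x, y) = (x*y - c)^2 + lam*(x^2 + y^2).

    f is non-convex in (x, y) jointly, but convex in x for fixed y and
    in y for fixed x, so each half-step is solved exactly in closed form
    (set the partial derivative to zero)."""
    x, y = 1.0, 1.0  # arbitrary starting point
    for _ in range(iters):
        x = c * y / (y * y + lam)  # exact argmin over x with y fixed
        y = c * x / (x * x + lam)  # exact argmin over y with x fixed
    return x, y

x, y = alternating_minimization()
# At the fixed point x = y and x*y = c - lam, i.e. x*y = 1.9 here.
```

Gradient descent on the same f would take many small, structure-blind steps; the alternating scheme exploits the per-block convexity to make globally optimal block moves.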
Prof. Samuel Pagliarini "A Lightweight Introduction to Hardware Security"
Tallinn University of Technology, School of Information Technologies, Department of Computer Systems
In this talk, Prof. Pagliarini outlines the current challenges in hardware security and discusses some of the well-known attacks that exist today: architectural attacks (Spectre/Meltdown), side-channel attacks, hardware Trojans, etc. Related issues, such as integrated circuit piracy and its countermeasures, are also briefly discussed.
Prof. Pawel Maria Sobocinski "Compositional methods - an introduction"
Tallinn University of Technology, School of Information Technologies, Department of Software Science
Compositionality is a concept from the field of programming language theory. Roughly speaking, it is about designing languages so that the mapping from syntax to semantics – their computational meaning – is homomorphic. Compositionality is an important ingredient of providing precise descriptions of computation, which is vital in ensuring correctness and trustworthiness. I will give an introduction, some examples, and a brief overview of the work carried out in the Compositional Systems and Methods Group at TalTech since November 2019.
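The homomorphism requirement can be made concrete with a tiny expression language: the meaning of a compound term is a fixed function of the meanings of its parts, never of their internal syntax. A minimal sketch (the language and its semantics are an illustrative assumption, not an example from the talk):

```python
from dataclasses import dataclass

# Syntax of a tiny expression language.
@dataclass
class Lit:
    value: int

@dataclass
class Add:
    left: object
    right: object

@dataclass
class Mul:
    left: object
    right: object

def meaning(expr):
    """Compositional (homomorphic) semantics: defined by structural
    recursion, so meaning(Add(a, b)) depends only on meaning(a) and
    meaning(b) -- the hallmark of compositionality."""
    if isinstance(expr, Lit):
        return expr.value
    if isinstance(expr, Add):
        return meaning(expr.left) + meaning(expr.right)
    if isinstance(expr, Mul):
        return meaning(expr.left) * meaning(expr.right)
    raise TypeError(f"unknown expression: {expr!r}")

e = Mul(Add(Lit(1), Lit(2)), Lit(4))
# meaning(e) == 12
```

Because the semantics is homomorphic, any subterm can be replaced by another with the same meaning without changing the meaning of the whole — the property that makes compositional reasoning about correctness possible.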