
Matematikcentrum, Lunds universitet


Deep Learning Smorgasbord

In connection with the LCCC focus period (http://www.lccc.lth.se) on Learning and Adaptation for Sensorimotor Control in October/November (http://www.lccc.lth.se/index.php?page=october-2018), we are organizing a PhD study circle on deep learning.

This can be seen as an independent continuation of the deep learning study circle that we organized in 2016. See its homepage http://www.control.lth.se/Education/DoctorateProgram/deep-learning-study-circle.html for more links and resources on deep learning.

This time the focus is on presentations of how deep learning (or machine learning) is being used in current research projects. The study circle takes place in the seminar room (M:2112B) at the Department of Automatic Control. The room is on the second floor of the M-building. There are 12 more or less independent presentations over three days (16-18 October). Each day there are four presentations: 10:15-11:00, 11:05-11:50, 13:15-14:00 and 14:05-14:50. We start with coffee at 10:00 and finish with coffee at 15:00.

Each 45-minute slot has one presenter (PhD student, postdoc, or guest).

For the PhD students, the presentations could cover:

* Theory (e.g. on GANs, autoencoders, CNNs, recurrent neural networks, ...)

* Research problem (motivation, question, data)

* A discussion of which deep learning network architectures to use.

* A discussion of the hardware/software used (single laptop, single desktop, GPU, cluster, cloud, etc.)

* Results, of course.

The aim of the course is to learn more theory, to learn from the different research examples, and to exchange practical research experience.

If you want to follow the 'studiecirkel' as a PhD course and get credit for it, contact Kalle Åström - kalle@maths.lth.se

 

=====================================

Programme

Room: M:2112B

16/10, 10:15 - Ida Arvidsson - Automatic Prostate Cancer Classification using Deep Learning

Slides used in the presentation: 

www2.maths.lth.se/matematiklth/personal/kalle/deeplearning2018/LCCC_DeepLearning_IA.pdf

For more information about autoencoders:

* read chapter 14 in the Deep Learning book by Goodfellow, Bengio and Courville: www.deeplearningbook.org

* study the slides from the 2016 study circle: http://www.control.lth.se/media/Education/DoctorateProgram/2016/Deep%20Learning/autoencoders.pdf
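
As a concrete complement to the links above, here is a minimal sketch of an undercomplete autoencoder in PyTorch. It is only an illustration, not the model from the talk; the layer sizes and the 784-dimensional inputs are arbitrary choices.

    import torch
    import torch.nn as nn

    # Undercomplete autoencoder: the bottleneck forces a compressed code.
    class Autoencoder(nn.Module):
        def __init__(self, n_in=784, n_code=32):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Linear(n_in, 128), nn.ReLU(),
                nn.Linear(128, n_code))
            self.decoder = nn.Sequential(
                nn.Linear(n_code, 128), nn.ReLU(),
                nn.Linear(128, n_in), nn.Sigmoid())

        def forward(self, x):
            return self.decoder(self.encoder(x))

    model = Autoencoder()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    x = torch.rand(64, 784)          # stand-in for a batch of image vectors
    for step in range(100):
        opt.zero_grad()
        loss = loss_fn(model(x), x)  # reconstruction loss: target is the input
        loss.backward()
        opt.step()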

For more on the prostate cancer research read:

Arvidsson, I., Overgaard, N. C., Marginean, F. E., Krzyzanowska, A., Bjartell, A., Åström, K. & Heyden, A. (2018). Generalization of prostate cancer classification for multiple sites using deep learning. 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), IEEE Computer Society, pp. 191-194.

Isaksson, J., Arvidsson, I., Åström, K. & Heyden, A. (2017). Semantic segmentation of microscopic images of H&E stained prostatic tissue using CNN. 2017 International Joint Conference on Neural Networks (IJCNN 2017), IEEE, pp. 1252-1256.

Gummeson, A., Arvidsson, I., Ohlsson, M., Overgaard, N. C., Krzyzanowska, A., Heyden, A., Bjartell, A. & Åström, K. (2017). Automatic Gleason grading of H&E stained microscopic prostate images using deep convolutional neural networks. Medical Imaging 2017: Digital Pathology, SPIE, Vol. 10140, 101400S.

16/10, 11:05 - David Gillsjö - Deep Learning examples

Slides used in the presentation: 

www2.maths.lth.se/matematiklth/personal/kalle/deeplearning2018/LCCC_DeepLearning_DG.pdf

For more information about Fast R-CNN, see

dl.acm.org/citation.cfm

arxiv.org/abs/1504.08083

github.com/rbgirshick/py-faster-rcnn
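
For those who want to try this detector family in practice, here is a minimal sketch using the pretrained Faster R-CNN that ships with torchvision (a later convenience API, not the code from the talk; the confidence threshold is an arbitrary choice):

    import torch
    import torchvision

    # Load a Faster R-CNN pretrained on COCO (downloads weights on first use).
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
    model.eval()

    # The model takes a list of CHW float tensors with values in [0, 1].
    image = torch.rand(3, 480, 640)   # stand-in for a real image
    with torch.no_grad():
        predictions = model([image])[0]

    # Keep detections above a confidence threshold.
    keep = predictions["scores"] > 0.8
    print(predictions["boxes"][keep], predictions["labels"][keep])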

 

16/10, 13:15 - Martin Karlsson - RNN for Detection and Control of Contact Force Transients in Robotic Manipulation.

First, the concept of recurrent neural networks (RNNs) is introduced.

This is then applied to recognizing robot joint torque sequences, in order to determine when robotic sub-tasks have finished. That way, it can be determined automatically when to move on to the next sub-task in the task sequence.
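
Here is a minimal sketch of the kind of sequence classifier this suggests: an LSTM that reads a window of joint torques and outputs the probability that the current sub-task has finished. The dimensions and threshold are illustrative assumptions, not the talk's actual model.

    import torch
    import torch.nn as nn

    class TransientDetector(nn.Module):
        """LSTM that maps a torque sequence to P(sub-task finished)."""
        def __init__(self, n_joints=7, n_hidden=64):
            super().__init__()
            self.lstm = nn.LSTM(n_joints, n_hidden, batch_first=True)
            self.head = nn.Linear(n_hidden, 1)

        def forward(self, torques):              # torques: (batch, time, n_joints)
            out, _ = self.lstm(torques)
            return torch.sigmoid(self.head(out[:, -1]))  # use last time step

    model = TransientDetector()
    torque_window = torch.randn(1, 100, 7)       # 100 samples from 7 joints
    if model(torque_window).item() > 0.5:
        print("transient detected: move to next sub-task")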

RNNs are described in the book Deep Learning:
https://www.deeplearningbook.org/contents/rnn.html

Here are fun RNN examples:
http://karpathy.github.io/2015/05/21/rnn-effectiveness/

Here is a video of our robot determining when snap-fit operations have finished. Sound is recommended for the last part:
https://www.youtube.com/watch?v=TE1q5lQr4nk

16/10, 14:05 - Fredrik Bagge - Neural networks for system identification.

Why are neural networks not used for every task in system identification today? We will look at how traditional tricks for training networks, such as weight decay, skip connections and the choice of activation function, affect the training and stability of dynamical models. We will also look at smarter ways to regularize a dynamical model.
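
Here is a minimal sketch of two of the ingredients above, a skip connection and weight decay, in a one-step-ahead predictor x(t+1) = f(x(t), u(t)). The sizes and hyperparameters are illustrative assumptions.

    import torch
    import torch.nn as nn

    class ResidualPredictor(nn.Module):
        """One-step-ahead model: x_next = x + f(x, u) (the skip connection)."""
        def __init__(self, n_x=4, n_u=1, n_hidden=32):
            super().__init__()
            self.f = nn.Sequential(
                nn.Linear(n_x + n_u, n_hidden),
                nn.Tanh(),                      # smooth activation helps stability
                nn.Linear(n_hidden, n_x))

        def forward(self, x, u):
            return x + self.f(torch.cat([x, u], dim=-1))

    model = ResidualPredictor()
    # weight_decay adds L2 regularization to the training objective
    opt = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)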

17/10, 10:15 - Maximilian Karl - Unsupervised Control

Empowerment has been shown to be a good model of biological behaviour in the absence of an extrinsic goal. It is defined as the channel capacity between actions and states and maximises the influence of an agent on its near future. It can be used to make robots balance and walk without the need to invent complex cost functions. We introduce an efficient method for computing empowerment and learning empowerment-maximising policies. Both methods require a model of the agent and its environment and benefit from system dynamics learned from raw data. For learning the system dynamics we use Deep Variational Bayes Filters (DVBF), a new method for unsupervised learning and identification of latent Markovian state space models. We show the ability to learn useful behaviour on various simulated robots, including biped balancing and lidar-based flock behaviour, but also on real robot hardware in the form of quadrocopters with local sensing and computing.
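
For a small discrete system, empowerment (the channel capacity from actions to future states) can be computed with the classical Blahut-Arimoto algorithm. Here is a toy sketch of that computation; the talk's DVBF-based method scales this idea to learned continuous dynamics and is not shown here.

    import numpy as np

    def empowerment(P, iters=200):
        """Channel capacity max over p(a) of I(A; S') via Blahut-Arimoto.
        P[a, s'] = probability of landing in state s' after action a."""
        n_a = P.shape[0]
        p = np.full(n_a, 1.0 / n_a)              # initial action distribution
        for _ in range(iters):
            q = p @ P                            # marginal over next states
            # KL divergence of each action's outcome from the marginal
            d = np.sum(P * np.log((P + 1e-12) / (q + 1e-12)), axis=1)
            p = p * np.exp(d)
            p /= p.sum()
        q = p @ P
        return np.sum(p * np.sum(P * np.log((P + 1e-12) / (q + 1e-12)), axis=1))

    # Toy channel: two actions with mostly distinct outcomes, capacity in nats.
    P = np.array([[0.9, 0.1],
                  [0.1, 0.9]])
    print(empowerment(P))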

 

17/10, 11:05 - Farnaz Adib Yaghmaie  - Reinforcement Learning for control of continuous-time systems

Machine learning can be divided into three categories: supervised learning, unsupervised learning and reinforcement learning (RL). Among these, RL is especially interesting, as it concerns learning optimal policies from interaction with an environment while receiving a cost. In this sense, RL implies a cause-and-effect relationship between policies and costs, and as such, RL-based frameworks enjoy both optimality and adaptivity. In this talk, we consider RL from a control perspective; that is, we consider RL techniques for dynamical systems with continuous state and control spaces. This is more demanding than RL for classical Markov Decision Processes (MDPs) with a finite number of state and control variables, since the stability of the dynamical system, as well as other control-related properties, needs to be guaranteed.
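
As a point of reference for what such RL schemes converge to, here is a minimal model-based policy iteration for discrete-time LQR; RL variants estimate the same evaluation and improvement steps from trajectory data instead of from A and B. The system matrices are illustrative assumptions.

    import numpy as np
    from scipy.linalg import solve_discrete_lyapunov

    # Policy iteration for discrete-time LQR: the model-based limit of many
    # RL-for-control schemes.
    A = np.array([[0.9, 0.1], [0.0, 0.9]])
    B = np.array([[0.0], [0.1]])
    Q, R = np.eye(2), np.eye(1)

    K = np.zeros((1, 2))                     # initial stabilizing policy u = -K x
    for _ in range(50):
        Acl = A - B @ K
        # Policy evaluation: cost-to-go x' P x of the current policy
        P = solve_discrete_lyapunov(Acl.T, Q + K.T @ R @ K)
        # Policy improvement (greedy with respect to the evaluated cost)
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

    print("converged gain K =", K)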

 

17/10, 13:15 - Aleksis Pirinen - Policy gradients in reinforcement learning.

In this presentation I'll try to give some intuition for policy gradients in reinforcement learning by discussing connections to supervised learning. I will also give a case study where policy gradients are used to learn a visual object localization policy for object detection. An intuitive introduction to policy gradients, upon which I base some of the content of my presentation, can be found here: karpathy.github.io/2016/05/31/rl/.
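
Here is a minimal sketch of the REINFORCE policy gradient on a toy two-armed bandit (chosen to keep the example self-contained; it is not from the presentation). The connection to supervised learning is visible in the loss: a log-likelihood of the sampled actions, weighted by reward.

    import torch

    # REINFORCE on a 2-armed bandit: arm 1 pays 1.0 on average, arm 0 pays 0.2.
    logits = torch.zeros(2, requires_grad=True)      # policy parameters
    opt = torch.optim.Adam([logits], lr=0.1)

    for step in range(200):
        probs = torch.softmax(logits, dim=0)
        action = torch.multinomial(probs, 1).item()  # sample from the policy
        reward = 0.1 * torch.randn(1).item() + (1.0 if action == 1 else 0.2)
        # Policy gradient: raise log-prob of the sampled action, scaled by reward.
        loss = -torch.log(probs[action]) * reward
        opt.zero_grad()
        loss.backward()
        opt.step()

    print(torch.softmax(logits, dim=0))   # should put most mass on arm 1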

 

17/10, 14:05 - Erik Gärtner - Intrinsic Motivation: Exploration, curiosity and learning for learning’s sake

18/10, 10:15 - Adrianna R. Loback - Biologically Plausible Observer Neural Network Models of Brain Areas Involved in Spatial Navigation

Many higher-order brain areas – including the hippocampus and posterior parietal cortex (PPC), which are involved in spatial navigation and sensorimotor control, respectively – have access to only indirect information about the environmental variables they represent, and are hence observers at the system theoretic level.  Motivated by recent experimental neuroscience results, and by the observer framework from control engineering, we seek in this work to develop a data-driven theoretical framework for biologically plausible observer neural network models of the PPC and hippocampus.  We show that a general observer neural network model can reconcile two key experimental findings. To incorporate biological plausibility constraints, we focus on recurrent neural network architectures, and plan to incorporate biologically relevant plasticity rules.
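
For readers unfamiliar with the control-engineering notion of an observer referenced above, here is a minimal classical Luenberger observer; the recurrent network models in the talk can be seen as learned, biologically constrained generalizations of this update. The matrices and gain are illustrative assumptions, not taken from the talk.

    import numpy as np

    # Luenberger observer: reconstructs the hidden state x from the output y.
    A = np.array([[0.9, 0.1], [0.0, 0.9]])   # latent dynamics
    C = np.array([[1.0, 0.0]])               # only the first state is measured
    L = np.array([[0.5], [0.3]])             # observer gain (assumed, not designed)

    x = np.array([1.0, -1.0])                # true state (hidden from the observer)
    x_hat = np.zeros(2)                      # observer's estimate

    for t in range(50):
        y = C @ x                            # measurement
        # Predict with the model, correct with the output error: an RNN-like update.
        x_hat = A @ x_hat + (L @ (y - C @ x_hat)).ravel()
        x = A @ x

    print("estimation error:", x - x_hat)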

18/10, 11:05 - Shreya Saxena - Performance Limitations in Sensorimotor Control: Tradeoffs between Neural Computing and Accuracy in Tracking Fast Movements

The ability to move fast and accurately track moving objects is fundamentally constrained by the biophysics of neurons and the dynamics of the muscles involved. Yet the corresponding tradeoffs between these factors and tracking motor commands have not been rigorously quantified. We use feedback control principles to identify performance limitations of the sensorimotor control system (SCS) when tracking fast periodic movements. We show that (i) linear models of the SCS fail to predict known undesirable phenomena produced when tracking signals in the "fast regime", while nonlinear pulsatile control models can predict such undesirable phenomena, and (ii) tools from nonlinear control theory allow us to characterize fundamental limitations in this fast regime. For a class of sinusoidal input signals, we identify undesirable phenomena at the output of the SCS, including skipped cycles, overshoot and undershoot. We then derive an analytical bound on the highest frequency that the SCS can track without producing such undesirable phenomena, as a function of the neurons' computational complexity and muscle dynamics. Our modeling framework not only reproduces several characteristics of motor responses in both slow and fast regimes observed in humans and monkeys, but the performance limitations derived here also have far-reaching implications in sensorimotor control. In particular, our analysis can be used to guide the design of therapies for movement disorders caused by neural damage, by enhancing muscle performance with assistive neuroprosthetic devices.
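
Here is a toy illustration (not the paper's model) of the fast-regime phenomenon described above: a first-order muscle model driven by a pulsatile controller that can only issue spikes at a limited rate. At low reference frequencies it tracks well; at high frequencies the output amplitude collapses and cycles are skipped. All constants are arbitrary assumptions.

    import numpy as np

    def track(freq, spike_rate_max=20.0, dt=1e-3, T=2.0):
        """First-order 'muscle' driven by rate-limited kicks, tracking sin(2*pi*f*t)."""
        x, out, last_spike = 0.0, [], -1e9
        for i in range(int(T / dt)):
            t = i * dt
            err = np.sin(2 * np.pi * freq * t) - x
            # Pulsatile control: a fixed-size kick, at most spike_rate_max per second
            if abs(err) > 0.1 and (t - last_spike) >= 1.0 / spike_rate_max:
                x += 0.2 * np.sign(err)
                last_spike = t
            x += dt * (-2.0 * x)             # muscle dynamics: decay between spikes
            out.append(x)
        return np.array(out)

    for f in [0.5, 2.0, 8.0]:
        y = track(f)
        print(f"freq {f:4.1f} Hz: output peak {y.max():.2f} (reference peak 1.00)")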

18/10, 13:15 - Marcus Klang - "Finding things in strings": Machine Learning in Natural Language Processing

The talk will introduce the task of Multilingual Named Entity Recognition & Linking.
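
As a minimal taste of the recognition part of this task, here is the off-the-shelf spaCy pipeline applied to one sentence. This assumes the small English model has been installed; entity linking to a knowledge base is the harder step and is not shown.

    import spacy

    # Requires: pip install spacy && python -m spacy download en_core_web_sm
    nlp = spacy.load("en_core_web_sm")
    doc = nlp("Kalle Astrom organizes a study circle at Lund University in October.")

    # "Finding things in strings": spans classified as PERSON, ORG, DATE, ...
    for ent in doc.ents:
        print(ent.text, ent.label_)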

18/10, 14:05 - Karl Johan Åström - Remember the Physics


18/10, 14:15 - Erik Bylow - Structure from Motion

Programme at a glance

Room: M:2112B

16/10, 10:15 - Ida Arvidsson

16/10, 11:05 - David Gillsjö

16/10, 13:15 - Martin Karlsson

16/10, 14:05 - Fredrik Bagge

17/10, 10:15 - Maximilian Karl

17/10, 11:05 - Farnaz Adib Yaghmaie

17/10, 13:15 - Aleksis Pirinen

17/10, 14:05 - Erik Gärtner

18/10, 10:15 - Adrianna R. Loback

18/10, 11:05 - Shreya Saxena

18/10, 13:15 - Marcus Klang

18/10, 14:05 - Karl Johan Åström

18/10, 14:15 - Erik Bylow