
Machine Learning Practical (MLP) 2018-19: Lectures

Semester 2

Lectures in semester 2 will be at 11:10 on Wednesdays, again in the Gordon Aikman Lecture Theatre, George Square, starting on Wednesday 16 January. Note that the list of semester 2 lectures is currently tentative; slides and videos will be shared before each lecture takes place.

  1. Wednesday 16 January 2019. Understanding convolutional neural networks | Hakan.
    Visualisation of convolutional filters and salient image regions.
    Slides.
    Reading: Zeiler & Fergus (2014), Visualizing and Understanding Convolutional Networks, ECCV;
    Additional reading: Simonyan et al (2014), Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps, ICLR; Szegedy et al (2014), Intriguing properties of neural networks, ICLR; Karpathy, Nice summary of adversarial techniques
     
  2. Wednesday 23 January 2019. Generative adversarial networks (GANs) | Hakan.
    GANs, their optimisation, and GAN variants (a minimal training-loop sketch appears after this lecture list).
    Slides.
    Reading: Goodfellow et al. (2014), Generative Adversarial Networks, NIPS;
    Additional reading: Goodfellow (2016), NIPS 2016 Tutorial: Generative Adversarial Networks; Radford et al (2016), Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks, ICLR
     
  3. Wednesday 30 January 2019. Domain adaptation and transfer learning | Hakan.
    Recent methods in domain adaptation and transfer learning.
    Slides.
    Reading: Tzeng et al (2015), Simultaneous Deep Transfer Across Domains and Tasks, ICCV;
    Additional reading: Li et al (2018), Explicit Inductive Bias for Transfer Learning with Convolutional Networks, ICML; Yosinski et al (2014), How transferable are features in deep neural networks?, NIPS
    Introduction to the MLP GPU cluster.
     
  4. Wednesday 6 February 2019. An introduction to neural network compression | Elliot J. Crowley.
    Recent methods in network compression, pruning and knowledge distillation.
    Slides.
    Reading: Zagoruyko and Komodakis (2017), Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer, ICLR;
    Additional reading: Howard et al (2017), MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications, arXiv; Crowley et al (2019), Pruning neural networks: is it time to nip it in the bud?, arXiv.
     
  5. Wednesday 13 February 2019. Object detection and semantic segmentation | Hakan.
    Recent methods in object detection (Faster R-CNN) and segmentation (FCN).
    Slides.
    Reading: Long et al (2015), Fully Convolutional Networks for Semantic Segmentation, CVPR.
    Additional reading: Ren et al (2017), Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks, PAMI.
     
  6. Wednesday 27 February 2019. Language and vision models | Hakan.
    Recent methods in image captioning and visual question answering.
    Slides.
    Reading: Vinyals et al (2015), Show and Tell: A Neural Image Caption Generator, CVPR.
    Additional reading: Antol et al (2015), VQA: Visual Question Answering, ICCV.
     
  7. Wednesday 13 March 2019. Video analytics | Hakan.
    Recent methods in video classification (3D convolutional networks, two-stream networks).
    Slides.
    Reading: Ng et al (2015), Beyond short snippets: Deep networks for video classification, CVPR.
    Additional reading: Tran et al (2018), A Closer Look at Spatiotemporal Convolutions for Action Recognition, CVPR.  
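
As a concrete illustration of the GAN optimisation covered in lecture 2 above, the sketch below alternates discriminator and generator updates on a toy one-dimensional problem. It is a minimal sketch in PyTorch written for this page rather than part of the course materials; the network sizes, the toy data distribution and all hyperparameters are arbitrary assumptions.

    import torch
    import torch.nn as nn

    latent_dim, data_dim, batch_size = 8, 1, 64

    # Small generator and discriminator (sizes chosen arbitrarily for illustration).
    G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
    D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1))

    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce = nn.BCEWithLogitsLoss()

    for step in range(1000):
        # "Real" samples drawn from a toy target distribution N(3, 1).
        real = 3.0 + torch.randn(batch_size, data_dim)
        fake = G(torch.randn(batch_size, latent_dim))

        # Discriminator update: push real samples towards label 1 and fake towards 0
        # (fake is detached so this step does not update the generator).
        d_loss = (bce(D(real), torch.ones(batch_size, 1))
                  + bce(D(fake.detach()), torch.zeros(batch_size, 1)))
        opt_d.zero_grad()
        d_loss.backward()
        opt_d.step()

        # Generator update: the non-saturating loss from Goodfellow et al. (2014),
        # i.e. try to make the discriminator assign label 1 to generated samples.
        g_loss = bce(D(fake), torch.ones(batch_size, 1))
        opt_g.zero_grad()
        g_loss.backward()
        opt_g.step()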

Semester 1

Lectures in semester 1 will be at 15:10 on Tuesdays in the Gordon Aikman Lecture Theatre, George Square (formerly called George Square Theatre).
This article explains who Gordon Aikman was.


The first lecture will take place on Tuesday 18 September at 15:10.


Lecture recordings.

  1. Tuesday 18 September 2018. Introduction to MLP and Single layer networks (1) | Steve.
    Course overview; Linear networks; Gradient descent.
    Slides.
    Lecture recordings.
    Reading: Goodfellow et al. chapter 1; sections 4.3, 5.1, 5.7
    Labs: Lab 1 (Introduction); Lab 2 (Single Layer Networks)
     
  2. Tuesday 25 September 2018. Single layer networks (2) | Steve.
    Stochastic gradient descent; Minibatches; Classification, cross-entropy, and softmax (a short NumPy sketch combining these appears after this lecture list).
    Slides.
    Lecture recordings.
    Reading: Nielsen chapter 1; Goodfellow et al. sections 5.9, 6.1, 6.2, 8.1
     
  3. Tuesday 2 October 2018. Deep neural networks (1) | Steve.
    Hidden layers; Back-propagation training
    Slides.
    Lecture recordings.
    Reading: Nielsen chapter 2; Goodfellow et al. Sections 6.3 and 6.4; Bishop sections 3.1, 3.2, chapter 4
    Labs: Lab 3 (Multi-layer Networks)
     
  4. Tuesday 9 October 2018. Deep neural networks (2) | Steve.
    Tanh & ReLU layers; Generalisation & regularisation.
    (NB: this lecture contains the material on tanh and ReLU originally planned for lecture 3; the material on computational graphs will now be in lecture 5)
    Slides.
    Lecture recordings.
    Reading: Nielsen chapter 3 (section on overfitting and regularization); Goodfellow et al. sections 5.2 and 5.3, chapter 7 (sections 7.1-7.5, 7.8).
    Labs: Lab 4 (Generalisation and overfitting); Lab 5 (Regularisation).
     
  5. Tuesday 16 October 2018. Deep neural networks (3) | Steve.
    Computational graphs; Learning algorithms; coursework 1.
    Slides.
    Lecture recordings.
    (NB: the material on initialisation will be covered in lecture 6)
    Reading: Goodfellow et al, sections 6.5, 8.3, 8.5; Olah, Calculus on Computational Graphs: Backpropagation; Karpathy, CS231n notes (Stanford).
    Additional Reading: Kingma and Ba (2015), Adam: A Method for Stochastic Optimization, ICLR-2015.
     
  6. Tuesday 23 October 2018. Deep neural networks (4) | Steve.
    Dropout; Initialisation; Normalisation; (Pretraining and autoencoders).
    Slides.
    Lecture recordings.
    (NB: there is some material on pretraining and autoencoders in the slides which will not be covered in the lecture)
    Reading: Goodfellow et al, sections 7.12, 8.4, 8.7.1; [chapter 14 (especially 14.1, 14.2, 14.3, 14.5, 14.9)]
    Additional Reading: Srivastava et al, Dropout: a simple way to prevent neural networks from overfitting, JMLR, 15(1), 1929-1958, 2014; Glorot and Bengio (2010), Understanding the difficulty of training deep feedforward networks, AISTATS-2010; Ioffe and Szegedy, Batch normalization, ICML-2015; Kratzert, Understanding the backward pass through Batch Normalization Layer; (Vincent et al, Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion, JMLR, 11:3371--3408, 2010).
     
  7. Tuesday 30 October 2018. Convolutional networks (1) | Hakan.
    Introduction to convolutional networks (a naive NumPy convolution sketch appears after this lecture list).
    Slides.
    Reading: Nielsen, chapter 6; Goodfellow et al, chapter 9 (sections 9.1-9.4)
    Additional Reading: LeCun et al, Gradient-Based Learning Applied to Document Recognition, Proc IEEE, 1998; Dumoulin and Visin, A guide to convolution arithmetic for deep learning, arXiv:1603.07285.
     
  8. Tuesday 6 November 2018. Convolutional networks (2) | Hakan.
    Backprop training and deep convolutional networks
    Slides.
    Reading: Goodfellow et al, chapter 9 (sections 9.5-9.11)
    Additional Reading: Krizhevsky et al, ImageNet Classification with Deep Convolutional Neural Networks, NIPS-2012; Simonyan and Zisserman, Very Deep Convolutional Networks for Large-Scale Image Recognition, ILSVRC-2014; He et al, Deep Residual Learning for Image Recognition, CVPR-2016.
     
  9. Tuesday 13 November 2018. Recurrent neural networks (1) | Steve.
    Modelling sequential data
    Slides.
    Reading: Goodfellow et al, chapter 10 (sections 10.1, 10.2, 10.3)
    Additional Reading: Mikolov et al, Recurrent Neural Network Based Language Model, Interspeech-2010.
     
  10. Tuesday 20 November 2018. Recurrent neural networks (2) | Hakan.
    LSTMs, gates, and some applications
    Slides.
    Reading: Goodfellow et al, chapter 10 (sections 10.4, 10.5, 10.7, 10.10, 10.12)
    Additional Reading: C Olah, Understanding LSTMs; A Karpathy et al (2015), Visualizing and Understanding Recurrent Networks, arXiv:1506.02078; R Srivastava et al, Training Very Deep Networks, NIPS-2015
     
  11. Tuesday 27 November 2018. Introduction to semester 2 projects | Steve.
    Slides.
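
Lectures 1 and 2 above cover linear networks, softmax classification, cross-entropy and (stochastic) gradient descent. The NumPy fragment below puts those pieces together on random data to make the forward pass, the gradient and the SGD update concrete. It is a minimal sketch written for this page, not the course's mlpractical framework; the array sizes and the learning rate are arbitrary assumptions.

    import numpy as np

    rng = np.random.RandomState(0)
    num_examples, num_inputs, num_classes = 256, 20, 10

    # Random toy inputs and integer class labels.
    X = rng.randn(num_examples, num_inputs)
    y = rng.randint(num_classes, size=num_examples)

    W = 0.01 * rng.randn(num_inputs, num_classes)   # weights
    b = np.zeros(num_classes)                       # biases
    learning_rate, batch_size = 0.1, 32

    for epoch in range(10):
        for i in range(0, num_examples, batch_size):
            xb, yb = X[i:i + batch_size], y[i:i + batch_size]

            # Forward pass: affine transform followed by softmax.
            logits = xb.dot(W) + b
            logits -= logits.max(axis=1, keepdims=True)   # for numerical stability
            probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

            # Cross-entropy error averaged over the minibatch.
            loss = -np.log(probs[np.arange(len(yb)), yb]).mean()

            # Gradient of softmax + cross-entropy w.r.t. the logits is
            # (probs - one_hot(y)) / minibatch_size.
            dlogits = probs.copy()
            dlogits[np.arange(len(yb)), yb] -= 1.0
            dlogits /= len(yb)

            # Stochastic gradient descent update of the parameters.
            W -= learning_rate * xb.T.dot(dlogits)
            b -= learning_rate * dlogits.sum(axis=0)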
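For the convolutional network lectures (7 and 8), the fragment below computes a single-channel 'valid' 2-D convolution (strictly, the cross-correlation operation used by most deep learning libraries) with explicit loops, just to make the sliding-window computation concrete. It is an illustrative sketch only and ignores padding, stride, multiple channels and biases.

    import numpy as np

    def conv2d_valid(image, kernel):
        """Naive 'valid' 2-D cross-correlation of a single-channel image."""
        ih, iw = image.shape
        kh, kw = kernel.shape
        out = np.zeros((ih - kh + 1, iw - kw + 1))
        for r in range(out.shape[0]):
            for c in range(out.shape[1]):
                # Each output value is the sum of an elementwise product between
                # the kernel and the image patch currently under it.
                out[r, c] = np.sum(image[r:r + kh, c:c + kw] * kernel)
        return out

    image = np.arange(25, dtype=float).reshape(5, 5)
    kernel = np.array([[1.0, 0.0], [0.0, -1.0]])
    print(conv2d_valid(image, kernel))   # 4 x 4 feature map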
     

You can discuss and ask questions about these lectures on Piazza, or talk to the lecturer during office hours (Tuesdays 4pm, Informatics Forum Atrium, ground floor, in the cafe-style area).

ML-Base: Monday-Friday 5-6pm in AT-7.03 (the InfBase room), starting Monday 8 October. ML-Base will have a tutor available to answer questions about the machine learning courses (MLP / MLPR / IAML); it is also a time and place to drop in to work, discuss problems, and meet other people taking machine learning classes. (Questions about the specific software frameworks used in the courses are best asked in the lab sessions for the relevant course.)

Textbooks

The lecture readings above refer to three texts: Goodfellow, Bengio and Courville, Deep Learning (MIT Press; available online); Nielsen, Neural Networks and Deep Learning (free online book); and Bishop, Neural Networks for Pattern Recognition (Oxford University Press).

Copyright (c) University of Edinburgh 2015-2018
The MLP course material is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License (see licence.txt).
The software at https://github.com/CSTR-Edinburgh/mlpractical is licensed under the Modified BSD License.
This page maintained by Steve Renals.
Last updated: 2019/09/16 15:42:09UTC

