
Machine Learning Practical (MLP) 2017-18: Lectures

Semester 2

All lectures at 11:10 on Wednesdays, George Square Theatre.

  1. Wednesday 17 January 2018. Lecture Cancelled - Apologies.
     
  2. Wednesday 24 January 2018. Introduction to semester 2 miniprojects.
     
  3. Wednesday 31 January 2018. Questions and answers.
     
  4. Wednesday 7 February 2018. Guest lecture: Building Production Machine Learning Systems - Ben Allison, Amazon
     
  5. Wednesday 14 February 2018. Guest lecture: Unsupervised learning of object landmarks from equivariance - Hakan Bilen, University of Edinburgh
     
    Wednesday 21 February 2018. NO LECTURES OR LABS - FLEXIBLE LEARNING WEEK.
     
  6. Wednesday 28 February 2018. Guest lecture: Speech synthesis using LSTM auto-encoders - Vincent Wan, Google
     
  7. Wednesday 7 March 2018. Guest lecture: Subramanian Ramamoorthy, University of Edinburgh
     

Semester 1

Attend one of the following lectures each week.


Lecture recordings.

  1. Wednesday 20 September 2017. Introduction to MLP. Single layer networks (1): linear networks, gradient descent.
    Slides.
    Reading: Goodfellow et al. chapter 1; sections 4.3, 5.1, 5.7
    Lecture recordings.
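To make the lecture's topics concrete, here is a minimal NumPy sketch (illustrative only, not the course's mlpractical framework code) of training a single-layer linear network with batch gradient descent on a mean squared error:

```python
import numpy as np

# Toy data: targets generated by y = 2x + 1 plus a little noise.
rng = np.random.RandomState(0)
X = rng.uniform(-1, 1, size=(100, 1))
y = 2.0 * X[:, 0] + 1.0 + 0.01 * rng.randn(100)

# Linear model y_hat = Xw + b trained by batch gradient descent
# on the error E = mean((y_hat - y)^2) / 2.
w = np.zeros(1)
b = 0.0
lr = 0.5
for _ in range(200):
    y_hat = X @ w + b
    err = y_hat - y                  # dE/dy_hat (up to the 1/N factor)
    w -= lr * (X.T @ err) / len(y)   # gradient step for the weight
    b -= lr * err.mean()             # gradient step for the bias
```

After training, `w` and `b` should be close to the generating values 2 and 1.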
     
  2. Wednesday 27 September 2017 / Monday 2 October 2017. Single layer networks (2): stochastic gradient descent, minibatches, classification, cross-entropy, softmax.
    Slides.
    Reading: Nielsen chapter 1; Goodfellow et al. sections 5.9, 6.1, 6.2, 8.1
    Lecture recordings.
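A small NumPy sketch of two of this lecture's ingredients, softmax and cross-entropy (an illustration, not the course framework's implementation):

```python
import numpy as np

def softmax(z):
    # Subtract the row max before exponentiating for numerical
    # stability; the resulting probabilities are unchanged.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(probs, targets):
    # targets are integer class labels; pick out the log-probability
    # that the model assigns to the true class of each example.
    n = len(targets)
    return -np.log(probs[np.arange(n), targets]).mean()

logits = np.array([[2.0, 1.0, 0.1],
                   [0.5, 2.5, 0.0]])
probs = softmax(logits)
loss = cross_entropy(probs, np.array([0, 1]))
```

Each row of `probs` sums to one, and the loss is small when the probability of the correct class is high.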
     
  3. Wednesday 4 October 2017 / Monday 9 October 2017. Deep neural networks (1): hidden layers, back-propagation training, activation functions
    Slides.
    Reading: Nielsen chapter 2; Goodfellow et al. sections 6.3, 6.4; Bishop sections 3.1, 3.2, chapter 4
    Lecture recordings.
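The back-propagation computation for a single ReLU hidden layer can be sketched in a few lines of NumPy (an illustrative example with made-up data, not the course framework's layered implementation):

```python
import numpy as np

rng = np.random.RandomState(1)
X = rng.randn(8, 3)          # 8 examples, 3 inputs
t = rng.randn(8, 2)          # 8 regression targets, 2 outputs
W1 = rng.randn(3, 4) * 0.5   # input -> hidden weights
W2 = rng.randn(4, 2) * 0.5   # hidden -> output weights

def forward(W1, W2):
    a = X @ W1               # hidden pre-activations
    h = np.maximum(a, 0.0)   # ReLU activation
    y = h @ W2               # linear output layer
    return a, h, y

def loss(W1, W2):
    _, _, y = forward(W1, W2)
    return 0.5 * ((y - t) ** 2).mean()

# Back-propagation: apply the chain rule layer by layer.
a, h, y = forward(W1, W2)
delta_out = (y - t) / t.size              # dE/dy for the MSE above
gW2 = h.T @ delta_out                     # gradient w.r.t. W2
delta_hid = (delta_out @ W2.T) * (a > 0)  # propagate through the ReLU
gW1 = X.T @ delta_hid                     # gradient w.r.t. W1
```

A finite-difference check on any single weight should agree with the analytic gradient, which is a useful debugging habit when implementing layers by hand.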
     
  4. Wednesday 11 October 2017 / Monday 16 October 2017. Deep neural networks (2): basics of generalisation, training algorithms, initialisation.
    Slides.
    Reading: Goodfellow et al, sections 5.2, 5.3, 8.3, 8.5, 7.8; Karpathy, CS231n notes (Stanford).
    Additional Reading: Kingma and Ba (2015), Adam: A Method for Stochastic Optimization, ICLR-2015; Glorot and Bengio (2010), Understanding the difficulty of training deep feedforward networks, AISTATS-2010.
    Lecture recordings.
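As a pointer to the additional reading, here is a sketch of the Adam update rule from Kingma and Ba (2015), applied to a toy one-dimensional quadratic (the hyperparameter defaults follow the paper; the toy problem is made up):

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.001,
              beta1=0.9, beta2=0.999, eps=1e-8):
    # One Adam update: exponential moving averages of the gradient
    # (m) and squared gradient (v), bias correction, then a
    # per-parameter scaled gradient step.
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Minimise f(theta) = theta^2 starting from theta = 1.
theta = np.array([1.0])
m = np.zeros(1)
v = np.zeros(1)
for t in range(1, 5001):
    grad = 2 * theta
    theta, m, v = adam_step(theta, grad, m, v, t, lr=0.01)
```

Note the per-parameter scaling by `sqrt(v_hat)`: early steps move at roughly the learning rate regardless of the raw gradient magnitude.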
     
  5. Wednesday 18 October 2017 / Monday 23 October 2017. Deep neural networks (3): regularisation and normalisation.
    Slides.
    Reading: Nielsen chapter 3; Goodfellow et al chapter 7 (sections 7.1--7.5, 7.12).
    Additional Reading: Srivastava et al, Dropout: a simple way to prevent neural networks from overfitting, JMLR, 15(1), 1929-1958, 2014; Ioffe and Szegedy, Batch normalization, ICML-2015; Ba et al, Layer Normalization, arXiv:1607.06450.
    Lecture recordings.
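One of the regularisers covered, dropout, is easy to sketch in NumPy. This is the "inverted" formulation (rescale at training time so that test time needs no change), as in the Srivastava et al. reading; the example data is made up:

```python
import numpy as np

def dropout(h, p_keep, rng, train=True):
    # Inverted dropout: zero each unit with probability (1 - p_keep)
    # during training and rescale the survivors by 1/p_keep, so the
    # expected activation is unchanged and test time is a no-op.
    if not train:
        return h
    mask = (rng.uniform(size=h.shape) < p_keep) / p_keep
    return h * mask

rng = np.random.RandomState(0)
h = np.ones((10000, 1))
h_train = dropout(h, p_keep=0.8, rng=rng)
h_test = dropout(h, p_keep=0.8, rng=rng, train=False)
```

The mean of `h_train` stays close to the mean of `h`, which is exactly the point of the inverted rescaling.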
     
  6. Wednesday 25 October 2017 / Monday 30 October 2017. Deep neural networks (4): computational graphs and autoencoders
    Slides.
    Reading: Goodfellow et al, sec 6.5, chapter 14 (esp 14.1, 14.2, 14.3, 14.5, 14.9)
    Additional Reading: Olah, Calculus on Computational Graphs: Backpropagation; Kratzert, Understanding the backward pass through Batch Normalization Layer; Vincent et al, Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion, JMLR, 11:3371--3408, 2010.
    Lecture recordings.
     
  7. Wednesday 1 November 2017 / Monday 6 November 2017. Convolutional networks (1): introduction
    Slides.
    Reading: Nielsen, chapter 6; Goodfellow et al, chapter 9 (section 9.1-9.4)
    Additional Reading: LeCun et al, Gradient-Based Learning Applied to Document Recognition, Proc IEEE, 1998; Dumoulin and Visin, A guide to convolution arithmetic for deep learning, arXiv:1603.07285.
    Lecture recordings.
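The basic operation introduced here can be written out naively in NumPy. As the Dumoulin and Visin reading notes, what deep learning libraries call "convolution" is usually cross-correlation (no kernel flip); this toy sketch follows that convention:

```python
import numpy as np

def conv2d_valid(image, kernel):
    # Naive single-channel 2-D "valid" convolution as used in
    # convolutional networks (cross-correlation: kernel not flipped).
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(16, dtype=float).reshape(4, 4)
kernel = np.array([[1.0, 0.0],
                   [0.0, -1.0]])
feat = conv2d_valid(image, kernel)
```

A 2x2 kernel over a 4x4 input gives a 3x3 "valid" feature map; this particular kernel computes a diagonal difference, so it responds to local gradients in the image.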
     
  8. Wednesday 8 November 2017 / Monday 13 November 2017. Convolutional networks (2): backprop training and deep convolutional networks
    Slides.
    Reading: Goodfellow et al, chapter 9 (section 9.5-9.11)
    Additional Reading: Krizhevsky et al, ImageNet Classification with Deep Convolutional Neural Networks, NIPS-2012; Simonyan and Zisserman, Very Deep Convolutional Networks for Large-Scale Visual Recognition, ILSVRC-2014; He et al, Deep Residual Learning for Image Recognition, CVPR-2016.
    Lecture recordings.
     
  9. Wednesday 15 November 2017 / Monday 20 November 2017. Recurrent neural networks (1): Modelling sequential data
    Slides.
    Reading: Goodfellow et al, chapter 10 (sections 10.1, 10.2, 10.3)
    Additional Reading: Mikolov et al, Recurrent Neural Network Based Language Model, Interspeech-2010.
    Lecture recordings.
     
  10. Wednesday 22 November 2017 / Monday 27 November 2017. Recurrent neural networks (2): LSTMs, gates, and some applications
    Slides.
    Reading: Goodfellow et al, chapter 10 (sections 10.4, 10.5, 10.7, 10.10, 10.12)
    Additional Reading: C Olah, Understanding LSTMs; A Karpathy et al (2015), Visualizing and Understanding Recurrent Networks, arXiv:1506.02078; R Srivastava et al, Training Very Deep Networks, NIPS-2015
    Lecture recordings.
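The gating described in this lecture can be summarised in a single LSTM step. This is a minimal NumPy sketch with randomly initialised, untrained weights, just to show the data flow (the stacked weight matrix `W` and toy dimensions are illustrative choices, not the course framework's layout):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, b):
    # One LSTM step. W maps the concatenation [x; h] to the four
    # gate pre-activations: input gate i, forget gate f,
    # candidate g, and output gate o.
    z = np.concatenate([x, h]) @ W + b
    d = len(h)
    i = sigmoid(z[0:d])
    f = sigmoid(z[d:2 * d])
    g = np.tanh(z[2 * d:3 * d])
    o = sigmoid(z[3 * d:4 * d])
    c_new = f * c + i * g        # gated update of the cell state
    h_new = o * np.tanh(c_new)   # gated output
    return h_new, c_new

rng = np.random.RandomState(0)
d_in, d_hid = 3, 4
W = 0.1 * rng.randn(d_in + d_hid, 4 * d_hid)
b = np.zeros(4 * d_hid)
h = np.zeros(d_hid)
c = np.zeros(d_hid)
for x in rng.randn(5, d_in):     # run a short sequence through the cell
    h, c = lstm_step(x, h, c, W, b)
```

The additive cell-state update `f * c + i * g` is the key difference from a plain recurrent unit: it gives gradients a path through time that is not repeatedly squashed.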
     


You can discuss and ask questions about these lectures on Piazza, or talk to me at one of the "Office" hours in the Appleton Tower cafe (the one on the concourse in front of the lecture theatres).

Textbooks


Copyright (c) University of Edinburgh 2015-2018
The MLP course material is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License
licence.txt
The software at https://github.com/CSTR-Edinburgh/mlpractical is licensed under the Modified BSD License.
This page maintained by Steve Renals.
Last updated: 2018/01/19 13:37:32UTC



Informatics Forum, 10 Crichton Street, Edinburgh, EH8 9AB, Scotland, UK
Tel: +44 131 651 5661, Fax: +44 131 651 1426, E-mail: school-office@inf.ed.ac.uk
Please contact our webadmin with any comments or corrections.
Unless explicitly stated otherwise, all material is copyright © The University of Edinburgh