Machine Learning Practical (MLP) 2019-20: Lectures
Semester 1
Lectures in semester 1 will be at 15:10 on Tuesdays in the Gordon Aikman Lecture Theatre, George Square (formerly called George Square Theatre). The first lecture will take place on Tuesday 17 September at 15:10.
This article explains who Gordon Aikman was.
The lectures are recorded and the video recordings can be found in Learn.
Lectures in semester 2 will be at 13:10 on Tuesdays, again in the Gordon Aikman Lecture Theatre, George Square.
-
Tuesday 17 September 2019. Introduction to MLP and Single layer networks (1).
Course overview; Linear networks; Gradient descent.
Slides.
Reading:
Goodfellow et al. chapter 1; sections 4.3, 5.1, 5.7
Labs:
Lab 1 (Introduction);
Lab 2 (Single Layer Networks)
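To make the gradient descent and linear network topics of this lecture concrete, here is a minimal numpy sketch of batch gradient descent for a single-layer linear network; the toy data, dimensions and learning rate are made up for illustration and are not part of the course materials.
```python
import numpy as np

# Toy regression data: inputs X (N x D) and targets y (N x 1), made up for illustration.
rng = np.random.RandomState(0)
X = rng.randn(100, 3)
true_w = np.array([[1.5], [-2.0], [0.5]])
y = X @ true_w + 0.1 * rng.randn(100, 1)

# Single-layer linear network: predictions = X @ W + b.
W = np.zeros((3, 1))
b = 0.0
learning_rate = 0.1

for epoch in range(200):
    preds = X @ W + b
    error = preds - y
    loss = np.mean(error ** 2)                 # mean squared error
    grad_W = 2 * X.T @ error / X.shape[0]      # gradient of the loss w.r.t. W
    grad_b = 2 * np.mean(error)                # gradient of the loss w.r.t. b
    # Gradient descent update.
    W -= learning_rate * grad_W
    b -= learning_rate * grad_b
    if epoch % 50 == 0:
        print(epoch, loss)

print("learned weights:", W.ravel(), "bias:", b)
```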
-
Tuesday 24 September 2019. Single layer networks (2).
Stochastic gradient descent; Minibatches; Classification, cross-entropy, and softmax.
Slides.
Reading:
Nielsen chapter 1;
Goodfellow et al. sections 5.9, 6.1, 6.2, 8.1
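A minimal numpy sketch of softmax, cross-entropy and one epoch of minibatch stochastic gradient descent for a single-layer classifier, illustrating the topics of this lecture; the toy data and hyperparameters are made up for illustration.
```python
import numpy as np

def softmax(logits):
    # Subtract the per-row maximum for numerical stability before exponentiating.
    shifted = logits - logits.max(axis=1, keepdims=True)
    exp = np.exp(shifted)
    return exp / exp.sum(axis=1, keepdims=True)

def cross_entropy(probs, targets):
    # targets is a one-hot matrix; average negative log-likelihood over the batch.
    return -np.mean(np.sum(targets * np.log(probs + 1e-12), axis=1))

rng = np.random.RandomState(0)
X = rng.randn(256, 5)                      # toy inputs (made up)
labels = rng.randint(0, 3, size=256)       # toy class labels in {0, 1, 2}
T = np.eye(3)[labels]                      # one-hot targets

W = np.zeros((5, 3))
b = np.zeros(3)
learning_rate = 0.5
batch_size = 32

for start in range(0, X.shape[0], batch_size):
    xb, tb = X[start:start + batch_size], T[start:start + batch_size]
    probs = softmax(xb @ W + b)
    # For softmax with cross-entropy, the gradient w.r.t. the logits is (probs - targets).
    delta = (probs - tb) / xb.shape[0]
    W -= learning_rate * (xb.T @ delta)
    b -= learning_rate * delta.sum(axis=0)

print("loss after one epoch:", cross_entropy(softmax(X @ W + b), T))
```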
-
Tuesday 1 October 2019. Deep neural networks (1).
Hidden layers; Back-propagation training; Tanh
Slides.
Reading:
Nielsen chapter 2;
Goodfellow et al. Sections 6.3 and 6.4;
Bishop sections 3.1, 3.2, chapter 4
Labs:
Lab 3 (Multi-layer Networks)
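A minimal numpy sketch of the forward and backward (back-propagation) passes for a network with one tanh hidden layer and a softmax output, in the spirit of this lecture and Lab 3; all dimensions, weights and data are made up for illustration.
```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(64, 4)                          # toy minibatch (made up)
T = np.eye(2)[rng.randint(0, 2, size=64)]     # one-hot targets, 2 classes

# One tanh hidden layer followed by a softmax output layer.
W1, b1 = 0.1 * rng.randn(4, 8), np.zeros(8)
W2, b2 = 0.1 * rng.randn(8, 2), np.zeros(2)

# Forward pass.
a1 = X @ W1 + b1
h1 = np.tanh(a1)
logits = h1 @ W2 + b2
probs = np.exp(logits - logits.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)

# Backward pass: propagate the cross-entropy error back through the layers.
delta2 = (probs - T) / X.shape[0]             # error signal at the output layer
grad_W2, grad_b2 = h1.T @ delta2, delta2.sum(axis=0)
delta1 = (delta2 @ W2.T) * (1 - h1 ** 2)      # tanh'(a) = 1 - tanh(a)^2
grad_W1, grad_b1 = X.T @ delta1, delta1.sum(axis=0)

# A single gradient descent step (in-place numpy updates).
for param, grad in [(W1, grad_W1), (b1, grad_b1), (W2, grad_W2), (b2, grad_b2)]:
    param -= 0.1 * grad
```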
-
Tuesday 8 October 2019. Deep neural networks (2).
ReLU layers; Generalisation & regularisation.
Slides.
Reading:
Nielsen chapter 3 (section on overfitting and regularization);
Goodfellow et al: Sections 5.2 and 5.3, chapter 7 (sections 7.1--7.5, 7.8).
Labs:
Lab 4 (Generalisation and overfitting);
Lab 5 (Regularisation).
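Two small numpy helpers illustrating the ReLU derivative and the extra gradient term contributed by an L2 (weight decay) penalty, as discussed in this lecture; the penalty strength lam and the weight matrix are made-up example values.
```python
import numpy as np

def relu(a):
    return np.maximum(0.0, a)

def relu_grad(a):
    # Derivative of ReLU w.r.t. its input: 1 where the input is positive, 0 elsewhere.
    return (a > 0).astype(a.dtype)

def l2_penalty_and_grad(W, lam=1e-3):
    # L2 regularisation adds 0.5 * lam * ||W||^2 to the error,
    # so the gradient w.r.t. W gains an extra lam * W term.
    return 0.5 * lam * np.sum(W ** 2), lam * W

W = np.random.randn(10, 5)
penalty, penalty_grad = l2_penalty_and_grad(W)
print("L2 penalty:", penalty)
```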
-
Tuesday 15 October 2019. Deep neural networks (3).
Computational graphs; Learning algorithms; coursework 1.
Slides.
Reading:
Goodfellow et al, Sections 6.5, 8.3, 8.5;
Olah, Calculus on Computational Graphs: Backpropagation;
Karpathy, CS231n notes (Stanford).
Additional Reading:
Kingma and Ba (2015), Adam: A Method for Stochastic Optimization, ICLR-2015.
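A numpy sketch of the Adam update rule from Kingma and Ba (2015); the hyperparameter values are the defaults suggested in the paper, and the toy parameter and gradient values are made up for illustration.
```python
import numpy as np

def adam_update(param, grad, m, v, t, alpha=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    # Exponential moving averages of the gradient (m) and squared gradient (v).
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    # Bias correction for the zero initialisation of m and v.
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    param = param - alpha * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v

# Usage: keep m, v and the step count t for each parameter across updates.
param = np.zeros(3)
m, v = np.zeros(3), np.zeros(3)
for t in range(1, 6):
    grad = np.array([0.1, -0.2, 0.3])   # a made-up gradient
    param, m, v = adam_update(param, grad, m, v, t)
print(param)
```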
-
Tuesday 22 October 2019. Deep neural networks (4).
Initialisation; Normalisation; Dropout; (Pretraining and autoencoders).
Slides.
(NB: there is some material on pretraining and autoencoders in the slides which will not be covered in the lecture)
Reading:
Goodfellow et al, sections 7.12, 8.4, 8.7.1;
(chapter 14, esp. sections 14.1, 14.2, 14.3, 14.5, 14.9)
Additional Reading:
Srivastava et al, Dropout: a simple way to prevent neural networks from overfitting, JMLR, 15(1), 1929-1958, 2014;
Glorot and Bengio (2010), Understanding the difficulty of training deep feedforward networks, AISTATS-2010;
Ioffe and Szegedy, Batch normalization, ICML-2015;
Kratzert, Understanding the backward pass through Batch Normalization Layer;
(Vincent et al, Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion, JMLR, 11:3371--3408, 2010).
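A small numpy sketch of Glorot (uniform) initialisation and inverted dropout, roughly as described in the readings for this lecture; the layer sizes and keep probability are made up for illustration.
```python
import numpy as np

rng = np.random.RandomState(0)

def glorot_uniform(fan_in, fan_out):
    # Glorot and Bengio (2010): sample uniformly in [-limit, limit] with
    # limit = sqrt(6 / (fan_in + fan_out)) to keep activation variances roughly constant.
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

def inverted_dropout(h, keep_prob=0.8):
    # Zero each unit with probability (1 - keep_prob) and rescale at training time,
    # so that no rescaling is needed at test time.
    mask = (rng.uniform(size=h.shape) < keep_prob) / keep_prob
    return h * mask

W = glorot_uniform(100, 50)
h = np.tanh(rng.randn(32, 100) @ W)   # a made-up minibatch of hidden activations
h_dropped = inverted_dropout(h)
```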
-
Tuesday 29 October 2019. Convolutional networks (1).
Introduction
Slides.
Reading:
Nielsen, chapter 6;
Goodfellow et al, chapter 9 (section 9.1-9.4)
Additional Reading:
LeCun et al, Gradient-Based Learning Applied to Document Recognition, Proc IEEE, 1998;
Dumoulin and Visin, A guide to convolution arithmetic for deep learning, arXiv:1603.07285.
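A naive numpy sketch of a "valid" 2D convolution (implemented as cross-correlation, as is usual in deep learning frameworks), illustrating the introductory material of this lecture; the image and filter values are made up.
```python
import numpy as np

def conv2d_valid(image, kernel):
    """Naive 'valid' 2D convolution: slide the kernel over the image and take
    elementwise products summed over the receptive field."""
    H, W = image.shape
    kH, kW = kernel.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kH, j:j + kW] * kernel)
    return out

image = np.arange(25, dtype=float).reshape(5, 5)   # a made-up 5x5 input
kernel = np.array([[1.0, 0.0], [0.0, -1.0]])       # a made-up 2x2 filter
print(conv2d_valid(image, kernel).shape)           # (4, 4)
```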
-
Tuesday 5 November 2019. Convolutional networks (2) | Hakan.
Backprop training and deep convolutional networks
Slides.
Reading:
Goodfellow et al, chapter 9 (section 9.5-9.11)
Additional Reading:
Krizhevsky et al, ImageNet Classification with Deep Convolutional Neural Networks, NIPS-2012;
Simonyan and Zisserman, Very Deep Convolutional Networks for Large-Scale Image Recognition, ILSVRC-2014;
He et al, Deep Residual Learning for Image Recognition, CVPR-2016.
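A much simplified, fully-connected numpy sketch of the residual (skip) connection idea from He et al, included only to illustrate the deep-network material of this lecture; the layer sizes and weights are made up, and real residual blocks use convolutions and normalisation.
```python
import numpy as np

def residual_block(x, W1, W2):
    # The block learns a residual f(x) that is added back onto its input
    # through a skip connection, before the final nonlinearity.
    f = np.maximum(0.0, x @ W1) @ W2
    return np.maximum(0.0, x + f)

rng = np.random.RandomState(0)
x = rng.randn(8, 16)
W1, W2 = 0.1 * rng.randn(16, 16), 0.1 * rng.randn(16, 16)
print(residual_block(x, W1, W2).shape)   # (8, 16)
```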
-
Tuesday 12 November 2019. Recurrent neural networks (1).
Modelling sequential data
Slides.
Reading:
Goodfellow et al, chapter 10 (sections 10.1, 10.2, 10.3)
Additional Reading:
Mikolov et al, Recurrent Neural Network Based Language Model, Interspeech-2010.
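A minimal numpy sketch of a vanilla recurrent layer unrolled over a toy sequence, illustrating the sequential modelling topic of this lecture; all dimensions, weights and inputs are made up.
```python
import numpy as np

def rnn_forward(xs, h0, Wxh, Whh, bh):
    # Vanilla recurrent layer: h_t = tanh(x_t Wxh + h_{t-1} Whh + b),
    # applied step by step along the sequence.
    h, hs = h0, []
    for x in xs:
        h = np.tanh(x @ Wxh + h @ Whh + bh)
        hs.append(h)
    return hs

rng = np.random.RandomState(0)
xs = [rng.randn(4) for _ in range(10)]   # a made-up length-10 sequence of 4-d inputs
Wxh, Whh, bh = 0.1 * rng.randn(4, 8), 0.1 * rng.randn(8, 8), np.zeros(8)
hs = rnn_forward(xs, np.zeros(8), Wxh, Whh, bh)
print(len(hs), hs[-1].shape)             # 10 (8,)
```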
-
Tuesday 19 November 2019. Recurrent neural networks (2).
LSTMs, gates, and some applications
Slides.
Reading:
Goodfellow et al, chapter 10 (sections 10.4, 10.5, 10.7, 10.10, 10.12)
Additional Reading:
C Olah, Understanding LSTMs;
A Karpathy et al (2015), Visualizing and Understanding Recurrent Networks, arXiv:1506.02078;
Bahdanau et al, Neural Machine Translation by Jointly Learning to Align and Translate, ICLR-2015.
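A minimal numpy sketch of a single LSTM step with input, forget and output gates, illustrating the gating mechanisms covered in this lecture; packing all gate weights into one stacked matrix is a common but not universal convention, and all values are made up.
```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM step. W maps the concatenation [x, h_prev] to the stacked
    pre-activations of the input gate i, forget gate f, output gate o and
    candidate cell value g."""
    z = np.concatenate([x, h_prev]) @ W + b
    H = h_prev.shape[0]
    i, f, o, g = (sigmoid(z[:H]), sigmoid(z[H:2 * H]),
                  sigmoid(z[2 * H:3 * H]), np.tanh(z[3 * H:]))
    c = f * c_prev + i * g          # gated update of the cell state
    h = o * np.tanh(c)              # gated exposure of the cell state
    return h, c

rng = np.random.RandomState(0)
x, h, c = rng.randn(4), np.zeros(8), np.zeros(8)
W, b = 0.1 * rng.randn(4 + 8, 4 * 8), np.zeros(4 * 8)
h, c = lstm_step(x, h, c, W, b)
print(h.shape, c.shape)             # (8,) (8,)
```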
-
MLP Group Project Guide.
Notes.
Semester 2
-
Tuesday 14 January 2020. Introduction to MLP cluster | Antreas
Slides.
-
Tuesday 21 January 2020. Guest lecture: Customer Focused Science | Ben Allison, Amazon
This lecture will not be recorded.
-
Tuesday 28 January 2020. Guest lecture: An Introduction to Neural Architecture Search | Elliot J. Crowley, Informatics
Slides.
-
Tuesday 4 February 2020. Self-Supervised Monocular Depth Estimation | Oisin Mac Aodha, Informatics
Slides.
-
Tuesday 11 February 2020. Generative adversarial networks (GANs) | Hakan.
GANs, their optimisation, and variants of GANs.
Slides.
Reading:
Goodfellow et al (2014), Generative Adversarial Networks, NIPS.
Additional Reading:
Goodfellow (2016), NIPS 2016 Tutorial: Generative Adversarial Networks;
Radford et al, Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks.
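A tiny numpy sketch of the discriminator loss and the non-saturating generator loss discussed in the GAN reading above; the discriminator outputs are made-up probabilities, and training a real GAN of course alternates these updates over parametric networks.
```python
import numpy as np

def gan_losses(d_real, d_fake, eps=1e-12):
    """d_real = D(x) on real data, d_fake = D(G(z)) on generated samples, both as
    probabilities in (0, 1). Returns the discriminator loss and the
    non-saturating generator loss described by Goodfellow et al (2014)."""
    d_loss = -np.mean(np.log(d_real + eps) + np.log(1.0 - d_fake + eps))
    g_loss = -np.mean(np.log(d_fake + eps))
    return d_loss, g_loss

# Made-up discriminator outputs for a minibatch of 4 real and 4 generated samples.
d_real = np.array([0.9, 0.8, 0.7, 0.95])
d_fake = np.array([0.2, 0.1, 0.3, 0.25])
print(gan_losses(d_real, d_fake))
```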
-
Tuesday 18 February 2020. No lecture.
-
Tuesday 25 February 2020. No lecture.
-
Tuesday 3 March 2020. Eric Sodomka, Facebook.
You can discuss and ask questions about these lectures on Piazza, or talk to the lecturer during office hours (Tuesdays 4pm, Informatics Forum Atrium, ground floor, in the cafe-style area).
ML-Base: Monday-Friday 5-6pm in AT-7.03 (the InfBase room), starting Monday 8 October. ML-Base has a tutor available to answer questions about the machine learning courses (MLP / MLPR / IAML); it is also a time and place to drop in to work, discuss problems, and meet people taking machine learning classes. (If you have questions about the specific software frameworks used in the courses, these are best asked in the lab sessions for the course.)
Textbooks
- Introductory:
Michael Nielsen, Neural Networks and Deep Learning, 2016. This free online book has excellent coverage of feed-forward networks, training by back-propagation, error functions, regularisation, and convolutional neural networks. It uses the MNIST data as a running example.
- Comprehensive:
Ian Goodfellow, Yoshua Bengio, and Aaron Courville, Deep Learning, 2016, MIT Press. Full-text of the book is available at the authors' web site.
- Older, but worthwhile reading:
Christopher M Bishop, Neural Networks for Pattern Recognition, 1995, Clarendon Press.
-
Pattern Recognition and Machine Learning by Chris Bishop is now available as a free PDF download from Microsoft.
Copyright (c) University of Edinburgh 2015-2019
The MLP course material is licensed under the
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License
licence.txt
The software at https://github.com/CSTR-Edinburgh/mlpractical is licensed under the Modified BSD License.
This page maintained by Hakan Bilen.
Last updated: 2020-02-23 20:57:25 UTC