Lecture 3, Tuesday w2, 2014-09-23

Main lecture points:

Check your progress

Get familiar with information content:

Applying the law of large numbers, the CLT, and/or Chebyshev’s inequality:

Imagine you have a machine learning system that makes a real-valued prediction (e.g., temperature). You measure the absolute error made on each case in a large test set of size N, and compute the mean absolute error m̂. This estimator is a random variable: it depends on the particular test set that you gathered. If you gathered a new test set, you’d get a different estimate. What can you say about how m̂ is distributed (and under what assumptions)? It may be useful to talk about the true mean absolute error m, and its variance σ², which you might also have to estimate.

That is: do you know how to put a standard error bar on an estimate, and what that bar means? If you do any experimental work (including numerical experiments) in your project, you’ll probably want to put error bars on some estimates.

We are part way through the ‘week 2’ slides. MacKay pp. 66–73 give more detail on the intuitions behind information content.

Ask on NB if anything is unclear, or too compressed.

For keen people

Again, it’s a good idea to try reproducing plots from the slides. Plotting graphs is a useful research skill, and having to implement something yourself really tests whether you understand where the plot came from.



