Lecture 10, Tuesday w6, 2014-10-21

Things we covered:

Next time we’ll carry on talking about making predictions in a context, combining the predictions from contexts of different sizes.

Check your progress

Do you think the Dirichlet parameters for something like characters or words from language should be large or small? Why?
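
If it helps to experiment, here is a minimal Python sketch (my own illustration, with made-up counts, not lecture code) of the Dirichlet-Multinomial predictive rule under a few symmetric Dirichlet parameter settings:

    import numpy as np

    # Made-up counts for a tiny 4-symbol alphabet: frequencies as skewed
    # as character/word frequencies in language tend to be.
    counts = np.array([90.0, 6.0, 4.0, 0.0])

    for alpha in [0.01, 1.0, 100.0]:  # symmetric Dirichlet parameter per symbol
        # Dirichlet-Multinomial predictive rule:
        #   P(next symbol = i | counts) = (n_i + alpha) / (N + K*alpha)
        pred = (counts + alpha) / (counts.sum() + len(counts) * alpha)
        print(f"alpha={alpha:>6}: {np.round(pred, 3)}")

Small parameters let the predictions track the skewed empirical frequencies closely; large ones pull them towards uniform.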

Explain how setting pseudo-counts to zero in the Beta-Binomial and Dirichlet-Multinomial models would break an arithmetic coding scheme.
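
As a starting point for this one, the same add-alpha predictive rule makes the problem easy to see numerically (again a sketch, not lecture code):

    import numpy as np

    counts = np.array([3.0, 0.0])  # second symbol never observed so far

    for alpha in [1.0, 0.0]:
        pred = (counts + alpha) / (counts.sum() + len(counts) * alpha)
        # An arithmetic coder spends about -log2(p) bits on a symbol the
        # model predicts with probability p.
        with np.errstate(divide='ignore'):
            bits = -np.log2(pred)
        print(f"alpha={alpha}: probs {pred}, bits {bits}")

With alpha=0 an unseen symbol gets predictive probability zero, i.e. an interval of width zero in the coder, so there is no codeword for it if it ever occurs.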

We’ve now done the ‘week 5’ slides, except PPM, which I’ll cover next time. Mark anything that’s unclear, or that needs expanding, on NB.

Extra reading

If keen, you could read Section 28.3, pp351–353 of MacKay. This section discusses ‘two-part codes’, schemes that send the fitted parameters and then an encoding of the data under those parameters, in more detail. The ‘bits back’ method is an ingenious way of getting around the inefficiency of ‘sending the parameters twice’. Any extra details in these pages (not mentioned in lectures) are all non-examinable.
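
For reference, a two-part code’s total message length splits as (generic notation, not copied from MacKay):

    % cost of stating the fitted parameters, plus cost of the data given them
    L(\mathcal{D}) = L(\hat{\theta}) + L(\mathcal{D} \mid \hat{\theta})

The ‘bits back’ argument shows that part of the parameter term can be recovered, so the naive two-part length overstates what is really needed.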


This page maintained by Iain Murray.
Last updated: 2014/10/21 16:44:12

