**Archive of old course material:** See the current course page.

Information theory describes the fundamental limits on our ability to store, process and communicate data, whether in natural or artificial systems. Understanding and approaching these limits is important in a wide variety of topics in informatics, including compression, communication and learning from data.

**Lectures**: Tuesdays and Fridays, 14:00–14:50

**Lecturer**: Iain Murray

**Tutorials**: started in week 3; see below for details.

**Registration**: see the Guidelines for PhD Students wishing to take an Informatics Taught Course. Short version: **PhD students auditing the course, please send me an
email!**

This was a new course for 2010, so course materials were updated more than usual as lectures progressed. Updates and other announcements appeared on this page’s RSS feed.

**2011-04-25:** Mock exam paper answers now available.

If you feel uncomfortable sending constructive criticism directly to the
lecturer (although *really* you can), you can send anonymous comments via
the class representative. For 2010 our generous volunteer was: Christopher Monk,
`C.R.Monk` at the usual `sms.ed.ac.uk`.

If you would like feedback from me at any time, do ask for it. The aim is to address any of your concerns during the tutorials, but let me know if this isn’t working out for you.

You should have been allocated a tutorial slot on one of: Thursdays 10am,
Fridays 3pm, both in Appleton
Tower 5.07. Room 5.07 is in the far left corner of room 5.05 (a computer lab).
You can check the tutorial allocations
made by the ITO, although the page is only available from
within `.inf.ed.ac.uk`, e.g., on DICE machines.

Questions will be set for each of the tutorials; they are important practice and a check on whether you are really keeping up with the material. The tutorial work forms no part of your grade: take it seriously to stay on top of the material, but don’t be afraid to let me know when you can’t do the questions or needed significant help from others. If you are ever struggling, someone else will be too, and I can do something about it if I know.

**You must do the tutorial questions before the corresponding
tutorial!** An exceedingly dim view is taken if you haven’t
even read the questions and begun to think about them.

- **Tutorial 1:** Questions (PDF). C and Matlab code for the bonus question.
- **Tutorial 2:** Questions (PDF).
- **Tutorial 3:** Questions (PDF).
- **Tutorial 4:** Questions (PDF).
- **Tutorial 5:** Questions (PDF).

The assessed assignment was made
available on 4 November and is **due Thursday 25 November, 4pm**.

Answering some of the assignment questions will require some programming in
the language of your choice; the code itself will not be assessed. Programming
issues specific to this course will be discussed in lectures and tutorials, but
you will *not* be taught a programming language. You must be prepared to
write code on your own to take this course.

As there are no past papers, a mock exam has been posted.

The recommended course text is:
*Information
Theory, Inference, and Learning Algorithms*,
David MacKay, CUP 2003.
Available for purchase in hard-copy or freely downloadable as a PDF. Some of the
lecture notes make cross-references to this text, and some tutorial exercises
will come from it.

Those who like to absorb material through a series of theorems and lemmas may prefer
*Elements of
Information Theory* (Second Ed.), Cover and Thomas, Wiley 2006.

You will need to know most of the mathematics in my cribsheet (2-up), which was made for another course, and have a thorough understanding of logarithms!

Shannon’s seminal paper and UCSD’s video on the legacy of Claude Shannon.

Some programming notes relevant to the course.

More information on the course and its motivation can be found in the official course catalogue entry and the original case for support.

Each item below corresponds roughly to a week of lectures. The relevant chapters refer to the recommended course text.

The lecture notes are slide sets with some additional denser notes. In the lectures I will present some of the material on the whiteboard instead of using the slides. There are “4-up” and “9-up” contact sheets of the slides to save paper when printing. If you want to format these differently you could tweak the make4up/make9up shell scripts that made them (or try the GUI print settings in your PDF viewer).

**All slides/notes in a single PDF:**
1-up, 4-up, 9-up.

**Introduction to Information Theory:** digital communication, error correcting codes, representing data, compression, counting large sets.

*The magnetic media inside hard-disks frequently corrupts bits. How is reliable storage possible? How should we go about making the most efficient use of the bits available?*

Related: Chapter 1. Starting to look at material from chapters 2 and 4.
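The hard-disk question above is chapter 1's starting point, and its simplest answer is the repetition code R3. As a hedged illustration (a minimal Python sketch of my own, not course code; the channel and parameters are illustrative), majority-vote decoding trades rate for reliability:

```python
import random

def encode_R3(bits):
    """Repetition code R3: send each bit three times."""
    return [b for b in bits for _ in range(3)]

def decode_R3(received):
    """Majority vote over each block of three received bits."""
    return [int(sum(received[i:i + 3]) >= 2) for i in range(0, len(received), 3)]

def bsc(bits, f, rng):
    """Binary symmetric channel: flip each bit independently with probability f."""
    return [b ^ (rng.random() < f) for b in bits]

rng = random.Random(0)
message = [rng.randint(0, 1) for _ in range(10000)]
received = bsc(encode_R3(message), f=0.1, rng=rng)
decoded = decode_R3(received)
errors = sum(m != d for m, d in zip(message, decoded))
# Per-bit error rate drops from f = 0.1 to roughly 3 f^2 ~ 0.03,
# at the cost of sending three times as many bits.
print(errors / len(message))
```

The rest of the course asks how far this rate/reliability trade-off can be pushed.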

Slides, 4-up, 9-up.

**Measuring and representing information:** Probabilities over many variables. The typical set. Information content. Entropy.

*What is* “information”? Quantifying information and knowing its mathematical properties should tell us something about the resources we need to store it and move it around, and what can be achieved with the information we have.

Chapters 2, 4.
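As a quick illustration of these quantities (a Python sketch of my own, not part of the course materials), the entropy of a distribution and the information content of an outcome follow directly from their definitions:

```python
from math import log2

def entropy(ps):
    """Shannon entropy H(X) = -sum_x p(x) log2 p(x), in bits."""
    return -sum(p * log2(p) for p in ps if p > 0)

# A fair coin carries one bit per outcome; a biased coin is more predictable.
print(entropy([0.5, 0.5]))   # 1.0
print(entropy([0.9, 0.1]))   # ~0.469
# The information content of an outcome with probability p is log2(1/p):
print(log2(1 / 0.1))         # ~3.32 bits for the rare outcome
```

Over N flips of the biased coin, typical sequences occupy only about N × 0.469 bits' worth of possibilities, which is why compression below one bit per flip is possible.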

Slides, 4-up, 9-up.

**Practical compression 1: symbol codes.** Symbol codes. Huffman codes.

*How can we compress (close to) the entropy of a source in practice? We start with symbol-coding, which gives the answer in some restricted circumstances*.

Chapters 2, 5.
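A minimal sketch of Huffman's algorithm (my own illustrative Python, using a heap of partial codes; not the course's C/Matlab code). It repeatedly merges the two least probable subtrees:

```python
import heapq
from itertools import count

def huffman(probs):
    """Build a Huffman code: {symbol: codeword} for a dict of probabilities."""
    tie = count()  # tie-breaker so heapq never has to compare the code dicts
    heap = [(p, next(tie), {s: ""}) for s, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p0, _, c0 = heapq.heappop(heap)   # two least probable subtrees...
        p1, _, c1 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c0.items()}   # ...get one more bit
        merged.update({s: "1" + w for s, w in c1.items()})
        heapq.heappush(heap, (p0 + p1, next(tie), merged))
    return heap[0][2]

probs = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}
code = huffman(probs)
avg = sum(probs[s] * len(w) for s, w in code.items())
print(code)
print(avg)  # 1.75 bits/symbol: here the entropy is met exactly
```

The dyadic probabilities above are a best case; in general Huffman's expected length only satisfies H(X) ≤ L < H(X) + 1.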

Slides, 4-up, 9-up.

**Practical compression 2: stream codes.** Probabilistic and human prediction. Arithmetic compression. Dasher.

We move on to state-of-the-art compression. How can we overcome all(!) the limitations of the symbol codes we have considered so far? Use more powerful models and group large numbers of symbols together.

Chapter 6.
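The heart of arithmetic coding can be sketched in a few lines (an illustrative Python fragment of my own, ignoring the finite-precision arithmetic a real coder needs): each symbol narrows the current interval in proportion to its probability, so the final width equals the probability of the whole message.

```python
from math import log2, prod

def narrow(interval, symbol, probs):
    """One arithmetic-coding step: shrink [low, high) to the symbol's sub-interval."""
    low, high = interval
    width = high - low
    cum = 0.0
    for s, p in probs.items():
        if s == symbol:
            return (low + cum * width, low + (cum + p) * width)
        cum += p

probs = {"a": 0.8, "b": 0.2}
interval = (0.0, 1.0)
message = "aabab"
for symbol in message:
    interval = narrow(interval, symbol, probs)

width = interval[1] - interval[0]
# The width is the product of the symbol probabilities, so identifying a
# number inside the interval needs about -log2(width) bits.
print(width, -log2(width))
```

Because the probabilities can come from any model, and can change at every step, this is where compression meets prediction.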

Slides, 4-up, 9-up.

**Modelling and inference for stream codes:** Beta and Dirichlet predictions. Models for text and images. PPM.

Information theory shows that compression and (Bayesian) probabilistic methods are intimately linked *in theory*. How can we build models for compression?

Chapters 3, 6.
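The Beta/Dirichlet predictions mentioned above give simple count-based probability estimates. A hedged sketch (my own Python; `alpha` is the symmetric Dirichlet parameter, with alpha = 1 giving Laplace's rule and alpha = 0.5 the Krichevsky–Trofimov estimate):

```python
def dirichlet_predict(counts, alpha=1.0):
    """Predictive probabilities under a symmetric Dirichlet prior:
    P(next = s) = (n_s + alpha) / (n + K * alpha)."""
    n = sum(counts.values())
    K = len(counts)
    return {s: (c + alpha) / (n + K * alpha) for s, c in counts.items()}

counts = {"a": 3, "b": 1, "c": 0}
print(dirichlet_predict(counts))             # Laplace: a->4/7, b->2/7, c->1/7
print(dirichlet_predict(counts, alpha=0.5))  # Krichevsky-Trofimov estimate
```

Note that the never-seen symbol "c" still gets non-zero probability, which matters for an arithmetic coder: a model that assigns probability zero to something that then occurs cannot encode it at all.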

Slides, 4-up, 9-up.

**Communication channels and information:** A framework for communication. Redundancy. Models of channels. Back channels. Information theoretic quantities.

Chapters 1, 2, 8, 9, 10.
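One concrete quantity from this block: the capacity of the binary symmetric channel with flip probability f is C = 1 − H2(f) bits per channel use. A small sketch of my own (not course code):

```python
from math import log2

def H2(p):
    """Binary entropy function, in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def bsc_capacity(f):
    """Binary symmetric channel capacity: C = 1 - H2(f) bits per use."""
    return 1 - H2(f)

print(bsc_capacity(0.1))  # ~0.531: no reliable scheme can beat this rate
print(bsc_capacity(0.5))  # 0.0: the output is independent of the input
```

This is the benchmark against which the coming coding schemes (Hamming, LDPC, fountain codes) are judged.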

Slides, 4-up, 9-up.

**Noisy channel coding.** Check digits. Block codes on extended channels; Hamming Codes. The noisy channel coding theorem. Erasure channels and digital fountain codes.

Chapters 1, 9, 10, 50.
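The (7,4) Hamming code from this week can be sketched directly (my own Python, following the parity-bit convention of MacKay chapter 1): four source bits get three parity bits, and the syndrome locates any single flipped bit.

```python
# Parity-check matrix H for the (7,4) Hamming code; each column is the
# distinct syndrome produced when that bit alone is flipped.
H = [[1, 1, 1, 0, 1, 0, 0],
     [0, 1, 1, 1, 0, 1, 0],
     [1, 0, 1, 1, 0, 0, 1]]

def encode(s):
    """Append three parity bits t5, t6, t7 to four source bits."""
    t = [(s[0] + s[1] + s[2]) % 2,
         (s[1] + s[2] + s[3]) % 2,
         (s[0] + s[2] + s[3]) % 2]
    return s + t

def decode(r):
    """Syndrome decoding: locate and correct any single bit error."""
    z = [sum(h * b for h, b in zip(row, r)) % 2 for row in H]
    if any(z):
        cols = [[H[0][j], H[1][j], H[2][j]] for j in range(7)]
        r = r[:]
        r[cols.index(z)] ^= 1   # flip the bit whose column matches the syndrome
    return r[:4]

word = encode([1, 0, 1, 1])
word[2] ^= 1            # corrupt one bit in transit
print(decode(word))     # [1, 0, 1, 1]: the error is corrected
```

Rate 4/7 with single-error correction; two errors in a block defeat it, which is why longer, cleverer codes are needed to approach capacity.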

Slides, 4-up, 9-up.

**Theory review and sparse graph codes.** Review of information theory proofs. Low-density parity-check codes for the binary symmetric channel and belief propagation.

Chapters 10, 13, 26, 47.
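Belief propagation on a sparse graph is easiest to see on the erasure channel, where it reduces to a "peeling" decoder: repeatedly solve any parity check that has exactly one erased bit. A hedged sketch (my own toy Python; the check structure is a hypothetical illustration, not a real LDPC code):

```python
def peel(bits, checks):
    """Peeling decoder for erasures. `bits` uses None for erased positions;
    each check is a set of indices whose bits must XOR to 0."""
    bits = bits[:]
    progress = True
    while progress:
        progress = False
        for check in checks:
            unknown = [i for i in check if bits[i] is None]
            if len(unknown) == 1:   # one unknown: the check pins it down
                i = unknown[0]
                bits[i] = sum(bits[j] for j in check if j != i) % 2
                progress = True
    return bits

checks = [{0, 1, 2}, {2, 3, 4}, {4, 5, 0}]      # toy parity checks
received = [1, 1, None, 1, None, 0]             # two bits erased
print(peel(received, checks))                   # all bits recovered
```

On general noisy channels the same message-passing idea works with probabilities instead of hard bits, which is the belief propagation covered in lectures.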

Slides, 4-up, 9-up.

**Sending less than the information content: rate distortion, hash codes.**

If you can only send N/3 bits, can you do better than dropping 2N/3 bits of your file on the floor? How do web browsers alert you to bad sites without you needing a list of the bad sites, or telling a server which site you are visiting? What are some active research topics?

Chapters 10, 12.
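The "bad sites" question above is answered in practice with hash-code structures such as Bloom filters. A hedged sketch (my own Python; the hash construction and the parameters `m` and `k` are illustrative choices, not what any particular browser uses): the client holds only a compact bit array, membership tests can give false positives but never false negatives, and no list of sites needs to be shipped.

```python
import hashlib

class BloomFilter:
    """Bit array answering 'possibly in set' / 'definitely not in set'."""

    def __init__(self, m=1024, k=5):
        self.m, self.k, self.bits = m, k, bytearray(m)

    def _positions(self, item):
        # k pseudo-independent hash positions, derived from SHA-256
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def __contains__(self, item):
        return all(self.bits[pos] for pos in self._positions(item))

bad = BloomFilter()
bad.add("evil.example.com")
print("evil.example.com" in bad)   # True
print("good.example.com" in bad)   # almost certainly False
```

A positive answer can then be double-checked against a server without revealing every site visited, which is the privacy angle raised above.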

Slides, 4-up, 9-up.

This page maintained by Iain Murray.

Last updated: 2012/07/06 14:31:01

Informatics Forum, 10 Crichton Street, Edinburgh, EH8 9AB, Scotland, UK
Tel: +44 131 651 5661, Fax: +44 131 651 1426, E-mail: school-office@inf.ed.ac.uk
Unless explicitly stated otherwise, all material is copyright © The University of Edinburgh