Lecture 6, Tuesday w4, 2014-10-07
=================================

This lecture finished off symbol codes, although there's a little theory to do with KL-divergence and convexity left to cover next time.

Things we covered:

* The Kraft--McMillan inequalities (stated for reference at the end of these notes).
* Shannon coding: set the code lengths by rounding up the information contents.
* Huffman coding. Understand that it produces optimal and instantaneously decodable symbol codes (and the limitations of that statement). The proofs that Huffman coding is optimal (linked to from the notes) are non-examinable.

Check your progress
-------------------

Things you should be able to do after reviewing the lecture:

* Explain why a given non-instantaneous symbol code can always be replaced with an instantaneous prefix code, using the Kraft--McMillan inequalities.
* Justify both sides of the Kraft--McMillan inequalities.
* Construct a Huffman code for an explicit small ensemble (a sketch of the algorithm follows at the end of these notes).
* Demonstrate that symbol codes can always compress close to the entropy of an ensemble. How close in the worst case? (The Shannon-coding sketch at the end of these notes illustrates the answer.)

Recommended reading
-------------------

We have now covered most of the **'week 3' slides**. Material on KL-divergence, Gibbs inequality and convexity will be picked up next time. Ask on NB if anything else is unclear, or too compressed.

You should now read all of **MacKay Chapter 5**. The entropy decomposition from last time, and the material we are about to cover on convexity, are on pp33--36.
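For reference, the two-sided result used above can be stated as follows. This restatement follows MacKay Chapter 5; it is not quoted from the lecture:

```latex
% Kraft--McMillan, both directions, for binary symbol codes:
% (i)  Any uniquely decodeable code with codeword lengths l_1, \dots, l_I
%      satisfies the Kraft inequality below.
% (ii) Conversely, for any lengths satisfying it, a prefix (instantaneously
%      decodable) code with exactly those lengths exists.
\[
  \sum_{i=1}^{I} 2^{-l_i} \le 1
\]
```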
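As a concrete illustration of Shannon coding, here is a minimal Python sketch. The ensemble is made up for illustration, not taken from the lecture. It sets each code length to the information content rounded up, checks that the lengths satisfy the Kraft inequality (so a prefix code with these lengths exists), and compares the expected length with the entropy:

```python
import math

# A small made-up ensemble; probabilities must sum to 1.
p = {'a': 0.6, 'b': 0.3, 'c': 0.1}

# Shannon code lengths: round up the information content of each symbol.
lengths = {x: math.ceil(-math.log2(px)) for x, px in p.items()}

# Kraft inequality: sum of 2^{-l_i} <= 1, so a prefix code with these
# lengths exists.
kraft_sum = sum(2.0 ** -l for l in lengths.values())
assert kraft_sum <= 1.0

# Compare expected code length L with the entropy H.
H = -sum(px * math.log2(px) for px in p.values())
L = sum(px * lengths[x] for x, px in p.items())
print(f"lengths={lengths}, Kraft sum={kraft_sum:.3f}, "
      f"H={H:.3f} bits, L={L:.3f} bits")
```

Because rounding up adds less than one bit per symbol, the expected length satisfies H(X) <= L < H(X) + 1; the worst case is therefore within one bit per symbol of the entropy, which answers the checkpoint question above.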
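And for practising "construct a Huffman code for an explicit small ensemble", here is a minimal sketch of the Huffman algorithm in Python. The ensemble is again made up, and the function name `huffman_code` is ours, not from the lecture:

```python
import heapq
import itertools

def huffman_code(p):
    """Build a Huffman code for a dict p mapping symbols to probabilities."""
    # Heap entries are (probability, tie-breaker, tree); a tree is either a
    # symbol (leaf) or a (left, right) pair of subtrees. The tie-breaker
    # ensures trees themselves are never compared when probabilities tie.
    counter = itertools.count()
    heap = [(px, next(counter), x) for x, px in p.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        # Merge the two least probable nodes into one.
        p1, _, t1 = heapq.heappop(heap)
        p2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (p1 + p2, next(counter), (t1, t2)))
    # Read codewords off the tree: '0' for left branches, '1' for right.
    code = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + '0')
            walk(tree[1], prefix + '1')
        else:
            code[tree] = prefix or '0'  # single-symbol edge case
    walk(heap[0][2], '')
    return code

p = {'a': 0.25, 'b': 0.25, 'c': 0.2, 'd': 0.15, 'e': 0.15}
print(huffman_code(p))
```

Each step merges the two least probable nodes, so the two least probable symbols always end up as siblings with the longest codewords; reading the branch labels off the tree then gives a prefix (instantaneously decodable) code.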