These lectures introduced HMMs and GMMs, and the associated algorithms:

- Definition of an HMM, and the underlying assumptions - the Markov process and observation independence
- Output distributions of an HMM
- Background: pdfs, cdfs, and the Gaussian distribution
- Multivariate Gaussian distribution
- Maximum likelihood estimation of the parameters of a multivariate Gaussian (training)
- Clustering and k-means algorithm
- Gaussian mixture models as a soft clustering
- Training GMMs using maximum likelihood and the EM algorithm
- Three problems of HMMs:
    - Likelihood - determine the likelihood of an observation sequence - the forward algorithm
    - Decoding - determine the most probable hidden state sequence given an observation sequence - the Viterbi algorithm
    - Training - given observation sequences, learn the HMM parameters that maximise the likelihood - the forward-backward (Baum-Welch) algorithm, an instance of EM

- Recursive structure of each of these algorithms, based on the HMM assumptions
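As an illustration of the maximum likelihood estimation mentioned above, a minimal NumPy sketch (function and variable names are my own, not from the lectures): the ML estimates of a multivariate Gaussian are the sample mean and the biased sample covariance.

```python
import numpy as np

def fit_gaussian(X):
    """Maximum likelihood estimates for a multivariate Gaussian.

    X: data matrix of shape (N, d), one observation per row.
    Returns (mu, sigma): the sample mean and the ML covariance,
    which divides by N (not N-1, as the unbiased estimator would).
    """
    mu = X.mean(axis=0)
    diff = X - mu
    sigma = diff.T @ diff / X.shape[0]
    return mu, sigma
```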
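The k-means clustering step can be sketched as alternating assignment and re-estimation; this is a bare-bones illustration (initialisation from random data points is one common choice, not necessarily the one used in the lectures):

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Hard clustering with k-means.

    Alternates: assign each point to its nearest centre, then move each
    centre to the mean of its assigned points.
    """
    rng = np.random.default_rng(seed)
    # initialise centres at k distinct data points (one common heuristic)
    centers = X[rng.choice(len(X), k, replace=False)].astype(float)
    for _ in range(iters):
        # assignment step: squared distance to every centre
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        # update step: centre = mean of its cluster (skip empty clusters)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    # final assignment against the converged centres
    d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return centers, d.argmin(axis=1)
```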
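Training a GMM with EM, as listed above, alternates soft assignments (the E-step responsibilities) with weighted ML re-estimation (the M-step). A 1-D sketch under my own initialisation choice (quantile-spread means), purely for illustration:

```python
import numpy as np

def gmm_em_1d(x, k, iters=50):
    """EM for a 1-D Gaussian mixture model.

    E-step: responsibility r[n, j] = posterior prob. that component j
    generated x[n].  M-step: weighted ML updates of weights, means,
    variances.  Initial means are spread over the data quantiles
    (an arbitrary but deterministic choice).
    """
    w = np.full(k, 1.0 / k)
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)
    var = np.full(k, x.var())
    for _ in range(iters):
        # E-step: weighted component densities, then normalise per point
        dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
                 / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted ML estimates
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var
```

The responsibilities make this a soft clustering: each point contributes fractionally to every component, in contrast to the hard assignments of k-means.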
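The recursive structure of the forward algorithm can be made concrete in a few lines, assuming discrete (symbol) output distributions for simplicity; the notation (pi, A, B) follows the usual HMM convention but the code itself is only a sketch:

```python
import numpy as np

def forward(pi, A, B, obs):
    """Forward algorithm: total likelihood of an observation sequence.

    pi: initial state probabilities, shape (N,)
    A:  transition probabilities, shape (N, N), A[i, j] = P(j | i)
    B:  discrete emission probabilities, shape (N, M)
    obs: sequence of symbol indices
    """
    # initialise: alpha_1(j) = pi_j * b_j(o_1)
    alpha = pi * B[:, obs[0]]
    # recurse: alpha_t(j) = sum_i alpha_{t-1}(i) a_ij * b_j(o_t)
    for t in range(1, len(obs)):
        alpha = (alpha @ A) * B[:, obs[t]]
    # terminate: sum over final states
    return alpha.sum()
```

The recursion sums over all state sequences in O(N^2 T) time, where brute-force enumeration would cost O(N^T).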
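Viterbi decoding has the same recursive shape, with the sum replaced by a max plus a backpointer table; again a sketch assuming discrete outputs, in log space to avoid underflow:

```python
import numpy as np

def viterbi(pi, A, B, obs):
    """Viterbi algorithm: most probable hidden state sequence.

    Same (pi, A, B) convention as the forward algorithm; works in
    log probabilities and records backpointers in psi for traceback.
    """
    N, T = len(pi), len(obs)
    delta = np.log(pi) + np.log(B[:, obs[0]])
    psi = np.zeros((T, N), dtype=int)
    for t in range(1, T):
        # scores[i, j]: best log prob of reaching j at t via i at t-1
        scores = delta[:, None] + np.log(A)
        psi[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + np.log(B[:, obs[t]])
    # traceback from the best final state
    states = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        states.append(int(psi[t, states[-1]]))
    return states[::-1]
```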

This page maintained by Steve Renals.

Last updated: 2017/03/19 19:19:05 UTC
