
Automatic Speech Recognition (ASR) 2024-25: Lectures

There are 18 lectures, taking place in weeks 1-9. Lectures are held on Mondays and Thursdays at 14:10, starting Monday 13 January. All lectures will be held in Lecture Theatre 2.35 in the Edinburgh Futures Institute.

Lecture live streaming is available via Media Hopper Replay for students not able to attend in person – the link can be found on Learn under “Course Materials”.

All listed reading is optional and will not be examinable. Works listed as Reading may be useful to improve your understanding of the lecture content; Background reading is for interest only.

  1. Monday 13 January 2025. Introduction to Speech Recognition
    Slides
    Reading: J&M: chapter 7, section 9.1; R&H review chapter (sec 1).
     
  2. Thursday 16 January 2025. Speech Signal Analysis 1
    Slides
    Reading: O'Shaughnessy (2000), Speech Communications: Human and Machine, chapter 2; J&M: Sec 9.3; Paul Taylor (2009), Text-to-Speech Synthesis: Ch 10 and Ch 12.
    SparkNG: MATLAB realtime/interactive tools for speech science research and education
     
  3. Monday 20 January 2025. Speech Signal Analysis 2
    Slides
    Reading: O'Shaughnessy (2000), Speech Communications: Human and Machine, chapters 3-4
     
  4. Thursday 23 January 2025. Introduction to Hidden Markov Models
    Slides
    Reading: Rabiner & Juang (1986) Tutorial; J&M: Secs 6.1-6.5, 9.2, 9.4; R&H review chapter (secs 2.1, 2.2)
     
  5. Monday 27 January 2025. HMM algorithms
    Slides and introduction to the labs
    Reading: J&M: Sec 9.7, G&Y review (sections 1, 2.1, 2.2); (J&M: Secs 9.5, 9.6, 9.8 for introduction to decoding).
     
  6. Thursday 30 January 2025. Gaussian mixture models
    Slides
    Reading: R&H review chapter (sec 2.2)
     
  7. Monday 3 February 2025. HMM acoustic modelling 3: Context-dependent phone modelling
    Slides
    Reading: J&M: Sec 10.3; R&H review chapter (sec 2.3); Young (2008).
     
  8. Thursday 6 February 2025. Large vocabulary ASR
    Slides (updated 6 Feb; errata)
    Background reading: Ortmanns & Ney; Young (2008), sec 27.2.4
     
  9. Monday 10 February 2025. ASR with WFSTs
    Slides
    Reading: Mohri et al (2008), Speech recognition with weighted finite-state transducers, in Springer Handbook of Speech Processing (sections 1 and 2)
     
  10. Thursday 13 February 2025. Hybrid acoustic modelling with neural networks
    Slides (updated 18 Feb; errata)
    Background Reading: Morgan and Bourlard (May 1995). Continuous speech recognition: Introduction to the hybrid HMM/connectionist approach, IEEE Signal Processing Mag., 12(3):24-42
    Mohamed et al (2012). Understanding how deep belief networks perform acoustic modelling, ICASSP-2012.
     
    Monday 17 - Friday 21 February 2025.
    NO LECTURES OR LABS - FLEXIBLE LEARNING WEEK.
     
  11. Monday 24 February 2025. Neural network architectures for ASR
    Slides
    Reading: Maas et al (2017), Building DNN acoustic models for large vocabulary speech recognition, Computer Speech and Language, 41:195-213.
    Background reading: Peddinti et al (2015). A time delay neural network architecture for efficient modeling of long temporal contexts, Interspeech-2015
    Graves et al (2013), Hybrid speech recognition with deep bidirectional LSTM, ASRU-2013.
     
  12. Thursday 27 February 2025. Speaker Adaptation
    Slides (updated 3 Mar; errata)
    Reading: G&Y review, sec. 5
    Woodland (2001), Speaker adaptation for continuous density HMMs: A review, ISCA Workshop on Adaptation Methods for Speech Recognition
    Bell et al (2021), Adaptation Algorithms for Neural Network-Based Speech Recognition: An Overview, IEEE Open Journal of Signal Processing, Vol 2:33-66.
     
  13. Monday 3 March 2025. Connectionist temporal classification
    Slides
    Reading: A Hannun et al (2014), Deep Speech: Scaling up end-to-end speech recognition, arXiv:1412.5567.
    A Hannun (2017), Sequence Modeling with CTC, Distill.
    Sec 27.3.1 of Young (2008), HMMs and Related Speech Recognition Technologies.
    Background Reading: Y Miao et al (2015), EESEN: End-to-end speech recognition using deep RNN models and WFST-based decoding, ASRU-2015.
    A Maas et al (2015), Lexicon-free conversational speech recognition with neural networks, NAACL HLT 2015.
     
  14. Thursday 6 March 2025. Encoder-decoder models 1: the RNN transducer
    Slides (updated 29 Apr; errata)
    Background reading: Alex Graves (2012), Sequence Transduction with Recurrent Neural Networks, International Conference on Machine Learning (ICML) 2012 Workshop on Representation Learning
     
  15. Monday 10 March 2025. Guest lecture: Multilingual and low-resource speech recognition
    Slides
    Background reading: Besacier et al (2014), Automatic speech recognition for under-resourced languages: A survey, Speech Communication, 56:85--100.
    Huang et al (2013). Cross-language knowledge transfer using multilingual deep neural network with shared hidden layers, ICASSP-2013.
     
  16. Thursday 13 March 2025. Encoder-decoder models 2: attention-based models
    Slides
    Reading: W Chan et al (2015), Listen, attend and spell: A neural network for large vocabulary conversational speech recognition, ICASSP.
    R Prabhavalkar et al (2017), A Comparison of Sequence-to-Sequence Models for Speech Recognition, Interspeech.
    Background Reading: C-C Chiu et al (2018), State-of-the-art sequence recognition with sequence-to-sequence models, ICASSP.
    S Watanabe et al (2017), Hybrid CTC/Attention Architecture for End-to-End Speech Recognition, IEEE Journal of Selected Topics in Signal Processing, 11:1240--1252.
     
  17. Monday 17 March 2025. Self-supervised learning for speech
    Slides
    Background Reading: Baevski et al. (2020), wav2vec 2.0: A framework for self-supervised learning of speech representations, NeurIPS
    Hsu et al. (2021), HuBERT: Self-supervised speech representation learning by masked prediction of hidden units, IEEE/ACM Transactions on Audio, Speech, and Language Processing
    A van den Oord et al (2018), Representation learning with contrastive predictive coding
     
  18. Thursday 20 March 2025. ASR with Large Language Models
    Slides
    Background Reading: Zhang et al. (2023), SpeechGPT: Empowering Large Language Models with Intrinsic Cross-Modal Conversational Abilities, Findings of EMNLP
    Wu et al. (2023), On decoder-only architecture for speech-to-text and large language model integration, ASRU
     
  19. Revision tutorials (various time slots)
    Slides


     

Reading

Textbook

J&M: Jurafsky & Martin, Speech and Language Processing (2nd edition).

Review and Tutorial Articles

G&Y review: Gales & Young (2008), The Application of Hidden Markov Models in Speech Recognition, Foundations and Trends in Signal Processing.
R&H review chapter: Renals & Hain (2010), Speech Recognition, in The Handbook of Computational Linguistics and Natural Language Processing.


Copyright (c) University of Edinburgh 2015-2025
The ASR course material is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License
licence.txt
This page maintained by Peter Bell.
Last updated: 2025/04/29 16:25:46 UTC

