- Abstract:
This report details work done in the early phases of the TALK project to integrate ``Information State Update'' (ISU)-based dialogue systems with techniques from machine learning (e.g. memory-based learning, reinforcement learning). We explain how to use data gathered from the information states of ISU dialogue managers in a variety of different learning techniques, and present initial results showing that various classifier learning techniques over ISU representations can reduce error rates by over 50%. Ultimately we focus on Reinforcement Learning (RL) techniques for the automatic optimisation of dialogue strategies. This has involved constructing an automatic annotation system which builds ISU state representations for the COMMUNICATOR corpora of human-machine dialogues, and then developing novel techniques for learning from this data. We describe a learning method based on linear function approximation (to deal with the very large state spaces generated by the ISU approach), and a hybrid method which combines supervised learning and reinforcement learning. We also describe two different types of user simulation that we have built from the COMMUNICATOR data for testing purposes. Our initial results are encouraging: over a run of 1000 simulated dialogues, our learned policy achieves an average reward (computed as a function of task completion and dialogue length) of over twice the average score of the COMMUNICATOR systems, and more than a third greater than the best of them. We also report on progress on a baseline system for in-car dialogues and describe plans for future research.
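The abstract describes evaluating learned policies by an average reward computed from task completion and dialogue length. As a minimal sketch of that kind of metric, the snippet below rewards task success and penalises each dialogue turn; the function name, weights, and exact form are illustrative assumptions, not the actual TALK/COMMUNICATOR reward function.

```python
def dialogue_reward(task_completed: bool, num_turns: int,
                    completion_bonus: float = 100.0,
                    turn_penalty: float = 1.0) -> float:
    """Hypothetical reward: rises with task success, falls with dialogue length.

    The constants here are placeholders; the report's actual reward
    function is not specified in this abstract.
    """
    return (completion_bonus if task_completed else 0.0) - turn_penalty * num_turns

# A short successful dialogue outscores a long one, which outscores a failure:
assert dialogue_reward(True, 10) > dialogue_reward(True, 30)
assert dialogue_reward(True, 30) > dialogue_reward(False, 10)
```

Averaging such per-dialogue rewards over a batch of simulated dialogues (e.g. the 1000-dialogue runs mentioned above) gives the kind of policy score being compared against the COMMUNICATOR systems.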
- Bibtex format
- @Misc{EDI-INF-RR-0615,
    author = {Oliver Lemon and Kallirroi Georgila and James Henderson and Malte Gabsdil and Ivan Meza-Ruiz and Steve Young},
    title  = {Integration of Learning and Adaptivity with the Information State Update approach},
    year   = 2005,
    month  = {Jan},
    url    = {http://www.talk-project.org},
    note   = {TALK Project},
  }