- Abstract:
A human-computer dialogue system should be able to handle spontaneous ungrammatical sentences in the analysis-recognition stage and at the same time generate grammatically correct responses. Moreover, the constraint of not having enough domain-dependent training material should be overcome. In this paper we present a method able to cope with the above phenomena. It is an incremental data-driven process based on the Viterbi algorithm which, given a set of sentences as input, produces a Hidden Markov Model (HMM) that incorporates both grammatical structure, provided by large context dependencies, and coverage of ungrammatical spontaneous sentences, provided by statistical estimation. Furthermore, the algorithm generalises from the sample data, thereby reducing the amount of training material required to acquire reliable models. Adjusting the parameters can yield a model in which stochastic features supersede grammatical structure, or vice versa. In the first case, the output HMM can be used as a robust, flexible language model in the analysis-recognition stage. In the second case, the HMM becomes appropriate for generating valid system responses without the need for grammars.
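As background for the training procedure sketched above, the standard Viterbi algorithm the method builds on can be illustrated with a minimal, self-contained sketch. The toy model below (weather states, observation probabilities) is purely hypothetical and is not taken from the paper:

```python
# Illustrative Viterbi decoding for a discrete HMM: find the most likely
# hidden-state sequence for an observation sequence. Toy model only.

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most likely state sequence for the observations `obs`."""
    # V[t][s] = (probability of best path ending in state s at time t,
    #            best predecessor state at time t-1)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p][0] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states
            )
            V[t][s] = (prob, prev)
    # Backtrack from the most probable final state.
    last = max(states, key=lambda s: V[-1][s][0])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(V[t][path[-1]][1])
    return list(reversed(path))


# Hypothetical two-state model, for demonstration only.
states = ("Rainy", "Sunny")
start_p = {"Rainy": 0.6, "Sunny": 0.4}
trans_p = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},
           "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
emit_p = {"Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
          "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}

best = viterbi(("walk", "shop", "clean"), states, start_p, trans_p, emit_p)
# → ['Sunny', 'Rainy', 'Rainy']
```

The paper's contribution is the incremental, data-driven construction of the HMM itself; the decoding step shown here is the standard algorithm it relies on.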
- Link to paper: http://homepages.inf.ed.ac.uk/kgeorgil/papers/georgila_tal02.pdf
- BibTeX entry:
@Article{EDI-INF-RR-1124,
  author    = {Kallirroi Georgila and Nikos Fakotakis and George Kokkinakis},
  title     = {Stochastic Language Modelling for Recognition and Generation in Dialogue Systems},
  journal   = {Traitement automatique des langues (TAL)},
  publisher = {Association pour le traitement automatique des langues},
  year      = 2003,
  month     = mar,
  volume    = {43},
  pages     = {129--154},
  url       = {http://homepages.inf.ed.ac.uk/kgeorgil/papers/georgila_tal02.pdf},
}