Informatics Report Series




Title: Generating and evaluating evaluative arguments
Authors: Giuseppe Carenini; Johanna Moore
Date: Aug 2006
Publication Title: Artificial Intelligence
Publication Type: Journal Article
Publication Status: Published
Volume No: 170(11)
Page Nos: 925-952
Abstract: Evaluative arguments are pervasive in natural human communication. In countless situations people attempt to advise or persuade their interlocutors that something is desirable (vs. undesirable) or right (vs. wrong). With the proliferation of on-line systems serving as personal advisors and assistants, there is a pressing need to develop general and testable computational models for generating and presenting evaluative arguments. Previous research on generating evaluative arguments has been characterized by two major limitations. First, researchers have tended to focus only on specific aspects of the generation process. Second, the proposed approaches were not empirically tested. The research presented in this paper addresses both limitations. We have designed and implemented a complete computational model for generating evaluative arguments. For content selection and organization, we devised an argumentation strategy based on guidelines from argumentation theory. For expressing the content in natural language, we extended and integrated previous work in computational linguistics on generating evaluative arguments. The key knowledge source for both tasks is a quantitative model of user preferences. To empirically test critical aspects of our generation model, we have devised and implemented an evaluation framework in which the effectiveness of evaluative arguments can be measured with real users. Within the framework, we have performed an experiment to test two basic hypotheses on which the design of the computational model is based; namely, that our proposal for tailoring an evaluative argument to the addressee's preferences increases its effectiveness, and that differences in conciseness significantly influence argument effectiveness. The second hypothesis was confirmed in the experiment. In contrast, the first hypothesis was only marginally confirmed. However, independent testing by other researchers has recently provided further support for this hypothesis.
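The abstract's "quantitative model of user preferences" is commonly realized as an additive multiattribute value function: each attribute of an entity is mapped to a value in [0, 1] and the results are combined by user-specific weights. The sketch below illustrates that general idea only; the attribute names, weights, and value functions are hypothetical and not taken from the paper.

```python
# Minimal sketch of an additive multiattribute value function, the kind of
# quantitative preference model used to tailor evaluative arguments to a user.
# All attributes, weights, and value functions below are invented examples.

def amvf(weights, value_fns, entity):
    """Overall value of an entity: weighted sum of per-attribute values in [0, 1]."""
    return sum(w * value_fns[attr](entity[attr]) for attr, w in weights.items())

# Hypothetical user: weights express how much each attribute matters (sum to 1).
weights = {"price": 0.5, "distance_km": 0.3, "garden_size": 0.2}
value_fns = {
    "price": lambda p: max(0.0, 1.0 - p / 500_000),   # cheaper is better
    "distance_km": lambda d: max(0.0, 1.0 - d / 20),  # closer is better
    "garden_size": lambda g: min(1.0, g / 100),       # bigger is better
}

house = {"price": 250_000, "distance_km": 5, "garden_size": 50}
score = amvf(weights, value_fns, house)
print(round(score, 3))  # 0.575
```

A generator can use such a model both to decide which attributes are worth mentioning (those with high weight or extreme values) and to decide whether each attribute supports or undermines the overall evaluation.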
Bibtex format
@article{carenini2006generating,
  author    = {Giuseppe Carenini and Johanna Moore},
  title     = {Generating and evaluating evaluative arguments},
  journal   = {Artificial Intelligence},
  publisher = {Elsevier},
  year      = 2006,
  month     = {Aug},
  volume    = {170},
  number    = {11},
  pages     = {925--952},
  doi       = {10.1016/j.artint.2006.05.003},
}


Unless explicitly stated otherwise, all material is copyright The University of Edinburgh