Informatics Report Series


Report EDI-INF-RR-1224


Title: Recursive Stochastic Games with Positive Rewards
Authors: Kousha Etessami; Dominik Wojtczak; Mihalis Yannakakis
Date: 2007
Publication Type: Other
Publication Status: Pre-print
Abstract:
We study a class of countable-state zero-sum stochastic games, called (1-exit) Recursive Simple Stochastic Games (1-RSSGs), with strictly positive rewards. This model is a game version of several classic stochastic models, including stochastic context-free grammars and multi-type branching processes. The goal of the two players in the game is to maximize/minimize the total expected reward generated by a play of the game. These games are motivated by the goal of analyzing the optimal/pessimal expected running time of probabilistic procedural programs with potential recursion. We first show that in such games both players have deterministic ``stackless and memoryless'' optimal strategies. We then provide polynomial-time algorithms for computing the exact optimal expected reward (which may be infinite, but otherwise is rational), and optimal strategies, for both the maximizing and minimizing single-player versions of the game, i.e., for (1-exit) Recursive Markov Decision Processes (1-RMDPs). It follows that the quantitative decision problem for positive reward 1-RSSGs is in NP $\cap$ coNP. We show that Condon's well-known quantitative termination problem for finite-state simple stochastic games (SSGs) reduces to a special case of the reward problem for 1-RSSGs, namely, deciding whether the value is $\infty$. In contrast, for finite-state SSGs with strictly positive rewards, deciding whether this expected reward value is $\infty$ is solvable in polynomial time. We also show that there is a simultaneous strategy improvement algorithm that converges in a finite number of steps to the optimal values and strategies of a 1-RSSG with positive rewards. The worst-case complexity of this strategy improvement algorithm is open, as is that of its classic counterpart for finite-state SSGs. Optimized versions of our algorithms have been implemented in a tool called PReMo with promising results.
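Illustrative note: as a simplified sketch of the kind of value equations involved (not the exact equation system of the report, and ignoring the recursive call/return structure that the boxes of a 1-RSSG introduce), the optimal total expected rewards in such a game typically satisfy a monotone Bellman-style system over $[0,\infty]$, with one equation per vertex $u$ with reward $r_u > 0$ and successor set $E(u)$:

\[
x_u \;=\;
\begin{cases}
r_u + \max_{v \in E(u)} x_v & \text{if $u$ belongs to the maximizing player,}\\
r_u + \min_{v \in E(u)} x_v & \text{if $u$ belongs to the minimizing player,}\\
r_u + \sum_{v \in E(u)} p_{u,v}\, x_v & \text{if $u$ is a probabilistic vertex.}
\end{cases}
\]

The game value is then the least fixed point of such a system over $[0,\infty]$, which is where possibly infinite values arise; in the recursive setting, call ports of boxes would, roughly, add the value of the invoked component to that of the corresponding return port, which is what distinguishes 1-RSSGs from finite-state SSGs.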
Copyright:
2007 by The University of Edinburgh.
Links To Paper
No links available
Bibtex format
@Misc{EDI-INF-RR-1224,
  author = {Kousha Etessami and Dominik Wojtczak and Mihalis Yannakakis},
  title  = {Recursive Stochastic Games with Positive Rewards},
  year   = {2007},
}

