Title: "Quantitative Stochastic Parity Games"

Authors: Krishnendu Chatterjee, Marcin Jurdzi{\'n}ski, and Thomas A. Henzinger

We study \emph{perfect-information stochastic parity games}. These are two-player nonterminating games that are played on a graph with turn-based probabilistic transitions. A play results in an infinite path, and the conflicting goals of the two players are $\omega$-regular path properties, formalized as parity winning conditions. The \emph{qualitative solution} of such a game amounts to computing the set of vertices from which a player has a strategy to win with probability~1 (or with positive probability). The \emph{quantitative solution} amounts to computing the value of the game at every vertex, i.e., the highest probability with which a player can guarantee satisfaction of his own objective in a play that starts from the vertex.

For the important special case of one-player stochastic parity games (\emph{parity Markov decision processes}), we give polynomial-time algorithms for both the qualitative and the quantitative solution. The running time of the qualitative solution is $O(d \cdot m^{3/2})$ for graphs with $m$ edges and $d$ priorities. The quantitative solution is based on a linear-programming formulation.

For the two-player case, we establish the existence of optimal pure memoryless strategies. This has several important ramifications. First, it implies that the values of the games are rational. This is in contrast to the \emph{concurrent} stochastic parity games of de Alfaro et al.; there, the values are in general algebraic numbers, optimal strategies do not exist, and $\varepsilon$-optimal strategies must in general use both randomization and infinite memory. Second, the existence of optimal pure memoryless strategies, together with the polynomial-time solution for the one-player case, implies that the quantitative two-player stochastic parity game problem is in \mbox{NP~$\cap$~co-NP}.
This generalizes a result of Condon for stochastic games with reachability objectives. It also constitutes an exponential improvement over the best previous algorithm, which is based on a doubly exponential procedure of de Alfaro and Majumdar for concurrent stochastic parity games and provides only $\varepsilon$-approximations of the values.
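As a hedged illustration (not spelled out in the abstract): once a qualitative analysis has identified a target set $T$ of vertices from which the player wins almost surely, the quantitative values of a Markov decision process reduce to maximal reachability probabilities, which the classical linear program below computes. Here $V$ is the vertex set, $A(v)$ the actions available at $v$, and $p(w \mid v, a)$ the transition probabilities; the notation is chosen for illustration and the LP is the standard one for MDP reachability, not necessarily the exact formulation used in the paper.

\[
\begin{array}{ll}
\text{minimize}   & \displaystyle\sum_{v \in V} x_v \\[1ex]
\text{subject to} & \displaystyle x_v \;\ge\; \sum_{w \in V} p(w \mid v, a)\, x_w
                    \quad \text{for all } v \in V \setminus T,\; a \in A(v), \\[1ex]
                  & x_v = 1 \quad \text{for all } v \in T, \qquad
                    0 \le x_v \le 1 \quad \text{for all } v \in V.
\end{array}
\]

The least solution of the constraint system is the vector of maximal reachability probabilities, which is why the objective is minimized; since the constraints are linear, the optimum (and hence each value) is rational, matching the rationality claim above.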