Uncertainty quantification for operators in online reinforcement learning

Publication Name

Knowledge-Based Systems


In online reinforcement learning, operators predict the return by weighting the estimated values of successor states. Because these estimates lack uncertainty quantification, the weights assigned by operators are distorted by potentially biased estimates, and the resulting partial order of the estimated values becomes unreliable. To increase the probability of producing an optimal partial order, this paper introduces the hedonistic expected value (HEV), an upper bound on the expectation of the return that quantifies this uncertainty. For compatibility, several complex operators are rewritten in weighted-sum form. Building on this form, a variant of Q-learning, namely uncertainty-quantification-based Q-learning, is proposed. In the proposed algorithm, the weights derived from the successors' HEV are compatible with the existing operators: through re-weighting, the predicted return sums not only over the weights given by the operator but also over the weights given by HEV. The re-weighted operator preserves the greediness of the original, and a contraction-mapping argument shows that convergence is maintained. Experiments demonstrate that the proposed algorithm with HEV performs favorably in practice.
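The abstract does not give the exact form of HEV, but the re-weighting idea can be illustrated with a minimal sketch: a Boltzmann softmax as the weighted-sum operator, multiplied elementwise by a second set of weights derived from hypothetical upper-bound values (standing in for HEV) and renormalized, so the backup remains a convex combination of the successor values. The `hev` array and the choice of softmax for both weightings are illustrative assumptions, not the paper's definition.

```python
import numpy as np

def softmax_weights(values, beta=5.0):
    # Boltzmann softmax weights: a weighted-sum operator over action values.
    z = beta * (values - values.max())   # shift for numerical stability
    w = np.exp(z)
    return w / w.sum()

def reweight(op_w, hev_w):
    # Combine the operator's weights with uncertainty-based weights and
    # renormalize, so the result is still a valid weighted-sum operator.
    w = op_w * hev_w
    return w / w.sum()

q = np.array([1.0, 1.2, 0.9])       # estimated successor action values
hev = np.array([1.5, 1.3, 1.6])     # hypothetical upper bounds on the return
op_w = softmax_weights(q)           # weights from the base operator
hev_w = softmax_weights(hev)        # weights from the (assumed) HEV scores
w = reweight(op_w, hev_w)
backup = float(w @ q)               # re-weighted prediction of the return
```

Because the combined weights still sum to one, the backup stays between the smallest and largest estimated value, which is consistent with the claim that the re-weighted operator remains a well-behaved weighted sum.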

Open Access Status

This publication is not available as open access



Funding Sponsor

Jiangsu Provincial Department of Education

