University of Wollongong
Uncertainty quantification for operators in online reinforcement learning

journal contribution
posted on 2024-11-17, 16:42 authored by Bi Wang, Jianqing Wu, Xuelian Li, Jun Shen, Yangjun Zhong
In online reinforcement learning, operators predict the return by weighting the successors' estimated values. However, without uncertainty quantification, the weights assigned by operators are distorted by potentially biased estimates, so the partial order over estimated values becomes unreliable. To increase the probability of producing an optimal partial order, this paper introduces the hedonistic expected value (HEV), an upper bound on the expected return, to quantify this uncertainty. Notably, for compatibility, several complex operators are rewritten in weighted-sum form. Building on this weighted-sum form, a Q-learning variant, uncertainty-quantification-based Q-learning, is proposed. In the proposed algorithm, the weights derived from the successors' HEV are compatible with existing operators: through re-weighting, the return prediction sums not only over the operator's own weights but also over the weights induced by HEV. The greediness of the re-weighted operator is unchanged, and its contraction-mapping property shows that convergence is preserved. We demonstrate that the proposed algorithm with HEV performs favorably in practice.
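The re-weighting idea in the abstract can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the exact definition of HEV and the operators used are not given in the abstract, so here a Boltzmann (softmax) operator stands in for a weighted-sum operator, and a generic upper-bound vector stands in for the successors' HEV. The element-wise product of the two weight vectors, renormalized, gives the re-weighted backup.

```python
import numpy as np

def softmax_weights(values, beta=5.0):
    """Boltzmann weights over successor action values.
    A standard weighted-sum operator; the paper's operators may differ."""
    z = beta * (values - values.max())  # subtract max for numerical stability
    w = np.exp(z)
    return w / w.sum()

def reweighted_backup(q_values, upper_bounds, beta=5.0):
    """Hypothetical sketch of the abstract's re-weighting step:
    combine the operator's weights with weights induced by an upper
    bound on the return (standing in for HEV), then renormalize.
    The result remains a convex combination of the Q-values, so the
    backup stays within [min(q), max(q)]."""
    op_w = softmax_weights(q_values, beta)        # weights from the operator
    hev_w = softmax_weights(upper_bounds, beta)   # assumed HEV-induced weights
    w = op_w * hev_w
    w /= w.sum()                                  # renormalize to a distribution
    return float(w @ q_values)

# Example: three successor actions with optimistic upper bounds.
q = np.array([1.0, 2.0, 3.0])
ub = np.array([1.5, 2.5, 3.5])
v = reweighted_backup(q, ub)
```

Because the combined weights remain a probability distribution, the re-weighted operator keeps the convex-combination (hence non-expansive) structure that the abstract's contraction-mapping argument relies on.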

Funding

Jiangsu Provincial Department of Education (2022205200100595)

History

Journal title

Knowledge-Based Systems

Volume

258

Language

English
