Scalable Safe Policy Improvement via Monte Carlo Tree Search
Algorithms for safely improving policies are important for deploying reinforcement learning in real-world scenarios. In this work, we propose an algorithm, called MCTS-SPIBB, that computes safe policy improvement online using a Monte Carlo Tree Search-based strategy. We theoretically prove that the policy generated by MCTS-SPIBB converges, as the number of simulations grows, to the optimal safely improved policy generated by Safe Policy Improvement with Baseline Bootstrapping (SPIBB), a popular algorithm based on policy iteration. Moreover, our empirical analysis on three standard benchmark domains shows that MCTS-SPIBB scales to significantly larger problems than SPIBB because it computes the policy online and locally, i.e., only in the states actually visited by the agent.
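To illustrate the baseline-bootstrapping constraint that underlies SPIBB and that MCTS-SPIBB enforces during search, the sketch below shows one way the constraint can be applied at a single state. It is a minimal sketch under stated assumptions, not the paper's implementation: the names (spibb_policy_at_state, counts, n_wedge) are hypothetical, and the inputs are assumed to be dataset counts N(s, a), a baseline policy pi_b, Q-value estimates q, and a count threshold n_wedge. Baseline probability mass is kept on state-action pairs observed fewer than n_wedge times, and only the remaining mass is reallocated greedily.

```python
import numpy as np

def spibb_policy_at_state(q, pi_b, counts, n_wedge):
    """Return a safely improved policy for one state (illustrative sketch).

    q:       Q-value estimates for each action (np.ndarray)
    pi_b:    baseline policy probabilities for each action (np.ndarray)
    counts:  dataset counts N(s, a) for each action (np.ndarray)
    n_wedge: count threshold below which a pair is deemed "uncertain"
    """
    uncertain = counts < n_wedge
    # Bootstrap the baseline on uncertain pairs: keep pi_b's mass there.
    pi = np.where(uncertain, pi_b, 0.0)
    # Mass not pinned to the baseline is free to be reallocated.
    free_mass = 1.0 - pi.sum()
    if np.any(~uncertain):
        # Greedy improvement among sufficiently observed actions only.
        well_observed = np.flatnonzero(~uncertain)
        best = well_observed[np.argmax(q[well_observed])]
        pi[best] += free_mass
    else:
        # No action is observed often enough: fall back to the baseline.
        pi = pi_b.copy()
    return pi
```

In MCTS-SPIBB, a constraint of this kind is applied only at the states actually encountered during tree search, which is what lets the method avoid sweeping the full state space as policy-iteration-based SPIBB does.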
Citation
Castellini, A., Bianchi, F., Zorzi, E., Simão, T. D., Farinelli, A., & Spaan, M. T. J. (2023). Scalable Safe Policy Improvement via Monte Carlo Tree Search. ICML, 3732–3756.