We present a novel safe reinforcement learning algorithm that exploits the factored dynamics of the environment to become less conservative. We focus on problem settings in which a policy is already running and interaction with the environment is limited. In order to safely deploy an updated policy, it is necessary to provide a confidence level regarding its expected performance. However, algorithms for safe policy improvement might require a large number of past experiences to become confident enough to change the agent's behavior. Factored reinforcement learning, on the other hand, can achieve better sample complexity by exploiting independence between features of the environment, but it typically lacks such a confidence level. We study how to improve the sample efficiency of the safe policy improvement with baseline bootstrapping algorithm by exploiting the factored structure of the environment. Our main result is a theoretical bound that is linear in the number of parameters of the factored representation instead of the number of states. The empirical analysis shows that our method can improve the policy using a number of samples potentially one order of magnitude smaller than the flat algorithm.
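
To give an intuition for why a bound in the number of factored parameters can be much tighter than one in the number of states, the following minimal Python sketch (not taken from the paper; the function names and the example network structure are illustrative assumptions) compares the number of transition-model parameters of a flat MDP with those of a factored MDP whose features depend only on small parent sets.

# Minimal sketch (assumed names, illustrative structure): compares the
# parameter count of a flat transition model with that of a factored one.

from math import prod

def flat_parameters(num_states: int, num_actions: int) -> int:
    # Flat model: one categorical distribution over next states for
    # every (state, action) pair.
    return num_states * num_actions * (num_states - 1)

def factored_parameters(domain_sizes, parents, num_actions: int) -> int:
    # Factored model: each feature i has a conditional distribution
    # P(x_i' | parents(x_i), a), so parameters grow with the parent
    # scopes rather than with the full joint state space.
    total = 0
    for i, size in enumerate(domain_sizes):
        parent_configs = prod(domain_sizes[j] for j in parents[i]) if parents[i] else 1
        total += num_actions * parent_configs * (size - 1)
    return total

if __name__ == "__main__":
    # Hypothetical environment: 8 binary features, each feature's next
    # value depending on itself and one neighbouring feature.
    domain_sizes = [2] * 8
    parents = [[i, (i + 1) % 8] for i in range(8)]
    num_actions = 4

    print("flat:    ", flat_parameters(2 ** 8, num_actions))                    # 261120
    print("factored:", factored_parameters(domain_sizes, parents, num_actions)) # 128

In this toy setting the flat model needs on the order of 10^5 parameters while the factored one needs about 10^2, which is the kind of gap that lets a factored method become confident with far fewer samples.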

Citation

  Simão, T. D., & Spaan, M. T. J. (2019). Safe Policy Improvement with Baseline Bootstrapping in Factored Environments. Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence, 4967–4974.

@inproceedings{Simao2019safe,
  author = {Sim{\~a}o, Thiago D. and Spaan, Matthijs T. J.},
  title = {{Safe Policy Improvement with Baseline Bootstrapping in Factored Environments}},
  booktitle = {Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence},
  pages = {4967--4974},
  publisher = {{AAAI} Press},
  year = {2019}
}