We study Markov decision processes (MDPs) in which agents have direct control over when and how they gather information, as formalized by action-contingent noiselessly observable MDPs (ACNO-MDPs). In these models, actions consist of two components: a control action, which affects the environment, and a measurement action, which determines what the agent observes. To solve ACNO-MDPs, we introduce the act-then-measure (ATM) heuristic, which assumes that future state uncertainty can be ignored when choosing control actions. We show how following this heuristic can shorten policy computation times and prove a bound on the performance loss it incurs. To decide whether or not to take a measurement action, we introduce the concept of measuring value. We develop a reinforcement learning algorithm based on the ATM heuristic, using a variant of Dyna-Q adapted for partially observable domains, and demonstrate its superior performance over prior methods on a number of partially observable environments.
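
As a rough illustration of the idea (not the paper's exact algorithm), the sketch below shows how a single ATM decision could look in a tabular setting: the control action is chosen greedily for the current belief, ignoring future state uncertainty, and the decision to measure compares a simple one-step value-of-information estimate against the measurement cost. All names (atm_step, q_values, transition, measure_cost) and the particular measuring-value estimate are illustrative assumptions, not the paper's definitions.

import numpy as np

def atm_step(belief, q_values, transition, measure_cost):
    """One illustrative act-then-measure decision.

    belief       : (n_states,) current belief over states
    q_values     : (n_states, n_controls) learned state-action values
    transition   : (n_states, n_controls, n_states) transition probabilities
    measure_cost : scalar cost charged for measuring
    Returns (control_action, measure), where measure is a bool.
    """
    # Act: pick the control action that is best for the current belief,
    # ignoring how future state uncertainty could affect later choices.
    expected_q = belief @ q_values                  # (n_controls,)
    control_action = int(np.argmax(expected_q))

    # Propagate the belief through the chosen control action.
    next_belief = belief @ transition[:, control_action, :]   # (n_states,)

    # Measure: a one-step value-of-information stand-in for the measuring
    # value -- the gap between acting optimally per next state (if measured)
    # and committing to one action for the propagated belief (if not).
    value_if_measured = float(next_belief @ q_values.max(axis=1))
    value_if_blind = float(np.max(next_belief @ q_values))
    measuring_value = value_if_measured - value_if_blind

    return control_action, measuring_value > measure_cost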

Citation

  Krale, M., Simão, T. D., & Jansen, N. (2023). Act-Then-Measure: Reinforcement Learning for Partially Observable Environments with Active Measuring. ICAPS, 212–220.

@inproceedings{Krale2023act,
  title = {{Act-Then-Measure: Reinforcement Learning for Partially Observable Environments with Active Measuring}},
  author = {Krale, Merlijn and Sim{\~a}o, Thiago D. and Jansen, Nils},
  year = {2023},
  pages = {212--220},
  booktitle = {{ICAPS}}
}