In many realistic planning situations, any policy has a non-zero probability of reaching a dead-end. In such cases, a popular approach is to plan so as to maximize the probability of reaching the goal. While this strategy increases the robustness and expected autonomy of the robot, it assumes that the robot simply gives up on the task whenever a dead-end is encountered. In this work, we consider planning for agents that pro-actively and autonomously resort to human help when an unavoidable dead-end is encountered (so-called symbiotic agents). To this end, we develop a new class of Goal-Oriented Markov Decision Processes that includes a set of human actions ensuring the existence of a proper policy, one that possibly resorts to human help. We discuss two different optimization criteria: minimizing the probability of using human help and minimizing the expected cumulative cost with a finite penalty incurred the first time human help is used. We show that, for a large enough penalty, both criteria are equivalent. We report on experiments with standard probabilistic planning domains for reasonably large problems.
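To make the penalty-based criterion concrete, here is a minimal sketch (not the paper's implementation) of a toy goal-oriented MDP with a dead-end, augmented with a "help" action that reaches the goal at a finite penalty cost; the domain, the state names, and the constant PENALTY are all illustrative assumptions. Because "help" is available everywhere, a proper policy always exists, and standard value iteration converges.

# Illustrative sketch only: a toy goal-oriented MDP with a dead-end,
# augmented with a human 'help' action, solved by value iteration.
# The domain and names (STATES, PENALTY, ...) are hypothetical, not from the paper.

STATES = ["start", "risky", "dead", "goal"]
GOAL = "goal"
PENALTY = 50.0  # finite penalty D for resorting to human help

# Autonomous actions: P[s][a] = list of (next_state, probability)
P = {
    "start": {"safe": [("risky", 1.0)]},
    "risky": {"move": [("goal", 0.7), ("dead", 0.3)]},  # may reach a dead-end
    "dead": {},  # no autonomous action escapes a dead-end
}
COST = 1.0  # uniform cost of each autonomous action

def value_iteration(penalty, eps=1e-8):
    """Compute V*(s) for the penalty-augmented model.

    Every non-goal state gets an implicit 'help' action that reaches the
    goal deterministically at cost `penalty`, so a proper policy exists."""
    V = {s: 0.0 for s in STATES}
    while True:
        delta = 0.0
        for s in STATES:
            if s == GOAL:
                continue
            q_values = [penalty]  # Q-value of asking for human help
            for outcomes in P[s].values():
                q_values.append(COST + sum(p * V[t] for t, p in outcomes))
            new_v = min(q_values)
            delta = max(delta, abs(new_v - V[s]))
            V[s] = new_v
        if delta < eps:
            return V

V = value_iteration(PENALTY)
print(V)  # at 'dead' the optimal choice is help: V['dead'] == PENALTY

In this toy model, the optimal policy only resorts to help at the dead-end; as the penalty grows, the penalty-based solution increasingly avoids trajectories that end in human help, which is the intuition behind the equivalence with minimizing the probability of using human help.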

Citation

  Andrés, I., Nunes de Barros, L., Mauá, D. D., & Simão, T. D. (2018). When a Robot Reaches Out for Human Help. Advances in Artificial Intelligence - IBERAMIA, 277–289.

@inproceedings{Andres2018when,
  author = {Andr{\'e}s, Ignasi and {Nunes de Barros}, Leliane and Mau{\'a}, Denis D. and Sim{\~a}o, Thiago D.},
  title = {{When a Robot Reaches Out for Human Help}},
  booktitle = {{Advances in Artificial Intelligence - IBERAMIA}},
  pages = {277--289},
  publisher = {Springer},
  year = {2018}
}