Deep Reinforcement Learning (RL) agents are susceptible to adversarial noise in their observations, which can mislead their policies and degrade their performance. However, an adversary may be interested not only in decreasing the reward, but also in modifying specific temporal logic properties of the policy. This paper presents a metric that measures the exact impact of adversarial attacks against such properties. We use this metric to craft optimal adversarial attacks. Furthermore, we introduce a model checking method that allows us to verify the robustness of RL policies against adversarial attacks. Our empirical analysis confirms (1) the quality of our metric for crafting adversarial attacks against temporal logic properties and (2) that we are able to concisely assess a system’s robustness against such attacks.
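
The paper defines its impact metric and attack construction formally; as a rough, hypothetical illustration of the underlying idea (not the authors' implementation), the Python sketch below measures how much an observation attack shifts the satisfaction probability of a simple reachability property ("eventually reach the unsafe state") in the Markov chain induced by a fixed policy on a toy MDP. The transition tensor, the two policies, and the difference-of-probabilities impact measure are all illustrative assumptions.

# Hypothetical sketch (not the paper's code): quantify the impact of an
# observation attack on a reachability property under a fixed policy.
import numpy as np

def reach_probability(P, unsafe, tol=1e-10):
    """Probability of eventually reaching `unsafe` from state 0 in the
    Markov chain with transition matrix P (simple value iteration)."""
    n = P.shape[0]
    p = np.zeros(n)
    p[unsafe] = 1.0
    while True:
        p_new = P @ p
        p_new[unsafe] = 1.0
        if np.max(np.abs(p_new - p)) < tol:
            return p_new[0]
        p = p_new

def induced_chain(policy, T):
    """Markov chain induced by a (state -> action) policy on an MDP with
    transition tensor T[state, action, next_state]."""
    return np.array([T[s, policy[s]] for s in range(T.shape[0])])

# Toy MDP: 3 states (0 = start, 1 = goal, 2 = unsafe), 2 actions.
T = np.array([
    [[0.0, 0.9, 0.1], [0.0, 0.4, 0.6]],   # from the start state
    [[0.0, 1.0, 0.0], [0.0, 1.0, 0.0]],   # goal is absorbing
    [[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]],   # unsafe is absorbing
])

clean_policy    = [0, 0, 0]   # decisions on unperturbed observations
attacked_policy = [1, 0, 0]   # the attack flips the decision in state 0

p_clean    = reach_probability(induced_chain(clean_policy, T), unsafe=2)
p_attacked = reach_probability(induced_chain(attacked_policy, T), unsafe=2)

# Impact here is the attack-induced change in the property's probability.
print(f"P(eventually unsafe) clean:    {p_clean:.3f}")
print(f"P(eventually unsafe) attacked: {p_attacked:.3f}")
print(f"attack impact: {p_attacked - p_clean:.3f}")

In the same spirit, an attacker could rank candidate observation perturbations by this property-level impact rather than by reward loss alone, and a probabilistic model checker (e.g., Storm) could replace the hand-rolled value iteration to handle richer temporal logic properties.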

Citation

  Gross, D., Simão, T. D., Jansen, N., & Pérez, G. A. (2023). Targeted Adversarial Attacks on Deep Reinforcement Learning Policies via Model Checking. ICAART, 501–508. https://doi.org/10.5220/0011693200003393

@inproceedings{Gross2023targeted,
  title = {{Targeted Adversarial Attacks on Deep Reinforcement Learning Policies via Model Checking}},
  author = {Gross, Dennis and Sim{\~a}o, Thiago D. and Jansen, Nils and P{\'e}rez, Guillermo A.},
  booktitle = {{ICAART}},
  pages = {501--508},
  year = {2023},
  doi = {10.5220/0011693200003393}
}