Enhancing User Understanding of Reinforcement Learning Agents Through Visual Explanations

Tuesday, 08.10, 13:00 - 14:00

This thesis develops user-centered visual explanation methods for conveying the behavior of AI agents in sequential decision-making. As AI is increasingly deployed in domains such as transportation and healthcare, user understanding and trust become essential, especially with regulations such as the EU’s GDPR potentially requiring explanations for AI decisions. Focusing on global explanations that reveal an agent’s overall strategy, this work prioritizes visual explanations over textual or rule-based methods because of their clarity in illustrating behavior. Transparency is crucial for trust and collaboration, particularly in safety-critical fields, and both users and developers need to understand AI decision-making to use it effectively and identify potential flaws.

The research explores agent comparisons, which show where agents make different choices and help users select agents aligned with their goals. It also investigates counterfactual explanations, which display alternative decision paths and their outcomes to deepen user understanding, and interactive explanations, which let users explore specific states and foster a sense of control and trust. The thesis introduces new explanation algorithms and assesses their effectiveness and usability through user studies. In doing so, it advances Explainable Reinforcement Learning (XRL) by addressing the need for intuitive, global explanations in sequential decision-making contexts, promoting trust and transparency in human-AI interaction.
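To give a concrete sense of what an agent-comparison explanation might compute, the sketch below identifies "disagreement states": states where two agents' greedy policies pick different actions. This is only an illustrative example, not the method from the thesis; the Q-table representation, function names, and the value-gap importance heuristic are assumptions made for the sketch.

```python
# Minimal sketch (not the thesis's algorithm): find states where two RL agents'
# greedy policies disagree. Q-tables, states, and the importance heuristic are
# hypothetical stand-ins chosen for illustration.

def greedy_action(q_table, state):
    """Return the index of the highest-valued action in this state."""
    values = q_table[state]
    return max(range(len(values)), key=lambda a: values[a])


def disagreement_states(q_a, q_b, states):
    """States where the two agents disagree, ranked by how much the choice
    matters to agent A (gap between its best and worst action values)."""
    found = []
    for s in states:
        act_a, act_b = greedy_action(q_a, s), greedy_action(q_b, s)
        if act_a != act_b:
            importance = max(q_a[s]) - min(q_a[s])
            found.append((importance, s, act_a, act_b))
    # Largest gaps first: these are the states most worth showing to a user.
    return sorted(found, reverse=True)


if __name__ == "__main__":
    # Toy Q-tables over three states and two actions.
    q_a = {"s0": [1.0, 0.2], "s1": [0.1, 0.9], "s2": [0.5, 0.5]}
    q_b = {"s0": [0.3, 0.8], "s1": [0.2, 0.7], "s2": [0.4, 0.6]}
    for gap, state, act_a, act_b in disagreement_states(q_a, q_b, list(q_a)):
        print(f"{state}: agent A picks {act_a}, agent B picks {act_b} (gap {gap:.2f})")
```

States selected this way could then be rendered visually (e.g., as short video highlights of each agent acting from the same state), which is the kind of output the talk's comparison explanations concern.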

Speaker

Yotam Amitai

Technion

  • Advisor: Ofra Amir

  • Academic Degree: Ph.D.