Deep reinforcement learning (RL) has recently led to many breakthroughs on a range of complex control tasks. However, the agent's decision-making process is generally not transparent, and this lack of interpretability hinders its applicability in safety-critical scenarios. While several methods have attempted to interpret vision-based RL, most offer no detailed explanation of the agent's behaviour. In this paper, we propose a self-supervised interpretable framework that can discover causal features, enabling easy interpretation of RL even for non-experts.
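As one concrete illustration of the general idea (not the paper's actual architecture), the sketch below shows one way a self-supervised causal-feature discovery module might be realized for vision-based RL: a small network predicts a saliency mask over the observation and is trained so that the masked input preserves the frozen policy's action distribution, with a sparsity penalty keeping the mask focused. Every name here (`CausalFeatureMask`, `self_supervised_loss`), the behaviour-matching objective, and the sparsity weight 0.1 are illustrative assumptions, not the method proposed in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalFeatureMask(nn.Module):
    """Predicts a spatial saliency mask over stacked visual frames;
    high values mark regions hypothesized to drive the decision."""
    def __init__(self, in_channels: int = 4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # Sigmoid keeps the mask in (0, 1) so it can gate the input.
        return torch.sigmoid(self.encoder(obs))

def self_supervised_loss(policy: nn.Module,
                         mask_net: CausalFeatureMask,
                         obs: torch.Tensor) -> torch.Tensor:
    """Self-supervised objective: the masked observation should leave
    the policy's action distribution unchanged, while a sparsity
    term keeps the mask small and focused."""
    mask = mask_net(obs)                      # (B, 1, H, W)
    masked_obs = obs * mask                   # broadcast over channels
    with torch.no_grad():
        target = policy(obs).softmax(dim=-1)  # behaviour on full input
    pred = policy(masked_obs).log_softmax(dim=-1)
    consistency = F.kl_div(pred, target, reduction="batchmean")
    sparsity = mask.mean()                    # prefer minimal masks
    return consistency + 0.1 * sparsity

# Toy usage with a hypothetical CNN policy over 6 discrete actions.
policy = nn.Sequential(nn.Conv2d(4, 16, 8, stride=4), nn.ReLU(),
                       nn.Flatten(), nn.LazyLinear(6))
mask_net = CausalFeatureMask(in_channels=4)
obs = torch.randn(8, 4, 84, 84)               # batch of frame stacks
loss = self_supervised_loss(policy, mask_net, obs)
loss.backward()  # in practice, step only mask_net's optimizer
```

Because the training signal comes from the policy's own outputs rather than human labels, the mask can be learned post hoc for any trained agent; the resulting saliency map is what a non-expert would inspect to see which input regions the agent relies on.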