Bayesian Inverse Constraint Reinforcement Learning (BICRL) is a novel approach that infers a posterior probability distribution over constraints from demonstrated trajectories. The main advantages of BICRL, compared to prior constraint inference algorithms, are (1) the freedom to infer constraints from partial trajectories and even from disjoint state-action pairs, (2) the ability to infer constraints from suboptimal demonstrations and in stochastic environments, and (3) the opportunity to use the posterior distribution over constraints in order to implement active learning and robust policy optimization techniques.
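To give a sense of the idea, below is a minimal, self-contained sketch of Bayesian constraint inference on a toy 4x4 grid world: a Metropolis-Hastings sampler over binary per-state constraint indicators with a Boltzmann-rational likelihood over demonstrated (state, action) pairs. The grid world, the fixed penalty value, the hyperparameters, and all function names are illustrative assumptions and do not reflect this repository's actual API.

```python
import numpy as np

rng = np.random.default_rng(0)
side, n_actions, gamma, beta, penalty = 4, 4, 0.95, 2.0, -10.0  # assumed toy settings
n_states = side * side

# Deterministic 4x4 grid world: actions move up/down/left/right; walls keep the agent in place.
P = np.zeros((n_actions, n_states, n_states))
moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
for s in range(n_states):
    row, col = divmod(s, side)
    for a, (dr, dc) in enumerate(moves):
        nr = min(max(row + dr, 0), side - 1)
        nc = min(max(col + dc, 0), side - 1)
        P[a, s, nr * side + nc] = 1.0

r_nominal = np.zeros(n_states)
r_nominal[-1] = 1.0  # assumed goal reward in the bottom-right corner


def q_values(constraints):
    """Value iteration for the nominal reward plus the candidate constraint penalty."""
    r = r_nominal + penalty * constraints
    V = np.zeros(n_states)
    for _ in range(300):
        Q = r[:, None] + gamma * np.einsum("asn,n->sa", P, V)
        V = Q.max(axis=1)
    return Q


def log_likelihood(demos, constraints):
    """Boltzmann-rational log-likelihood of demonstrated (state, action) pairs."""
    Q = beta * q_values(constraints)
    m = Q.max(axis=1, keepdims=True)
    log_Z = (m + np.log(np.exp(Q - m).sum(axis=1, keepdims=True))).ravel()
    return sum(Q[s, a] - log_Z[s] for s, a in demos)


def mcmc(demos, n_iter=2000, prior_p=0.1):
    """Metropolis-Hastings over binary constraint indicators (one flip per proposal)."""
    def log_prior(c):
        return float((c * np.log(prior_p) + (1 - c) * np.log(1 - prior_p)).sum())

    c = np.zeros(n_states)
    log_post = log_likelihood(demos, c) + log_prior(c)
    samples = []
    for _ in range(n_iter):
        proposal = c.copy()
        i = rng.integers(n_states)
        proposal[i] = 1 - proposal[i]
        log_post_new = log_likelihood(demos, proposal) + log_prior(proposal)
        if np.log(rng.random()) < log_post_new - log_post:  # MH acceptance test
            c, log_post = proposal, log_post_new
        samples.append(c.copy())
    return np.mean(samples[n_iter // 2:], axis=0)  # posterior mean after burn-in


# Noisy (Boltzmann-rational) demonstrations from an agent that knows the true
# constraint: state 5 is off-limits in this toy example.
true_constraints = np.zeros(n_states)
true_constraints[5] = 1.0
Q_true = q_values(true_constraints)
demos = []
for _ in range(5):
    s = 0
    for _ in range(8):
        p = np.exp(beta * Q_true[s] - (beta * Q_true[s]).max())
        a = int(rng.choice(n_actions, p=p / p.sum()))
        demos.append((s, a))
        s = int(np.argmax(P[a, s]))

# Per-state posterior probability of being constrained.
print(mcmc(demos).reshape(side, side).round(2))
```

Because the likelihood factors over individual (state, action) pairs, the same sketch works whether the demonstrations are full trajectories, partial trajectories, or disjoint pairs.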
The code requires Python >= 3.6.
To install the requirements:

```bash
pip install -r requirements.txt
```
If you find this paper interesting and relevant to your work, you can cite it as follows:
```bibtex
@article{papadimitriou2022bayesian,
  title={Bayesian Methods for Constraint Inference in Reinforcement Learning},
  author={Papadimitriou, Dimitris and Anwar, Usman and Brown, Daniel S.},
  journal={Transactions on Machine Learning Research},
  year={2022},
  url={https://openreview.net/forum?id=oRjk5V9eDp}
}
```