
# SAN-FAPL

This repository contains the source code for our paper "Feedback-efficient Active Preference Learning for Socially Aware Robot Navigation", accepted to IROS 2022. For more details, please refer to our project website.

## Abstract

Socially aware robot navigation, where a robot is required to optimize its trajectory to maintain comfortable and compliant spatial interactions with humans in addition to reaching its goal without collisions, is a fundamental yet challenging task in the context of human-robot interaction. While existing learning-based methods have achieved better performance than the preceding model-based ones, they still have drawbacks: reinforcement learning depends on a handcrafted reward that is unlikely to effectively quantify broad social compliance and can lead to reward-exploitation problems, while inverse reinforcement learning suffers from the need for expensive human demonstrations. In this paper, we propose a feedback-efficient active preference learning approach, FAPL, that distills human comfort and expectation into a reward model to guide the robot agent in exploring latent aspects of social compliance. We further introduce hybrid experience learning to improve the efficiency of human feedback and samples, and evaluate the benefits of robot behaviors learned from FAPL through extensive simulation experiments and a user study (N=10) employing a physical robot to navigate with human subjects in real-world scenarios.
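For readers new to preference-based reward learning, below is a minimal, illustrative sketch of the Bradley-Terry-style objective commonly used in this setting (and in B_Pref, which this code builds on). The `RewardModel` class, network sizes, and segment shapes are assumptions made for this sketch, not this repository's actual API.

```python
# Illustrative sketch only: learn a reward model from pairwise human
# preferences over trajectory segments (Bradley-Terry style).
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Maps a (state, action) pair to a scalar reward estimate."""
    def __init__(self, obs_dim, act_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1))

def preference_loss(model, seg_a, seg_b, label):
    """Bradley-Terry loss for one human-labeled pair of segments.

    seg_a / seg_b: (obs, act) tensors of shape (T, obs_dim) / (T, act_dim).
    label: 1.0 if the human preferred segment A, 0.0 if segment B.
    """
    ret_a = model(*seg_a).sum()  # predicted return of segment A
    ret_b = model(*seg_b).sum()  # predicted return of segment B
    logits = torch.stack([ret_a, ret_b])
    target = torch.tensor([label, 1.0 - label])
    # Cross-entropy between the softmax over returns and the human label.
    return -(target * torch.log_softmax(logits, dim=0)).sum()

# Example with random data (obs_dim=4, act_dim=2, segments of length 50):
model = RewardModel(4, 2)
seg_a = (torch.randn(50, 4), torch.randn(50, 2))
seg_b = (torch.randn(50, 4), torch.randn(50, 2))
loss = preference_loss(model, seg_a, seg_b, label=1.0)
loss.backward()
```

The learned reward model then replaces a handcrafted reward when training the navigation policy.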

*Figure: Overview architecture of FAPL.*

## Set Up

1. Install the required Python packages:

   ```
   pip install -r requirements.txt
   ```

2. Install the Human-in-the-Loop RL environment:

   1. Install the Python-RVO2 library (a quick sanity check follows this list).
   2. Install the environment and navigation packages into pip:

      ```
      pip install -e .
      ```
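To verify that Python-RVO2 installed correctly, you can build and step a small simulator as below; the ORCA parameter values here are arbitrary example values, not settings taken from this repository.

```python
import rvo2

# PyRVOSimulator(timeStep, neighborDist, maxNeighbors, timeHorizon,
#                timeHorizonObst, radius, maxSpeed) -- example values only.
sim = rvo2.PyRVOSimulator(1 / 60.0, 1.5, 5, 1.5, 2.0, 0.4, 2.0)
a0 = sim.addAgent((0.0, 0.0))             # one agent at the origin
sim.setAgentPrefVelocity(a0, (1.0, 0.0))  # prefer moving along +x
sim.doStep()                              # advance one ORCA step
print("Python-RVO2 OK, agent position:", sim.getAgentPosition(a0))
```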

## Run the code

1. Collect expert demonstrations for hybrid experience learning:

   ```
   python demonstration_api.py --vis
   ```

2. Train a policy with preference learning:

   ```
   python train_FAPL.py
   ```

3. Test a policy:

   ```
   python test_FAPL.py
   ```

4. Plot training curves (see the sketch below):

   ```
   python plot.py
   ```

(The code was tested on Ubuntu 18.04 with Python 3.6.)
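If you prefer to plot logged returns yourself, here is an illustrative-only sketch; the log path `logs/train_log.csv` and the column names are hypothetical assumptions, not the format `plot.py` actually reads.

```python
# Hypothetical plotting sketch; adapt the path and columns to your logs.
import matplotlib.pyplot as plt
import pandas as pd

log = pd.read_csv("logs/train_log.csv")       # hypothetical log file
plt.plot(log["step"], log["episode_return"])  # hypothetical column names
plt.xlabel("Environment steps")
plt.ylabel("Episode return")
plt.savefig("training_curve.png")
```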

## Simulation Environment

The simulation environment is a 22 m × 20 m two-dimensional space, and the yellow circle indicates the robot. The blue dotted line illustrates the robot's field of view (FoV): humans that the robot can detect are shown as green circles, while those outside the robot's view are red circles. The red star is the robot's goal, and each agent's orientation and ID number are shown as a red arrow and a black number, respectively.
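As an illustration of the FoV logic described above, the sketch below checks whether a human lies inside the robot's view cone; the angle convention and the `fov_deg` value are assumptions made for this sketch, not values taken from the repository.

```python
# Illustrative FoV membership check; conventions are assumptions.
import math

def in_fov(robot_xy, robot_heading, human_xy, fov_deg=120.0):
    """Return True if human_xy lies within the robot's angular FoV.

    robot_heading is in radians; fov_deg is the full cone angle.
    """
    dx = human_xy[0] - robot_xy[0]
    dy = human_xy[1] - robot_xy[1]
    bearing = math.atan2(dy, dx)  # world-frame angle from robot to human
    # Wrap the relative bearing into [-pi, pi).
    rel = (bearing - robot_heading + math.pi) % (2 * math.pi) - math.pi
    return abs(rel) <= math.radians(fov_deg) / 2.0

# Example: a human ahead of the robot is visible; one behind it is not.
print(in_fov((0, 0), 0.0, (5, 1)))   # True
print(in_fov((0, 0), 0.0, (-5, 0)))  # False
```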

## Learning Curve

## Citation

If you find the code or the paper useful for your research, please cite our paper:

```
@article{wang2022feedback,
  title={Feedback-efficient Active Preference Learning for Socially Aware Robot Navigation},
  author={Wang, Ruiqi and Wang, Weizheng and Min, Byung-Cheol},
  journal={arXiv preprint arXiv:2201.00469},
  year={2022}
}
```

## Acknowledgement

Contributors: Weizheng Wang and Ruiqi Wang.

Part of the code is based on the following repositories: CrowdNav, DSRNN_CrowdNav, and B_Pref.