Welcome to the repository for our paper, "Hierarchical Decision Making Based on Structural Information Principles". This repository contains the code and supplementary materials for the SIDM framework, designed to improve hierarchical policy learning in both single-agent and multi-agent scenarios through structural information principles.
- Adaptive Environmental Abstraction: Utilizes historical state-action information to create abstract representations of states and actions.
- Directed Structural Entropy: Quantifies transition probabilities between abstract states to facilitate unsupervised skill discovery.
- Skill-Based Learning: Enhances single-agent policy learning by combining discovered skills with a variety of RL algorithms.
- Role-Based Collaboration: Improves multi-agent collaboration and performance through role-based methods.
- Performance Improvements: Achieves up to 32.70%, 64.86%, and 88.26% improvements in effectiveness, efficiency, and stability, respectively, compared to state-of-the-art baselines.
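To make the second bullet concrete, here is a toy sketch of the underlying idea: given a trajectory of abstract-state labels (produced by some upstream abstraction step), estimate each state's outgoing-transition distribution and its entropy. This is only an illustration of transition-probability entropy, not the paper's directed structural entropy algorithm; the function name and trajectory format are our own assumptions.

```python
import math
from collections import Counter, defaultdict

def transition_entropy(trajectory):
    """Entropy (bits) of the outgoing-transition distribution of each
    abstract state, estimated empirically from a label trajectory.
    NOTE: an illustrative stand-in, not SIDM's actual computation."""
    counts = defaultdict(Counter)
    for s, s_next in zip(trajectory, trajectory[1:]):
        counts[s][s_next] += 1
    entropies = {}
    for s, nxt in counts.items():
        total = sum(nxt.values())
        # Shannon entropy of the empirical next-state distribution from s
        entropies[s] = -sum(
            (c / total) * math.log2(c / total) for c in nxt.values()
        )
    return entropies

# Example: a short trajectory over three hypothetical abstract states
traj = ["A", "B", "A", "C", "A", "B"]
print(transition_entropy(traj))
```

States with low outgoing entropy have predictable successors, which is the kind of signal a skill-discovery method can exploit to segment trajectories into reusable skills.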
If you use this code in your research, please cite our paper:
```bibtex
@article{zeng2024SIDM,
  title={Hierarchical Decision Making Based on Structural Information Principles},
  author={Xianghua Zeng and Hao Peng and Dingli Su and Angsheng Li},
  journal={submitting to Journal of Machine Learning Research},
  year={2024}
}
```
Thank you for using SIDM! We hope our framework helps advance your research in hierarchical reinforcement learning.