Official implementation of the Director algorithm in TensorFlow 2.
If you find this code useful, please reference it in your paper:
@article{hafner2022director,
  title={Deep Hierarchical Planning from Pixels},
  author={Hafner, Danijar and Lee, Kuang-Huei and Fischer, Ian and Abbeel, Pieter},
  journal={Advances in Neural Information Processing Systems},
  year={2022}
}
Director is a practical and robust algorithm for hierarchical reinforcement learning. To solve long-horizon tasks end-to-end from sparse rewards, Director learns to break down tasks into internal subgoals. Its manager policy selects subgoals that trade off exploratory and extrinsic value, while its worker policy learns to achieve them through low-level actions. Both policies are trained from imagined trajectories predicted by a learned world model. To support the manager in choosing realistic goals, a goal autoencoder compresses and quantizes previously encountered representations, and the manager chooses its goals in this compact space. All components are trained concurrently.
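The resulting control flow can be summarized with a short sketch. The code below is purely illustrative: every function, constant, and shape in it is a made-up placeholder and does not correspond to the repository's actual API.

import numpy as np

rng = np.random.default_rng(0)
K = 8            # assumed manager frequency: a new subgoal every K steps
LATENT = 16      # assumed size of the world-model latent state

def encode_observation(obs):
    # Stand-in for the world model's representation of the current observation.
    return rng.normal(size=LATENT)

def decode_goal(code):
    # Stand-in for the goal autoencoder's decoder, mapping a discrete code to a latent goal.
    return rng.normal(size=LATENT)

def manager_select_goal(state):
    # Stand-in for the manager: picks a discrete code in the goal autoencoder's
    # compact space, trading off exploratory and extrinsic value.
    code = rng.integers(0, 64)
    return decode_goal(code)

def worker_act(state, goal):
    # Stand-in for the worker: a low-level action aimed at reaching the current goal.
    return rng.integers(0, 4)

obs = np.zeros((64, 64, 3))      # placeholder observation
goal = None
for t in range(32):
    state = encode_observation(obs)
    if t % K == 0:               # the manager picks a new subgoal every K steps
        goal = manager_select_goal(state)
    action = worker_act(state, goal)
    # obs = env.step(action)     # in the real system the environment (or the world
    #                            # model's imagination) would advance here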
For more information, see the paper cited above.
To set up the code, either use embodied/Dockerfile or follow the manual instructions below.
Install dependencies:
pip install -r requirements.txt
Train agent:
python embodied/agents/director/train.py \
--logdir ~/logdir/$(date +%Y%m%d-%H%M%S) \
--configs dmc_vision \
--task dmc_walker_walk
See agents/director/configs.yaml for available flags and embodied/envs/__init__.py for available envs.
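The named configs can also be listed programmatically. The snippet below is not part of the repository; it assumes the configs file lives at embodied/agents/director/configs.yaml and that PyYAML is installed.

import yaml

# List the named configs defined in configs.yaml; each top-level key is a
# config name that can be passed to train.py via --configs.
with open('embodied/agents/director/configs.yaml') as f:
    configs = yaml.safe_load(f)
print(sorted(configs.keys()))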
The HRL environments are in embodied/envs/pinpad.py and embodied/envs/loconav.py.
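For quick experimentation, these environments can also be stepped directly. The sketch below is only a guess at the interface: the PinPad class name, its constructor argument, and the action/observation dictionary keys are assumptions and should be checked against embodied/envs/pinpad.py.

from embodied.envs.pinpad import PinPad

env = PinPad('four')                        # assumed layout name
act_space = env.act_space                   # assumed: dict of action spaces
obs = env.step({'action': act_space['action'].sample(), 'reset': True})
for _ in range(100):
    action = {'action': act_space['action'].sample(), 'reset': False}
    obs = env.step(action)                  # assumed: obs is a dict with keys like 'image' and 'reward'
print(sorted(obs.keys()))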
For video summaries using FFmpeg, make sure the required libraries are on the loader path, for example:
export LD_LIBRARY_PATH=/content/conda-env/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64