Explorer is a PyTorch reinforcement learning framework for exploring new ideas.
- Vanilla Deep Q-learning (VanillaDQN): No target network.
- Deep Q-learning (DQN)
- Double Deep Q-learning (DDQN)
- Maxmin Deep Q-learning (MaxminDQN)
- Averaged Deep Q-learning (AveragedDQN)
- Ensemble Deep Q-learning (EnsembleDQN)
- REINFORCE
- REINFORCE with Baseline
- Actor-Critic
- Synchronous Advantage Actor-Critic (A2C)
- Proximal Policy Optimisation (PPO)
- Soft Actor-Critic (SAC)
- Deep Deterministic Policy Gradients (DDPG)
- Twin Delayed Deep Deterministic Policy Gradients (TD3)
- TODO: add more pygames to gym-games
Base Agent
  ├── Vanilla DQN
  |     ├── DQN ── DDQN
  |     ├── Maxmin DQN ── Ensemble DQN
  |     └── Averaged DQN
  └── REINFORCE
        ├── REINFORCE with Baseline
        |     ├── Actor-Critic
        |     └── A2C
        |           ├── PPO
        |           └── RepOnPG (experimental)
        └── SAC ── DDPG
              ├── TD3
              └── RepOffPG (experimental)
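As a rough sketch of what this tree means in code (the class names below are assumed to mirror the tree and the constructor is an assumption, not the repo's exact API):

```python
# Sketch only: the inheritance pattern implied by the tree above.
class BaseAgent:
    def __init__(self, cfg):
        self.cfg = cfg  # the configuration dict for this experiment

class VanillaDQN(BaseAgent):   # Q-learning without a target network
    pass

class DQN(VanillaDQN):         # adds a target network
    pass

class DDQN(DQN):               # Double DQN: decouple action selection and evaluation
    pass

class REINFORCE(BaseAgent):    # on-policy policy gradient
    pass
```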
- Python (>=3.6)
- PyTorch
- Gym & Gym Games: You may install only part of Gym (`classic_control`, `box2d`) with the command `pip install 'gym[classic_control, box2d]'`.
- Optional: Gym Atari, Gym Mujoco
- PyBullet: `pip install pybullet`
- Others: please check `requirements.txt`.
All hyperparameters, including parameters for grid search, are stored in a configuration file in the directory `configs`. To run an experiment, a configuration index is first used to generate the configuration dict corresponding to that specific index; then we run the experiment defined by this configuration dict. All results, including log files and the model file, are saved in the directory `logs`. Please refer to the code for details.
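To make the index-to-dict mapping concrete, here is a small self-contained sketch (not the repo's actual sweeper code; the grid keys and values below are made up) of how a 1-based configuration index can select one combination from a grid-search configuration:

```python
# Sketch only: map a 1-based configuration index to one grid-search combination.
# The grid below is hypothetical; the real one lives in a file under configs/.
from itertools import product

grid = {
    "agent": ["DQN", "MaxminDQN"],
    "lr": [1e-3, 3e-4, 1e-4],
    "batch_size": [32, 64],
}

keys = sorted(grid)
combos = [dict(zip(keys, values)) for values in product(*(grid[k] for k in keys))]
print(f"Number of total combinations: {len(combos)}")  # 12 for this toy grid

def config_dict(config_idx):
    # Indexes with the same remainder modulo len(combos) give the same dict.
    return combos[(config_idx - 1) % len(combos)]

print(config_dict(1))   # the first combination
print(config_dict(13))  # 12 + 1: the same combination as index 1
```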
For example, run the experiment with configuration file `catcher.json` and configuration index `1`:
python main.py --config_file ./configs/catcher.json --config_idx 1
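The two flags above are the whole command-line interface. A minimal sketch of such an entry point, assuming `argparse` and a JSON configuration file (the real `main.py` may be organized differently):

```python
# Sketch only: a minimal entry point matching the command above.
import argparse
import json

parser = argparse.ArgumentParser()
parser.add_argument('--config_file', type=str, default='./configs/catcher.json')
parser.add_argument('--config_idx', type=int, default=1)
args = parser.parse_args()

with open(args.config_file) as f:
    sweep_config = json.load(f)  # the full grid-search configuration

# From here, the configuration dict for args.config_idx would be generated
# and the corresponding experiment run (see the sketch above).
print(f"Loaded {args.config_file}, running configuration index {args.config_idx}")
```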
The models are tested for one episode after every `test_per_episodes` training episodes; `test_per_episodes` can be set in the configuration file.
First, we calculate the number of total combinations in a configuration file (e.g. `catcher.json`):
python utils/sweeper.py
The output will be:
Number of total combinations in catcher.json: 90
Then we run through all configuration indexes from `1` to `90`. The simplest way is a bash script:
for index in {1..90}
do
python main.py --config_file ./configs/catcher.json --config_idx $index
done
GNU Parallel is usually a better choice for scheduling a large number of jobs:
parallel --eta --ungroup python main.py --config_file ./configs/catcher.json --config_idx {1} ::: $(seq 1 90)
Any configuration index with the same remainder (modulo the number of total combinations) corresponds to the same configuration dict. So for multiple runs, we just add the number of total combinations to the configuration index. For example, to do 5 runs for configuration index `1`:
for index in 1 91 181 271 361
do
python main.py --config_file ./configs/catcher.json --config_idx $index
done
Or a simpler way:
parallel --eta --ungroup python main.py --config_file ./configs/catcher.json --config_idx {1} ::: $(seq 1 90 450)
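The arithmetic behind `seq 1 90 450` is simply the base index shifted by multiples of the total number of combinations; in Python (a throwaway helper, not part of the repo):

```python
# Hypothetical helper: configuration indexes for repeated runs of one configuration.
def run_indexes(base_idx, total_combinations, runs):
    return [base_idx + r * total_combinations for r in range(runs)]

print(run_indexes(1, 90, 5))  # [1, 91, 181, 271, 361]
```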
To reproduce the results in the Maxmin Q-learning paper, please run experiments with the configuration files provided in the directory `configs` (except for `atari_ram.json`, which is experimental).
To analyze the experimental results, just run:
python analysis.py
Inside `analysis.py`, `unfinished_index` will print the configuration indexes of unfinished jobs, based on the existence of the result file. `memory_info` will print memory usage information and generate a histogram of the memory usage distribution in directory `logs/catcher/0`. Similarly, `time_info` will print time information and generate a histogram of the run-time distribution in directory `logs/catcher/0`. Finally, `analyze` will generate `csv` files that store the training and test results. More functions are available in `utils/plotter.py`.
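Once `analyze` has produced the `csv` files, they can be inspected with ordinary tools; for example (the file path and columns below are hypothetical, check the `logs` directory for the actual names):

```python
# Sketch only: load a generated result csv and look at it.
import pandas as pd

df = pd.read_csv('./logs/catcher/1/result_Test.csv')  # hypothetical path
print(df.columns.tolist())
print(df.tail())
```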
Enjoy!
Please use this BibTeX entry to cite this repo:
@misc{Explorer,
author = {Lan, Qingfeng},
title = {A PyTorch Reinforcement Learning Framework for Exploring New Ideas},
year = {2019},
publisher = {GitHub},
journal = {GitHub Repository},
howpublished = {\url{https://github.com/qlan3/Explorer}}
}