Explorer

Explorer is a PyTorch reinforcement learning framework for exploring new ideas.

Implemented algorithms

To do list

  • Add more pygames to gym-games

The dependency tree of agent classes

Base Agent
  ├── Vanilla DQN
  |     ├── DQN ── DDQN
  |     ├── Maxmin DQN ── Ensemble DQN
  |     └── Averaged DQN
  └── REINFORCE 
        ├── REINFORCE with Baseline
        |     ├── Actor-Critic
        |     └── A2C
        |          ├── PPO
        |          └── RepOnPG (experimental)
        └── SAC ── DDPG
                    ├── TD3
                    └── RepOffPG (experimental)

Requirements

  • Python (>=3.6)
  • PyTorch
  • Gym && Gym Games: You may install only the parts of Gym you need (e.g. classic_control, box2d) with the command pip install 'gym[classic_control,box2d]'.
  • Optional: Gym Atari, Gym Mujoco
  • PyBullet: pip install pybullet
  • Others: Please check requirements.txt.

Experiments

Train && Test

All hyperparameters, including those for grid search, are stored in a configuration file in directory configs. To run an experiment, a configuration index is first used to generate the configuration dict for that specific combination of hyperparameters; we then run the experiment defined by this dict. All results, including log files and model files, are saved in directory logs. Please refer to the code for details.

For example, run the experiment with configuration file catcher.json and configuration index 1:

python main.py --config_file ./configs/catcher.json --config_idx 1

The model is tested for one episode after every test_per_episodes training episodes; test_per_episodes can be set in the configuration file.
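A configuration file is a JSON dict whose list-valued entries define the grid to sweep over. A minimal sketch is below; note that every key and value here except test_per_episodes is a hypothetical placeholder, not the repo's actual schema — check the files in configs for the real keys:

```json
{
  "env": ["Catcher-PLE-v0"],
  "learning_rate": [1e-3, 1e-4, 1e-5],
  "test_per_episodes": [10]
}
```

With this sketch, three lists of sizes 1, 3, and 1 would yield 3 total combinations, one per configuration index.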

Grid Search (Optional)

First, we calculate the number of total combinations in a configuration file (e.g. catcher.json):

python utils/sweeper.py

The output will be:

Number of total combinations in catcher.json: 90

Then we run through all configuration indexes from 1 to 90. The simplest way is a bash script:

for index in {1..90}
do
  python main.py --config_file ./configs/catcher.json --config_idx $index
done

Parallel is usually a better choice for scheduling a large number of jobs:

parallel --eta --ungroup python main.py --config_file ./configs/catcher.json --config_idx {1} ::: $(seq 1 90)

Configuration indexes with the same remainder modulo the number of total combinations map to the same configuration dict. So for multiple runs of the same configuration, we just add the number of total combinations to the configuration index. For example, 5 runs for configuration index 1:

for index in 1 91 181 271 361
do
  python main.py --config_file ./configs/catcher.json --config_idx $index
done

Or a simpler way:

parallel --eta --ungroup python main.py --config_file ./configs/catcher.json --config_idx {1} ::: $(seq 1 90 450)
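The index arithmetic above can be sketched as a small helper (run_indices is illustrative, not part of the repo):

```python
def run_indices(base_idx, total_combinations, n_runs):
    """Configuration indexes that share the same remainder modulo
    total_combinations, and hence map to the same configuration dict."""
    return [base_idx + r * total_combinations for r in range(n_runs)]

print(run_indices(1, 90, 5))  # [1, 91, 181, 271, 361]
```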

To reproduce the results in the Maxmin Q-learning paper, please run experiments with the configuration files provided in directory configs (except for atari_ram.json which is experimental).

Analysis (Optional)

To analyze the experimental results, just run:

python analysis.py

Inside analysis.py, unfinished_index prints the configuration indexes of unfinished jobs, based on the existence of result files. memory_info prints memory usage information and generates a histogram of memory usage in directory logs/catcher/0. Similarly, time_info prints time information and generates a histogram of run times in directory logs/catcher/0. Finally, analyze generates csv files that store the training and test results. More functions are available in utils/plotter.py.
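Once analyze has produced csv files, they can be aggregated with standard tools. A minimal stdlib sketch is below; the column names and values are assumptions for illustration, not the repo's actual csv schema:

```python
import csv
import io
from collections import defaultdict

# Hypothetical test-result csv in the shape analyze might emit
# (column names and rows are made up for illustration).
csv_text = """Agent,Return
DQN,12.0
DQN,14.0
MaxminDQN,18.0
MaxminDQN,20.0
"""

# Group returns by agent and compute the mean per agent.
returns = defaultdict(list)
for row in csv.DictReader(io.StringIO(csv_text)):
    returns[row["Agent"]].append(float(row["Return"]))

means = {agent: sum(v) / len(v) for agent, v in returns.items()}
print(means)  # {'DQN': 13.0, 'MaxminDQN': 19.0}
```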

Enjoy!

Cite

Please use this bibtex to cite this repo:

@misc{Explorer,
  author = {Lan, Qingfeng},
  title = {A PyTorch Reinforcement Learning Framework for Exploring New Ideas},
  year = {2019},
  publisher = {GitHub},
  journal = {GitHub Repository},
  howpublished = {\url{https://github.com/qlan3/Explorer}}
}

Acknowledgements
