This is a concise PyTorch implementation of QMIX and VDN in the StarCraft II environment (SMAC, the StarCraft Multi-Agent Challenge).
You can directly run 'QMIX_SMAC_main.py' in your own IDE.
If you want to use QMIX, set the parameter 'algorithm' = 'QMIX';
if you want to use VDN, set the parameter 'algorithm' = 'VDN'.
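How this parameter is exposed depends on the script; a minimal sketch, assuming it is an argparse flag named '--algorithm' (the exact name may differ in 'QMIX_SMAC_main.py'), could look like this:

```python
# Minimal sketch of switching between QMIX and VDN.
# The flag name '--algorithm' is an assumption; check the argparse setup
# (or the hard-coded parameter) in QMIX_SMAC_main.py for the exact name.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--algorithm", type=str, default="QMIX",
                    choices=["QMIX", "VDN"],
                    help="QMIX uses a state-conditioned monotonic mixing network; "
                         "VDN simply sums the per-agent Q-values.")
args = parser.parse_args()
print(f"Training with {args.algorithm}")
```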
You can set 'env_index' in the code to change the StarCraft II map. Here, we train on three maps:
env_index=0 corresponds to '3m'
env_index=1 corresponds to '8m'
env_index=2 corresponds to '2s3z'
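For reference, a minimal sketch of how 'env_index' could select one of these maps using the standard SMAC API (the variable names 'env_names' and 'env_index' are illustrative and may differ from the ones in this repo):

```python
# A minimal sketch, assuming the standard SMAC API.
from smac.env import StarCraft2Env

env_names = ["3m", "8m", "2s3z"]   # env_index = 0, 1, 2
env_index = 0                       # pick a map by index

env = StarCraft2Env(map_name=env_names[env_index])
env_info = env.get_env_info()
print("map:", env_names[env_index],
      "| n_agents:", env_info["n_agents"],
      "| episode_limit:", env_info["episode_limit"])
env.close()
```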
python==3.7.9
numpy==1.19.4
pytorch==1.12.0
tensorboard==0.6.0
SMAC (StarCraft Multi-Agent Challenge)
[1] Rashid T, Samvelyan M, Schroeder C, et al. QMIX: Monotonic value function factorisation for deep multi-agent reinforcement learning[C]//International Conference on Machine Learning. PMLR, 2018: 4295-4304.
[2] Sunehag P, Lever G, Gruslys A, et al. Value-decomposition networks for cooperative multi-agent learning[J]. arXiv preprint arXiv:1706.05296, 2017.
[3] EPyMARL: https://github.com/uoe-agents/epymarl
[4] https://github.com/starry-sky6688/StarCraft