4.MADDPG_MATD3_MPE

MADDPG and MATD3 in MPE environment

This is a concise PyTorch implementation of MADDPG and MATD3 in the MPE (Multi-Agent Particle-World Environment).
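The key idea shared by MADDPG and MATD3 is centralized training with decentralized execution: each agent's critic is trained on the observations and actions of all agents, while each actor acts from its own observation only. A minimal shape-level sketch (the dimensions below are hypothetical, not the repo's actual network sizes):

```python
# Sketch of the centralized-critic input used in MADDPG-style training.
# The per-agent dimensions here are made up for illustration.
obs_dims = [8, 10, 10]   # hypothetical observation sizes for 3 agents
act_dims = [5, 5, 5]     # hypothetical action sizes for 3 agents

# Each agent's centralized critic takes the concatenation of ALL
# agents' observations and actions as input.
critic_input_dim = sum(obs_dims) + sum(act_dims)

# Each decentralized actor only takes its own observation.
actor_input_dims = obs_dims

print(critic_input_dim)  # 43
```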

How to use my code?

You can directly run 'MADDPG_MATD3_main.py' in your own IDE.
If you want to use MADDPG, set the parameter 'algorithm' = 'MADDPG';
if you want to use MATD3, set the parameter 'algorithm' = 'MATD3'.
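The practical difference between the two settings is in the critic target: MATD3 carries TD3's clipped double-Q trick over to the multi-agent setting, using twin target critics and taking their minimum to reduce overestimation bias (see reference [2]). A simplified scalar sketch, not the repo's actual training code:

```python
def maddpg_target(r, gamma, q_next):
    # MADDPG: a single centralized target critic per agent.
    return r + gamma * q_next

def matd3_target(r, gamma, q1_next, q2_next):
    # MATD3: twin centralized target critics; taking the minimum
    # gives the clipped double-Q target of TD3, reducing
    # overestimation bias.
    return r + gamma * min(q1_next, q2_next)

# With q1_next=2.0, q2_next=1.5, the MATD3 target uses the smaller value:
print(matd3_target(1.0, 0.95, 2.0, 1.5))  # 2.425
```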

Requirements

python==3.7.9
numpy==1.19.4
pytorch==1.12.0
tensorboard==0.6.0
gym==0.10.5
Multi-Agent Particle-World Environment(MPE)

Training environments

You can set 'env_index' in the code to change the MPE environment.
env_index=0 represents 'simple_speaker_listener'
env_index=1 represents 'simple_spread'
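A minimal sketch of how the index could map to a scenario name; the exact lookup in the repo's code may differ, and the environment itself is built by the MPE package:

```python
# Hypothetical mapping from env_index to the MPE scenario name used
# when constructing the environment.
ENV_NAMES = {
    0: "simple_speaker_listener",
    1: "simple_spread",
}

def scenario_name(env_index):
    """Return the MPE scenario name for a given env_index."""
    return ENV_NAMES[env_index]

print(scenario_name(0))  # simple_speaker_listener
```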

Training result

(Figure: training reward curves.)

Reference

[1] Lowe R, Wu Y I, Tamar A, et al. Multi-agent actor-critic for mixed cooperative-competitive environments[J]. Advances in neural information processing systems, 2017, 30.
[2] Ackermann J, Gabler V, Osa T, et al. Reducing overestimation bias in multi-agent domains using double centralized critics[J]. arXiv preprint arXiv:1910.01465, 2019.