Remove MAgent content (Farama-Foundation#823)
dyodx authored Oct 13, 2022
1 parent 2e5a84d commit bff2b09
Showing 36 changed files with 7 additions and 2,315 deletions.
1 change: 0 additions & 1 deletion .pre-commit-config.yaml
@@ -9,7 +9,6 @@ repos:
     hooks:
       - id: codespell
         args:
-          - --ignore-words-list=magent
           - --skip=*.css,*.js,*.map,*.scss,*svg
   - repo: https://gitlab.com/PyCQA/flake8
     rev: 5.0.4
1 change: 0 additions & 1 deletion README.md
@@ -15,7 +15,6 @@ PettingZoo includes the following families of environments:
 * [Atari](https://pettingzoo.farama.org/environments/atari/): Multi-player Atari 2600 games (cooperative, competitive and mixed sum)
 * [Butterfly](https://pettingzoo.farama.org/environments/butterfly): Cooperative graphical games developed by us, requiring a high degree of coordination
 * [Classic](https://pettingzoo.farama.org/environments/classic): Classical games including card games, board games, etc.
-* [MAgent](https://pettingzoo.farama.org/environments/magent): Configurable environments with massive numbers of particle agents, originally from https://github.com/geek-ai/MAgent
 * [MPE](https://pettingzoo.farama.org/environments/mpe): A set of simple nongraphical communication tasks, originally from https://github.com/openai/multiagent-particle-envs
 * [SISL](https://pettingzoo.farama.org/environments/sisl): 3 cooperative environments, originally from https://github.com/sisl/MADRL

8 changes: 0 additions & 8 deletions docs/_scripts/gen_envs_display.py
@@ -44,14 +44,6 @@
         "texas_holdem",
         "tictactoe",
     ],
-    "magent": [
-        "adversarial_pursuit",
-        "battle",
-        "battlefield",
-        "combined_arms",
-        "gather",
-        "tiger_deer",
-    ],
     "mpe": [
         "simple_adversary",
         "simple_crypto",
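The hunk above deletes the "magent" entry from a mapping of environment family to environment names. A minimal sketch of what such a mapping could look like after this commit (the variable name and the subset of entries shown here are illustrative assumptions, not the script's actual contents):

```python
# Hypothetical reconstruction of the family -> environment-name mapping
# edited in docs/_scripts/gen_envs_display.py; only a subset of entries
# is shown, and the variable name is an assumption.
envs_by_family = {
    "classic": ["texas_holdem", "tictactoe"],
    "mpe": ["simple_adversary", "simple_crypto"],
}

# After this commit, "magent" is no longer a key.
assert "magent" not in envs_by_family

for family, names in envs_by_family.items():
    for name in names:
        print(f"{family}/{name}")
```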
2 changes: 1 addition & 1 deletion docs/api/supersuit_wrappers.md
@@ -104,7 +104,7 @@ Supersuit includes the following wrappers:
 ```{eval-rst}
 .. py:function:: agent_indicator_v0(env, type_only=False)
-   Adds an indicator of the agent ID to the observation, only supports discrete and 1D, 2D, and 3D box. For 1d spaces, the agent ID is converted to a 1-hot vector and appended to the observation (increasing the size of the observation space as necessary). 2d and 3d spaces are treated as images (with channels last) and the ID is converted to *n* additional channels with the channel that represents the ID as all 1s and the other channel as all 0s (a sort of one hot encoding). This allows MADRL methods like parameter sharing to learn policies for heterogeneous agents since the policy can tell what agent it's acting on. Set the `type_only` parameter to parse the name of the agent as `<type>_<n>` and have the appended 1-hot vector only identify the type, rather than the specific agent name. This would, for example give all agents on the red team in the [MAgent battle environment](https://pettingzoo.farama.org/environments/magent/battle) the same agent indicator. This is useful for games where there are many agents in an environment but few types of agents. Agent indication for MADRL was first introduced in *Cooperative Multi-Agent Control Using Deep Reinforcement Learning.*
+   Adds an indicator of the agent ID to the observation, only supports discrete and 1D, 2D, and 3D box. For 1d spaces, the agent ID is converted to a 1-hot vector and appended to the observation (increasing the size of the observation space as necessary). 2d and 3d spaces are treated as images (with channels last) and the ID is converted to *n* additional channels with the channel that represents the ID as all 1s and the other channel as all 0s (a sort of one hot encoding). This allows MADRL methods like parameter sharing to learn policies for heterogeneous agents since the policy can tell what agent it's acting on. Set the `type_only` parameter to parse the name of the agent as `<type>_<n>` and have the appended 1-hot vector only identify the type, rather than the specific agent name. This is useful for games where there are many agents in an environment but few types of agents. Agent indication for MADRL was first introduced in *Cooperative Multi-Agent Control Using Deep Reinforcement Learning.*
 .. py:function:: black_death_v2(env)
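The wrapper description in the hunk above can be made concrete for the 1D case: a one-hot agent ID appended to a flat observation vector. The helper below is a minimal sketch of that idea, not SuperSuit's actual implementation; its name and signature are assumptions.

```python
import numpy as np

def append_agent_indicator(obs, agent_index, num_agents):
    # Build a one-hot vector identifying the agent and append it to the
    # 1D observation, growing the observation by `num_agents` entries.
    one_hot = np.zeros(num_agents, dtype=obs.dtype)
    one_hot[agent_index] = 1.0
    return np.concatenate([obs, one_hot])

obs = np.array([0.5, -0.2, 1.0], dtype=np.float32)
augmented = append_agent_indicator(obs, agent_index=1, num_agents=3)
print(augmented)  # original 3 values followed by the one-hot [0, 1, 0]
```

With `type_only=True`, the same idea would one-hot encode only the parsed agent type rather than the individual agent, so the appended vector's length equals the number of agent types instead of the number of agents.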
71 changes: 0 additions & 71 deletions docs/environments/magent.md

This file was deleted.

82 changes: 0 additions & 82 deletions docs/environments/magent/list.html

This file was deleted.

Binary file removed docs/environments/magent/magent_battle.gif
Binary file removed docs/environments/magent/magent_battlefield.gif
Binary file removed docs/environments/magent/magent_combined_arms.gif
Binary file removed docs/environments/magent/magent_gather.gif
Binary file removed docs/environments/magent/magent_tiger_deer.gif
1 change: 0 additions & 1 deletion docs/index.md
@@ -43,7 +43,6 @@ api/utils
 environments/atari
 environments/butterfly
 environments/classic
-environments/magent
 environments/mpe
 environments/sisl
 environments/third_party_envs
5 changes: 0 additions & 5 deletions pettingzoo/magent/__init__.py

This file was deleted.
