The Pokemon Showdown Python environment


A Python interface to create battling Pokemon agents. poke-env offers an easy-to-use interface for creating rule-based agents or training Reinforcement Learning bots to battle on Pokemon Showdown.

A simple agent in action

Getting started

Agents are instances of Python classes inheriting from Player. Here is what your first agent could look like:

from poke_env.player import Player

class YourFirstAgent(Player):
    def choose_move(self, battle):
        for move in battle.available_moves:
            if move.base_power > 90:
                # A powerful move! Let's use it
                return self.create_order(move)

        # No available move? Let's switch then!
        for switch in battle.available_switches:
            if switch.current_hp_fraction > battle.active_pokemon.current_hp_fraction:
                # This other pokemon has more HP left... Let's switch it in?
                return self.create_order(switch)

        # Not sure what to do?
        return self.choose_random_move(battle)
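
Once you have an agent, you can have it battle a baseline opponent. The snippet below is a minimal sketch, assuming a recent poke-env release and a local Showdown server with default settings: RandomPlayer and battle_against come from poke-env, while the battle format string is an assumption and should match what your server supports.

import asyncio

from poke_env.player import RandomPlayer

async def main():
    # Both players assume a local Showdown server on the default port.
    # The battle format string may need to change for your poke-env/server version.
    me = YourFirstAgent(battle_format="gen9randombattle")
    opponent = RandomPlayer(battle_format="gen9randombattle")

    # Play a short series of battles and report the result.
    await me.battle_against(opponent, n_battles=10)
    print(f"Won {me.n_won_battles} / 10 battles")

if __name__ == "__main__":
    asyncio.run(main())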

To get started, take a look at our documentation!

Documentation and examples

Documentation, detailed examples and starting code can be found on readthedocs.

Installation

This project requires Python >= 3.9 and a Pokemon Showdown server.

pip install poke-env

You can use Smogon's server to try out your agents against humans, but running your own development server is strongly recommended. In particular, it is recommended to start the local server with the --no-security flag, which turns off most rate limiting and throttling. Please refer to the docs for detailed setup instructions.

git clone https://github.com/smogon/pokemon-showdown.git
cd pokemon-showdown
npm install
cp config/config-example.js config/config.js
node pokemon-showdown start --no-security
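
By default, poke-env players connect to a Showdown server running locally, like the one started above. The sketch below shows how to point an agent at the official Showdown server instead; the helper names (AccountConfiguration, ShowdownServerConfiguration) match recent poke-env releases but have changed across versions, so treat them as assumptions and check the documentation for your version.

from poke_env import AccountConfiguration, ShowdownServerConfiguration

# Playing online requires a registered account; with no account or server
# configuration, players default to the local development server instead.
online_agent = YourFirstAgent(
    account_configuration=AccountConfiguration("your-username", "your-password"),
    server_configuration=ShowdownServerConfiguration,
)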

Development version

You can also clone the latest master version with:

git clone https://github.com/hsahovic/poke-env.git

Dependencies and development dependencies can then be installed with:

pip install -r requirements.txt
pip install -r requirements-dev.txt

Acknowledgements

This project is a follow-up of a group project from an artificial intelligence class at Ecole Polytechnique.

You can find the original repository here. It is partially inspired by the showdown-battle-bot project. Of course, none of these would have been possible without Pokemon Showdown.

Team data comes from Smogon forums' RMT section.

Data

Data files are adapted versions of the js data files of Pokemon Showdown.

License

poke-env is released under the MIT license.

Citing poke-env

@misc{poke_env,
    author       = {Haris Sahovic},
    title        = {Poke-env: pokemon AI in python},
    url          = {https://github.com/hsahovic/poke-env}
}
