
Merge pull request #10 from robinhenry/update-requirements
Update requirements and CI checks
robinhenry authored Nov 27, 2022
2 parents 231097e + 337b2d7 commit f054251
Showing 49 changed files with 2,191 additions and 1,281 deletions.
33 changes: 33 additions & 0 deletions .github/workflows/ci_checks.yml
Original file line number Diff line number Diff line change
@@ -0,0 +1,33 @@
name: Tests pip
on: [push]

jobs:
  checks:
    strategy:
      max-parallel: 6
      matrix:
        python-version: ["3.8", "3.9", "3.10"]
        poetry-version: ["1.2"]
        os: [ubuntu-latest]
    runs-on: ${{ matrix.os }}
    steps:
      - name: Checkout repository
        uses: actions/checkout@v3
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v4
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install poetry ${{ matrix.poetry-version }}
        uses: abatilo/actions-poetry@v2
        with:
          poetry-version: ${{ matrix.poetry-version }}
      - name: Install dependencies
        run: poetry install
      - name: Run black
        run: poetry run black --check .
      - name: Test with pytest
        run: poetry run pytest --cov --cov-report=xml --cov-report=html
      - name: Upload coverage
        uses: codecov/codecov-action@v3
        with:
          token: ${{ secrets.CODECOV_TOKEN }}
36 changes: 0 additions & 36 deletions .github/workflows/ci_conda.yml

This file was deleted.

31 changes: 0 additions & 31 deletions .github/workflows/ci_pip.yml

This file was deleted.

32 changes: 32 additions & 0 deletions .github/workflows/ci_release.yml
@@ -0,0 +1,32 @@
name: Release
on:
  release:
    types:
      - created

jobs:
  publish:
    strategy:
      fail-fast: false
      matrix:
        python-version: ["3.10"]
        poetry-version: ["1.2"]
        os: [ubuntu-latest]
    runs-on: ${{ matrix.os }}
    steps:
      - name: Checkout repository
        uses: actions/checkout@v3
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v4
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install poetry ${{ matrix.poetry-version }}
        uses: abatilo/[email protected]
        with:
          poetry-version: ${{ matrix.poetry-version }}
      - name: Publish to pypi
        env:
          PYPI_TOKEN: ${{ secrets.PYPI_TOKEN }}
        run: |
          poetry config pypi-token.pypi $PYPI_TOKEN
          poetry publish --build
19 changes: 9 additions & 10 deletions README.md
@@ -9,15 +9,15 @@
`gym-anm` is a framework for designing reinforcement learning (RL) environments that model Active Network
Management (ANM) tasks in electricity distribution networks. It is built on top of the
[OpenAI Gym](https://github.com/openai/gym) toolkit.

The `gym-anm` framework was designed with one goal in mind: **bridge the gap between research in RL and in
the management of power systems**. We attempt to do this by providing RL researchers with an easy-to-work-with
library of environments that model decision-making tasks in power grids.

**Papers:**
**Papers:**
* [Gym-ANM: Reinforcement Learning Environments for Active Network Management Tasks in Electricity Distribution Systems](https://doi.org/10.1016/j.egyai.2021.100092)
* [Gym-ANM: Open-source software to leverage reinforcement learning for power system management in research and education](https://doi.org/10.1016/j.simpa.2021.100092)

## Key features
* Very little background in electricity systems modelling is required. This makes `gym-anm` an ideal starting point
for RL students and researchers looking to enter the field.
@@ -26,14 +26,14 @@ library of environments that model decision-making tasks in power grids.
* The flexibility of `gym-anm`, with its different customizable components, makes it a suitable framework
to model a wide range of ANM tasks, from simple ones that can be used for educational purposes, to complex ones
designed to conduct advanced research.

## Documentation
Documentation is provided online at [https://gym-anm.readthedocs.io/en/latest/](https://gym-anm.readthedocs.io/en/latest/).

## Installation

### Requirements
`gym-anm` requires Python 3.7+ and can run on Linux, MaxOS, and Windows.
`gym-anm` requires Python 3.8+ and can run on Linux, macOS, and Windows.

We recommend installing `gym-anm` in a Python environment (e.g., [virtualenv](https://virtualenv.pypa.io/en/latest/)
or [conda](https://conda.io/en/latest/#)).
@@ -63,13 +63,13 @@ import time
def run():
    env = gym.make('gym_anm:ANM6Easy-v0')
    o = env.reset()
    for i in range(100):
        a = env.action_space.sample()
        o, r, done, info = env.step(a)
        env.render()
        time.sleep(0.5)  # otherwise the rendering is too fast for the human eye.
    env.close()
if __name__ == '__main__':
@@ -82,8 +82,8 @@ Additional example scripts can be found in [examples/](examples).

## Testing the installation
All unit tests in `gym-anm` can be run from the project root directory with:
```
python -m tests
```
python -m pytest tests
```

## Contributing
@@ -120,5 +120,4 @@ All publications derived from the use of `gym-anm` should cite the following two
`gym-anm` is currently maintained by [Robin Henry](https://www.robinxhenry.com/).

## License

This project is licensed under the MIT License - see the [LICENSE.md](LICENSE.md) file for details.
21 changes: 11 additions & 10 deletions docs/source/conf.py
@@ -12,17 +12,18 @@

import os
import sys
sys.path.insert(0, os.path.abspath('../..'))

sys.path.insert(0, os.path.abspath("../.."))


# -- Project information -----------------------------------------------------

project = 'gym-anm'
copyright = '2020, Robin Henry'
author = 'Robin Henry'
project = "gym-anm"
copyright = "2020, Robin Henry"
author = "Robin Henry"

# The full version, including alpha/beta/rc tags
release = 'v1'
release = "v1"


# -- General configuration ---------------------------------------------------
@@ -37,32 +38,32 @@
"sphinx.ext.ifconfig",
"sphinx.ext.viewcode",
"sphinx.ext.napoleon",
"sphinx_rtd_theme"
"sphinx_rtd_theme",
]
autosummary_generate = True # Turn on sphinx.ext.autosummary

# Autodoc settings
autodoc_member_order = 'groupwise'
autodoc_member_order = "groupwise"

# Napoleon settings
napoleon_google_docstring = False
napoleon_numpy_docstring = True

# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
templates_path = ["_templates"]

# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path.
exclude_patterns = ['topics/archive']
exclude_patterns = ["topics/archive"]


# -- Options for HTML output -------------------------------------------------

# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = 'sphinx_rtd_theme'
html_theme = "sphinx_rtd_theme"

# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
33 changes: 17 additions & 16 deletions examples/custom_anm6.py
@@ -16,38 +16,38 @@
import numpy as np
from gym_anm.envs import ANM6


class CustomANM6Environment(ANM6):
"""A gym-anm task built on top of the ANM6 grid."""

def __init__(self):
observation = 'state' # fully observable environment
K = 1 # 1 auxiliary variable
delta_t = 0.25 # 15min intervals
gamma = 0.9 # discount factor
lamb = 100 # penalty weighting hyperparameter
observation = "state" # fully observable environment
K = 1 # 1 auxiliary variable
delta_t = 0.25 # 15min intervals
gamma = 0.9 # discount factor
lamb = 100 # penalty weighting hyperparameter
aux_bounds = np.array([[0, 10]]) # bounds on auxiliary variable
costs_clipping = (1, 100) # reward clipping parameters
seed = 1 # random seed
costs_clipping = (1, 100) # reward clipping parameters
seed = 1 # random seed

super().__init__(observation, K, delta_t, gamma, lamb,
aux_bounds, costs_clipping, seed)
super().__init__(observation, K, delta_t, gamma, lamb, aux_bounds, costs_clipping, seed)

def init_state(self):
"""Return a state vector with random values in [0, 1]."""
n_dev = self.simulator.N_device # number of devices
n_des = self.simulator.N_des # number of DES units
n_gen = self.simulator.N_non_slack_gen # number of non-slack generators
n_dev = self.simulator.N_device # number of devices
n_des = self.simulator.N_des # number of DES units
n_gen = self.simulator.N_non_slack_gen # number of non-slack generators
s = np.random.rand(2 * n_dev + n_des + n_gen) # random state vector

# Let the auxiliary variable be a time of day index where increments
# represent `self.delta_t` time durations.
# Initial time: 00:00.
aux = 0

return np.hstack((s, aux)) # initial state vector s0
return np.hstack((s, aux)) # initial state vector s0

def next_vars(self, s_t):
""" Generate the next stochastic variables and auxiliary variables."""
"""Generate the next stochastic variables and auxiliary variables."""
next_var = []

# Random demand for residential area in [-10, 0] MW.
Expand All @@ -71,11 +71,12 @@ def next_vars(self, s_t):

return np.array(next_var)

if __name__ == '__main__':

if __name__ == "__main__":
env = CustomANM6Environment()
env.reset()

for t in range(10):
a = env.action_space.sample()
o, r, done, _ = env.step(a)
print(f't={t}, r_t={r:.3}')
print(f"t={t}, r_t={r:.3}")
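The comments in `init_state` above define the auxiliary variable as a time-of-day index that advances in `self.delta_t`-sized increments, starting at 00:00. A minimal sketch of that convention (the helper `aux_to_time` is hypothetical, not part of `gym-anm`):

```python
# Hypothetical helper illustrating the auxiliary-variable convention above:
# aux counts delta_t-sized steps since 00:00, with delta_t expressed in
# hours (0.25 -> 15-minute intervals).
def aux_to_time(aux, delta_t=0.25):
    minutes = int(aux * delta_t * 60) % (24 * 60)  # wrap around midnight
    return divmod(minutes, 60)  # (hour, minute)

print(aux_to_time(0))  # (0, 0), i.e. 00:00
print(aux_to_time(5))  # five 15-minute steps after midnight
```

With `delta_t = 0.25`, a full day is 96 steps, after which the index wraps back to midnight.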
13 changes: 7 additions & 6 deletions examples/mpc_constant.py
@@ -11,19 +11,20 @@
import gym
from gym_anm import MPCAgentConstant


def run():
env = gym.make('ANM6Easy-v0')
env = gym.make("ANM6Easy-v0")
o = env.reset()

# Initialize the MPC policy.
agent = MPCAgentConstant(env.simulator, env.action_space, env.gamma,
safety_margin=0.96, planning_steps=10)
agent = MPCAgentConstant(env.simulator, env.action_space, env.gamma, safety_margin=0.96, planning_steps=10)

# Run the policy.
for t in range(100):
a = agent.act(env)
obs, r, done, _ = env.step(a)
print(f't={t}, r_t={r:.3}')
print(f"t={t}, r_t={r:.3}")


if __name__ == '__main__':
run()
if __name__ == "__main__":
run()
13 changes: 7 additions & 6 deletions examples/mpc_perfect.py
@@ -10,19 +10,20 @@
import gym
from gym_anm import MPCAgentPerfect


def run():
env = gym.make('ANM6Easy-v0')
env = gym.make("ANM6Easy-v0")
o = env.reset()

# Initialize the MPC policy.
agent = MPCAgentPerfect(env.simulator, env.action_space, env.gamma,
safety_margin=0.96, planning_steps=10)
agent = MPCAgentPerfect(env.simulator, env.action_space, env.gamma, safety_margin=0.96, planning_steps=10)

# Run the policy.
for t in range(100):
a = agent.act(env)
obs, r, done, _ = env.step(a)
print(f't={t}, r_t={r:.3}')
print(f"t={t}, r_t={r:.3}")


if __name__ == '__main__':
run()
if __name__ == "__main__":
run()
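Both MPC examples pass `env.gamma` and a `planning_steps` horizon to the agent. As a sketch of the general principle only (an assumed illustration, not `gym-anm`'s internal planning code), a discount factor weights a fixed horizon of rewards like this:

```python
# Discounted return over a fixed planning horizon: the reward at step t is
# weighted by gamma**t, so near-term outcomes dominate the objective that
# a planning agent optimizes over its horizon.
def discounted_return(rewards, gamma):
    return sum(gamma**t * r for t, r in enumerate(rewards))

print(discounted_return([1.0, 1.0, 1.0], 0.9))
```

Smaller values of `gamma` make the agent more short-sighted over its `planning_steps`-step lookahead.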
