Releases: praveen-palanisamy/macad-gym
MACAD-Gym v0.1.5
MACAD-Gym is a training platform for Multi-Agent Connected Autonomous
Driving (MACAD) built on top of the CARLA Autonomous Driving simulator.
MACAD-Gym provides OpenAI Gym-compatible learning environments for various
driving scenarios for training Deep RL algorithms in homogeneous/heterogeneous,
communicating/non-communicating and other multi-agent settings. New environments and scenarios
can be easily added using a simple, JSON-like configuration.
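To give a flavor of what that looks like (a purely illustrative sketch, not taken from these release notes; every key name below is hypothetical rather than MACAD-Gym's actual schema), an environment definition is essentially a nested dictionary describing the scenario, the simulator settings, and the per-actor setup:

```python
# Illustrative sketch only: key names ("scenario", "simulator", "actors", ...)
# are hypothetical. Consult the MACAD-Gym README/docs for the real schema.
my_intersection_env_config = {
    "scenario": {"town": "Town03", "max_steps": 500},
    "simulator": {"render": True, "sync_mode": True},
    "actors": {
        "car1": {"type": "vehicle", "camera": "rgb", "autopilot": False},
        "car2": {"type": "vehicle", "camera": "rgb", "autopilot": False},
        "car3": {"type": "vehicle", "camera": "rgb", "autopilot": False},
    },
}
```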
Quick Start
Install MACAD-Gym using `pip install macad-gym`.
If you have CARLA_SERVER set up, you can get going using the following 3 lines of code. If not, follow the Getting started steps.
Training RL Agents
```python
import gym
import macad_gym
env = gym.make("HomoNcomIndePOIntrxMASS3CTWN3-v0")
# Your agent code here
```
Any RL library that supports the OpenAI-Gym API can be used to train agents in MACAD-Gym. The MACAD-Agents repository provides sample agents as a starter.
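For orientation, here is a minimal sketch (not part of the release notes) of a rollout loop. It assumes the multi-agent environment exchanges per-actor dictionaries with the agent, that the done dict carries an "__all__" flag, and that the action space is a per-actor Dict space; see the agent interface example in the README for the exact contract.

```python
import gym
import macad_gym

env = gym.make("HomoNcomIndePOIntrxMASS3CTWN3-v0")

# Assumptions: reset()/step() return per-actor dicts, the done dict has an
# "__all__" flag, and env.action_space is indexable by actor ID.
obs = env.reset()
done = {"__all__": False}
while not done["__all__"]:
    # Random per-actor actions as a stand-in for a trained policy.
    actions = {actor_id: env.action_space[actor_id].sample() for actor_id in obs}
    obs, reward, done, info = env.step(actions)
env.close()  # cleanly shuts down the simulation server process
```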
Visualizing the Environment
To test-drive the environments, you can run the environment script directly. For example, to test-drive the HomoNcomIndePOIntrxMASS3CTWN3-v0
environment, run:
```bash
python -m macad_gym.envs.homo.ncom.inde.po.intrx.ma.stop_sign_3c_town03
```
See full README for more information.
Summary of updates in v0.1.5
- Update readme, add citation.cff @praveen-palanisamy (#75)
- Fix multi view render @praveen-palanisamy (#74)
- Npc traffic spawning feature @johnMinelli (#70)
- Add support for Windows platform and some bug fixes @Morphlng (#65)
MACAD-Gym v0.1.4
MACAD-Gym is a training platform for Multi-Agent Connected Autonomous
Driving (MACAD) built on top of the CARLA Autonomous Driving simulator.
MACAD-Gym provides OpenAI Gym-compatible learning environments for various
driving scenarios for training Deep RL algorithms in homogeneous/heterogeneous,
communicating/non-communicating and other multi-agent settings. New environments and scenarios
can be easily added using a simple, JSON-like configuration.
Quick Start
Install MACAD-Gym using `pip install macad-gym`.
If you have CARLA installed, you can get going using the following 3 lines of code. If not, follow the Getting started steps.
```python
import gym
import macad_gym
env = gym.make("HomoNcomIndePOIntrxMASS3CTWN3-v0")
# Your agent code here
```
Any RL library that supports the OpenAI-Gym API can be used to train agents in MACAD-Gym. The MACAD-Agents repository provides sample agents as a starter.
See full README for more information.
Summary of updates in v0.1.4
- Update Pedestrian -> Walker in actor type; Yaml -> yml
- Update version number in docs conf
- Add env.close() to properly cleanup sim server proc @praveen-palanisamy (#25)
- Improve code maintainability @praveen-palanisamy (#18)
- Added py pkg badges to README @praveen-palanisamy (#17)
MACAD-Gym v0.1.3
MACAD-Gym is a training platform for Multi-Agent Connected Autonomous
Driving (MACAD) built on top of the CARLA Autonomous Driving simulator.
MACAD-Gym provides OpenAI Gym-compatible learning environments for various
driving scenarios for training Deep RL algorithms in homogeneous/heterogeneous,
communicating/non-communicating and other multi-agent settings. New environments and scenarios
can be easily added using a simple, JSON-like configuration.
Quick Start
Install MACAD-Gym using `pip install macad-gym`.
If you have CARLA installed, you can get going using the following 3 lines of code. If not, follow the Getting started steps.
```python
import gym
import macad_gym
env = gym.make("HomoNcomIndePOIntrxMASS3CTWN3-v0")
# Your agent code here
```
Any RL library that supports the OpenAI-Gym API can be used to train agents in MACAD-Gym. The MACAD-Agents repository provides sample agents as a starter.
See full README for more information.
Summary of updates in v0.1.3
- Updated python package version @praveen-palanisamy (#16)
- Added github action for pub to PyPI on creation of a release
- Fixed release-drafter config: yaml value should be str @praveen-palanisamy (#12)
- Added no-response bot @praveen-palanisamy (#11)
- Added release-drafter @praveen-palanisamy (#10)
- Added example for a basic agent script @praveen-palanisamy (#9)
- Added fixed_delta_seconds when running in synchronous mode to allow for proper physics sub-stepping in sync @praveen-palanisamy (#8)
- Fixed typo and dict access in Agent interface example
- Updated README
- Added NeurIPS paper info to README
MACAD-Gym v0.1.2
MACAD-Gym is a training platform for Multi-Agent Connected Autonomous
Driving (MACAD) built on top of the CARLA Autonomous Driving simulator.
MACAD-Gym provides OpenAI Gym-compatible learning environments for various
driving scenarios for training Deep RL algorithms in homogeneous/heterogeneous,
communicating/non-communicating and other multi-agent settings. New environments and scenarios
can be easily added using a simple, JSON-like configuration.
Quick Start
Install MACAD-Gym using `pip install macad-gym`.
If you have CARLA installed, you can get going using the following 3 lines of code. If not, follow the Getting started steps.
```python
import gym
import macad_gym
env = gym.make("HomoNcomIndePOIntrxMASS3CTWN3-v0")
# Your agent code here
```
Any RL library that supports the OpenAI-Gym API can be used to train agents in MACAD-Gym. The MACAD-Agents repository provides sample agents as a starter.
See full README for more information.