Using Wrapper Class for Custom GYM Env #146
Thanks, @luisenp! I am trying to use a custom env built with gym instead of mbrl.env.cartpole_continuous. My action space is Box(1,) and my observation space is Box(600, 800, 3). I am running into many errors trying to use my custom env. How can I use it with MBRL?
For that type of observation space, following the PlaNet example would be most appropriate. Do you have any code samples I can take a look at?
Thanks for your prompt reply! I am trying to copy mbrl.env.cartpole_continuous for the ChopperScape game.
It would be much better if you submitted a pull request containing your script. We don't need to merge it, but it will make review and discussion much easier.
Here's the zip file attached. I am really new to model-based RL and barely understand the code, so please pardon my ignorance.
Hi @MishraIN, as I mentioned above, the proper mechanism to do this would be to start a pull request from your fork of the repository. Without one, I'm afraid I won't be able to help you. |
I have a custom OpenAI Gym env and I am trying to use the mbrl wrapper, but I am getting the error `name 'model_env_args' is not defined`. I am trying to follow the example here: https://arxiv.org/pdf/2104.10159.pdf. Here's my code:

```python
import gym
import mbrl.models as models
import numpy as np

net = models.GaussianMLP(in_size=14, out_size=12, device="cpu")
wrapper = models.OneDTransitionRewardModel(net, target_is_delta=True, learned_rewards=True)
model_env = models.ModelEnv(wrapper, *model_env_args, term_fn=hopper)
```