OpenAI Gym Xarm7 robot environment implemented with PyBullet.
Install from source:

```bash
git clone https://github.com/jc-bao/gym-xarm.git
cd gym-xarm
pip install -e .
```
Basic usage:

```python
import gym
import gym_xarm

env = gym.make('XarmReach-v0')
env.reset()
for _ in range(env._max_episode_steps):
    env.render()
    obs, reward, done, info = env.step(env.action_space.sample())  # take a random action
env.close()
```
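These are goal-conditioned, Fetch-style tasks. Assuming the environments follow the standard `gym.GoalEnv` interface (a dict observation with `observation`, `achieved_goal`, and `desired_goal` keys, plus a `compute_reward` method), the sketch below shows how to inspect the goal structure; this is an illustration, and the exact keys and shapes may differ per environment.

```python
import gym
import gym_xarm

env = gym.make('XarmReach-v0')
obs = env.reset()

# Assumed Fetch/GoalEnv-style dict observation (verify for your environment)
print(obs['observation'].shape)  # robot (and object) state
print(obs['achieved_goal'])      # current end-effector/object position
print(obs['desired_goal'])       # target position sampled at reset

# Recompute the reward for an arbitrary goal, as used by HER-style replay
reward = env.compute_reward(obs['achieved_goal'], obs['desired_goal'], {})
print(reward)

env.close()
```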
To watch the robot take random actions in a test environment, run:

```bash
python test.py
```
To train an agent, for example A2C on the dense-reward handover task:

```bash
python train.py --algo a2c --env XarmPDHandoverDenseEnvNoGoal-v1
```
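If you prefer to train outside the provided script, the sketch below uses Stable-Baselines3's A2C. This is an assumption for illustration only (`train.py` may use a different framework), and the hyperparameters are placeholders.

```python
import gym
import gym_xarm
from stable_baselines3 import A2C

# Dense-reward, non-goal variant taken from the command above; it should work
# with standard (non-HER) algorithms. Swap in any other registered env ID.
# If the observation space is a dict, use 'MultiInputPolicy' instead.
env = gym.make('XarmPDHandoverDenseEnvNoGoal-v1')

model = A2C('MlpPolicy', env, verbose=1)
model.learn(total_timesteps=100_000)   # placeholder training budget
model.save('a2c_xarm_handover')

# Quick rollout with the trained policy
obs = env.reset()
for _ in range(env._max_episode_steps):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
env.close()
```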
Supported environments:

- XarmReach-v0
- XarmPickAndPlace-v0
- XarmPDPickAndPlace-v0
- XarmPDStackTower-v0
- XarmPDRearrange-v0
- XarmPDPushWithDoor-v0
- XarmPDOpenBoxAndPlace-v0
- XarmPDHandover-v0
⚠️ Note: `XarmPickAndPlace-v0` uses the Xarm gripper, which cannot be properly constrained in PyBullet; this results in severe slippage or distortion of the gripper shape. Both `p.createConstraint()` and `p.setJointMotorControl2()` have been tried, and neither helps, even with an extremely large force or friction coefficient. I therefore recommend the Panda-gripper version (registered as `XarmPDPickAndPlace-v0`), which needs fewer constraints and performs better on fetch tasks.
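For context, the attempted workarounds look roughly like the sketch below. It is illustrative only, not the code in this repo: it uses PyBullet's bundled Panda model as a stand-in (the Xarm gripper URDF and its finger joint indices would differ), and the force values are placeholders.

```python
import pybullet as p
import pybullet_data

p.connect(p.DIRECT)
p.setAdditionalSearchPath(pybullet_data.getDataPath())

# Stand-in model: the bundled Panda has two finger joints (indices 9 and 10).
# Substitute the Xarm gripper URDF and its own finger joints in practice.
robot_id = p.loadURDF("franka_panda/panda.urdf", useFixedBase=True)
LEFT_FINGER, RIGHT_FINGER = 9, 10

# Attempt 1: a gear constraint so the two fingers mirror each other.
mimic = p.createConstraint(
    parentBodyUniqueId=robot_id, parentLinkIndex=LEFT_FINGER,
    childBodyUniqueId=robot_id, childLinkIndex=RIGHT_FINGER,
    jointType=p.JOINT_GEAR, jointAxis=[1, 0, 0],
    parentFramePosition=[0, 0, 0], childFramePosition=[0, 0, 0])
p.changeConstraint(mimic, gearRatio=-1, maxForce=10000, erp=1)

# Attempt 2: drive both finger joints directly with a very large force.
for joint in (LEFT_FINGER, RIGHT_FINGER):
    p.setJointMotorControl2(robot_id, joint, p.POSITION_CONTROL,
                            targetPosition=0.02, force=10000)

p.stepSimulation()
p.disconnect()
```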