Code for the paper:
Learning Bipedal Walking On Planned Footsteps For Humanoid Robots
Rohan P. Singh, Mehdi Benallegue, Mitsuharu Morisawa, Rafael Cisneros, Fumio Kanehiro
A rough outline of the repository that might be useful for adding your own robot:
```
LearningHumanoidWalking/
├── envs/     <-- Action and observation spaces, PD gains, simulation step, control decimation, init, ...
├── tasks/    <-- Reward function, termination conditions, and more...
├── rl/       <-- Code for PPO, actor/critic networks, observation normalization...
├── models/   <-- MuJoCo model files: XMLs/meshes/textures
├── trained/  <-- Contains a pretrained model for JVRC
└── scripts/  <-- Utility scripts, etc.
```
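The "control decimation" mentioned for `envs/` means the policy acts at a lower rate than the physics: each policy action is held as a PD position target while the simulator is stepped several times. Below is a minimal illustrative sketch of that pattern; it is not the repository's actual code, and the gains, timestep, and toy single-joint "simulator" are made-up values for illustration only.

```python
# Illustrative sketch of PD control with control decimation.
# SIM_DT and CONTROL_DECIMATION are assumed values, not the repo's settings.
SIM_DT = 0.001           # simulator timestep (seconds)
CONTROL_DECIMATION = 25  # simulator steps per policy step

def pd_torque(q, dq, q_target, kp, kd):
    """Simple PD law: map a position target to a joint torque."""
    return kp * (q_target - q) - kd * dq

def env_step(sim_state, action, kp=50.0, kd=1.0):
    """One policy step = CONTROL_DECIMATION simulator steps,
    holding the policy's position target fixed throughout."""
    q, dq = sim_state
    for _ in range(CONTROL_DECIMATION):
        tau = pd_torque(q, dq, q_target=action, kp=kp, kd=kd)
        # Toy single-joint, unit-inertia "simulator" (semi-implicit Euler)
        ddq = tau
        dq = dq + ddq * SIM_DT
        q = q + dq * SIM_DT
    return (q, dq)

# The joint starts moving toward the commanded target of 0.5 rad
q, dq = env_step((0.0, 0.0), action=0.5)
```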
Requirements:
- Python version: 3.7.11
- PyTorch
- pip install:
  - mujoco==2.1.5
  - mujoco-python-viewer
  - ray==1.9.2
  - transforms3d
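The pip dependencies above can be installed in one command:

```shell
pip install mujoco==2.1.5 mujoco-python-viewer ray==1.9.2 transforms3d
```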
Environment names supported:

| Task Description | Environment name |
| --- | --- |
| Basic Walking Task | `jvrc_walk` |
| Stepping Task (using footsteps) | `jvrc_step` |
To train:

```
$ python run_experiment.py train --logdir <path_to_exp_dir> --num_procs <num_of_cpu_procs> --env <name_of_environment>
```
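For example, a training run might look like the following; the log directory and worker count here are hypothetical values chosen for illustration:

```shell
python run_experiment.py train --logdir log/jvrc_walk --num_procs 4 --env jvrc_walk
```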
To run a trained policy, we need to write a script specific to each environment. For example, scripts/debug_stepper.py can be used with the `jvrc_step` environment:

```
$ PYTHONPATH=.:$PYTHONPATH python scripts/debug_stepper.py --path <path_to_exp_dir>
```
If you find this work useful in your own research, please cite:
```
@article{singh2022learning,
  title={Learning Bipedal Walking On Planned Footsteps For Humanoid Robots},
  author={Singh, Rohan Pratap and Benallegue, Mehdi and Morisawa, Mitsuharu and Cisneros, Rafael and Kanehiro, Fumio},
  journal={arXiv preprint arXiv:2207.12644},
  year={2022}
}
```