lhkwok9/suite2mimic

Installation

Requirements

  • Linux machine
  • Python 3.8.0
  • conda

Option 1

Follow the official robomimic installation instructions.


Option 2 (tested and adapted from Option 1)

Create and activate a conda environment

conda create -n robomimic python=3.8.0
conda activate robomimic

Install the PyTorch build matching your CUDA version

Check CUDA Version:

nvidia-smi

For example, for CUDA Version 11.4:

conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=11.3 -c pytorch
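To confirm that the install can see the GPU, a quick check you can run inside the activated environment (a minimal sketch, not part of the original instructions):

import torch

# Should print the installed version and True if CUDA is usable
print(torch.__version__)
print(torch.cuda.is_available())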

Install suite2mimic

cd ~
git clone https://github.com/lhkwok9/suite2mimic.git

Install robomimic from source

cd ~/suite2mimic
git clone https://github.com/ARISE-Initiative/robomimic.git
cd robomimic
pip install -e .

Install robosuite from source (simulator)

Note: the git checkout pins the robosuite version used to reproduce the robomimic experiments

cd ~/suite2mimic
git clone https://github.com/ARISE-Initiative/robosuite.git
cd robosuite
git checkout v1.4.1
pip install -r requirements.txt

Test installation (optional)

This assumes you followed Option 2. Run a quick 2-epoch debug training that records test rollout videos and saves the models:

cd ~/suite2mimic/robomimic
python examples/train_bc_rnn.py --debug

An EGL exception is normal and can be ignored.

Run a much more thorough test of several algorithms and scripts:

cd ~/suite2mimic/robomimic/tests
bash test.sh

Robomimic dataset

The following steps demonstrate the workflow for an hdf5 file that is already in robomimic format (data_collection/NutAssemblySquare_Aug8_hdf5/demo.hdf5)

  1. Extract observations from the MuJoCo states (an optional verification sketch follows the commands below):
# For low dimensional observations only, with done on task success
cd ~/suite2mimic/robomimic/robomimic/scripts
python dataset_states_to_obs.py --dataset ../../../data_collection/NutAssemblySquare_Aug8_hdf5/demo.hdf5 --output_name low_dim.hdf5 --done_mode 2

# For including image observations
cd ~/suite2mimic/robomimic/robomimic/scripts
python dataset_states_to_obs.py --dataset ../../../data_collection/NutAssemblySquare_Aug8_hdf5/demo.hdf5 --output_name image.hdf5 --done_mode 2 --camera_names agentview robot0_eye_in_hand --camera_height 84 --camera_width 84
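Optionally, you can sanity-check the extracted file before training. A minimal sketch, assuming the low_dim.hdf5 output above and that h5py is available (it is installed as a robomimic dependency):

import h5py

# Path produced by the first command above; adjust if you used a different output_name
path = "../../../data_collection/NutAssemblySquare_Aug8_hdf5/low_dim.hdf5"

with h5py.File(path, "r") as f:
    demos = list(f["data"].keys())                # e.g. ["demo_0", "demo_1", ...]
    print("number of demos:", len(demos))
    first = f["data"][demos[0]]
    print("observation keys:", list(first["obs"].keys()))
    print("actions shape:", first["actions"].shape)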

  2. Configure the training. All useful configs are in the config/custom folders. Check the following fields (a sketch for editing them programmatically follows this list):

  • algo_name (e.g. bc, bcq)
  • experiment.name (e.g. NutAssemblySquare_Aug8_image)
  • train.data (e.g. ~/suite2mimic/data_collection/NutAssemblySquare_Aug8_hdf5/image.hdf5)
  • train.output_dir (e.g. ~/suite2mimic/bc_rnn_trained_models)
  • observation.modalities.obs.low_dim (e.g. remove "robot0_eef_quat")
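As mentioned above, the same fields can be checked or edited programmatically. A minimal sketch, assuming the bc_rnn.json config used in step 3; the values assigned below are just the examples from the list and should be replaced with your own:

import json
import os

cfg_path = os.path.expanduser("~/suite2mimic/config/custom/square_image/bc_rnn.json")
with open(cfg_path) as f:
    cfg = json.load(f)

print(cfg["algo_name"])  # e.g. "bc"
cfg["experiment"]["name"] = "NutAssemblySquare_Aug8_image"
cfg["train"]["data"] = os.path.expanduser(
    "~/suite2mimic/data_collection/NutAssemblySquare_Aug8_hdf5/image.hdf5")
cfg["train"]["output_dir"] = os.path.expanduser("~/suite2mimic/bc_rnn_trained_models")

# e.g. drop "robot0_eef_quat" from the low-dim observation keys
low_dim = cfg["observation"]["modalities"]["obs"]["low_dim"]
cfg["observation"]["modalities"]["obs"]["low_dim"] = [k for k in low_dim if k != "robot0_eef_quat"]

with open(cfg_path, "w") as f:
    json.dump(cfg, f, indent=4)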

  3. Start the training:
cd ~/suite2mimic/robomimic/robomimic/scripts
python train.py --config ~/suite2mimic/config/custom/square_image/bc_rnn.json

Logs, models, and test rollout videos are saved under train.output_dir.

  4. View the results:
tensorboard --logdir ~/folder/that/contain/the/logs --bind_all

Open the generated link (close it before pressing Ctrl-C). The test success rate is included in each saved model's filename; see the sketch below for listing the checkpoints.
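To see which checkpoints were saved (and their success rates in the filenames), a minimal sketch that lists every .pth file under the output directory; the path is the train.output_dir example from step 2:

import glob
import os

output_dir = os.path.expanduser("~/suite2mimic/bc_rnn_trained_models")
# robomimic saves checkpoints as .pth files in subfolders of train.output_dir
for ckpt in sorted(glob.glob(os.path.join(output_dir, "**", "*.pth"), recursive=True)):
    print(ckpt)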

Robosuite datasets

The following steps convert robosuite datasets to the robomimic format

  1. Gather the demonstrations into the robosuite hdf5 format (edit folder_name, goal_folder_name, and the robosuite env_name in the script):
cd ~/suite2mimic/scripts
python gather_demonstrations_as_hdf5.py
  2. Convert the robosuite data to the robomimic format (per the robomimic instructions; edit the --dataset argument in the command):
cd ~/suite2mimic/robomimic/robomimic/scripts
python conversion/convert_robosuite.py --dataset ../../../data_collection/PickPlaceCan_Jul18_original_hdf5/demo.hdf5
  3. Extract observations from the MuJoCo states to get a robomimic-format dataset:
# For low dimensional observations only, with done on task success
cd ~/suite2mimic/robomimic/robomimic/scripts
python dataset_states_to_obs.py --dataset ../../../data_collection/PickPlaceCan_Jul18_original_hdf5/demo.hdf5 --output_name low_dim.hdf5 --done_mode 2

# For including image observations
cd ~/suite2mimic/robomimic/robomimic/scripts
python dataset_states_to_obs.py --dataset ../../../data_collection/PickPlaceCan_Jul18_original_hdf5/demo.hdf5 --output_name image.hdf5 --done_mode 2 --camera_names agentview robot0_eye_in_hand --camera_height 84 --camera_width 84

  4. Follow the Robomimic dataset steps above to configure and run training.

Possible error 1:

ValueError: No "geom" with name ... exists.

Reason for error 1: the model.xml stored in the raw robosuite data does not match the robosuite version being used

Solution to error 1: run the following command and press Ctrl-C after a few seconds

python ~/suite2mimic/robosuite/robosuite/scripts/collect_human_demonstrations.py --environment PickPlaceCan

Then diff the model.xml from the raw data against the model.xml generated in /tmp to find the missing geom.
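To locate the missing geom, a minimal sketch that compares the named geoms in the two model.xml files; both paths below are example placeholders and should be replaced with your own:

import xml.etree.ElementTree as ET

def geom_names(xml_path):
    # Collect the names of all named <geom> elements in a MuJoCo model.xml
    root = ET.parse(xml_path).getroot()
    return {g.get("name") for g in root.iter("geom") if g.get("name")}

# Example placeholders: model.xml from the raw demo data vs. the freshly generated one
raw = geom_names("ep_1689660868_9146771/model.xml")
generated = geom_names("/tmp/generated_demo/model.xml")  # replace with the actual path under /tmp

print("geoms missing from the raw demo:", sorted(generated - raw))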

Then add the missing geom line to the model.xml of the oldest episode in the demo folder (e.g. add <geom name="robot0_link7_collision" type="mesh" rgba="0 0.5 0 1" mesh="robot0_link7"/> after line 281 in ep_1689660868_9146771/model.xml).

Then copy that patched model.xml to the other episodes in the folder (edit data_folder in the script):

cd ~/suite2mimic/scripts
python add7collision.py

Then delete all previously generated hdf5 files and start again from step 1.

