Simulation verification and physical deployment of robot reinforcement learning algorithms, suitable for quadruped, wheeled, and humanoid robots. "sar" stands for "simulation and real".
Clone the code
git clone https://github.com/fan-ziqi/rl_sar.git
This project relies on ROS Noetic (Ubuntu 20.04)
After installing ROS, install the dependencies
sudo apt install ros-noetic-teleop-twist-keyboard ros-noetic-controller-interface ros-noetic-gazebo-ros-control ros-noetic-joint-state-controller ros-noetic-effort-controllers ros-noetic-joint-trajectory-controller
Download and deploy libtorch at any location
cd /path/to/your/torchlib
wget https://download.pytorch.org/libtorch/cpu/libtorch-cxx11-abi-shared-with-deps-2.0.1%2Bcpu.zip
unzip libtorch-cxx11-abi-shared-with-deps-2.0.1+cpu.zip -d ./
echo 'export Torch_DIR=/path/to/your/torchlib' >> ~/.bashrc
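To verify that libtorch is usable before building the project, you can compile a tiny standalone program (a minimal sketch, not part of rl_sar; it assumes your compiler and CMake are pointed at the libtorch you just unpacked):

#include <torch/torch.h>
#include <iostream>

int main() {
    // Create a random CPU tensor and print it; if this builds and runs,
    // libtorch and Torch_DIR are set up correctly.
    torch::Tensor t = torch::rand({2, 3});
    std::cout << t << std::endl;
    return 0;
}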
Install yaml-cpp
git clone https://github.com/jbeder/yaml-cpp.git
cd yaml-cpp && mkdir build && cd build
cmake -DYAML_BUILD_SHARED_LIBS=on .. && make
sudo make install
sudo ldconfig
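yaml-cpp is used to parse the project's config.yaml. As a quick installation check, here is a minimal standalone sketch (the file name and the model_name key are only examples):

#include <yaml-cpp/yaml.h>
#include <iostream>
#include <string>

int main() {
    // Load a YAML file and read a single string field.
    YAML::Node config = YAML::LoadFile("config.yaml");
    std::string model_name = config["model_name"].as<std::string>();
    std::cout << "model_name: " << model_name << std::endl;
    return 0;
}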
Install lcm
git clone https://github.com/lcm-proj/lcm.git
cd lcm && mkdir build && cd build
cmake .. && make
sudo make install
sudo ldconfig
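As a quick check that LCM installed correctly, a minimal standalone sketch (link it with -llcm; not part of rl_sar):

#include <lcm/lcm-cpp.hpp>
#include <iostream>

int main() {
    // Construct an LCM instance with the default UDP multicast provider
    // and check that it initialized.
    lcm::LCM lc;
    if (!lc.good()) {
        std::cerr << "LCM initialization failed" << std::endl;
        return 1;
    }
    std::cout << "LCM is ready" << std::endl;
    return 0;
}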
Customize the following two functions in your code to adapt to different models:
torch::Tensor forward() override;
torch::Tensor compute_observation() override;
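As a rough illustration only (the base class stand-in, the member names, and the observation layout below are hypothetical placeholders; the actual class and members are defined by rl_sar and by whatever your policy was trained on), an override might look like this:

#include <torch/script.h>
#include <vector>

// Minimal stand-in for the project's base class, only so the sketch is complete.
struct RL {
    virtual ~RL() = default;
    virtual torch::Tensor forward() = 0;
    virtual torch::Tensor compute_observation() = 0;
};

class MyRobotRL : public RL {
public:
    // Placeholder buffers; in practice these are filled by the robot interface.
    torch::Tensor base_ang_vel, projected_gravity, commands;
    torch::Tensor dof_pos, dof_vel, last_actions;
    torch::jit::script::Module model;  // the loaded TorchScript policy

    torch::Tensor compute_observation() override {
        // Concatenate the quantities your policy was trained on; the list and
        // its order must match your training setup.
        std::vector<torch::Tensor> obs_list = {
            this->base_ang_vel, this->projected_gravity, this->commands,
            this->dof_pos, this->dof_vel, this->last_actions,
        };
        return torch::cat(obs_list, /*dim=*/-1);
    }

    torch::Tensor forward() override {
        // Run the policy on the current observation and return the actions.
        torch::Tensor obs = this->compute_observation();
        std::vector<torch::jit::IValue> inputs;
        inputs.push_back(obs);
        return this->model.forward(inputs).toTensor();
    }
};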
Then compile in the root directory
cd ..
catkin build
Before running, copy the trained pt model file to rl_sar/src/rl_sar/models/YOUR_ROBOT_NAME, and configure the parameters in config.yaml.
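For reference, the pt file is a TorchScript model, which libtorch loads roughly as follows (a minimal sketch; the path and file name below are just examples, and rl_sar is expected to perform the loading itself based on config.yaml):

#include <torch/script.h>
#include <iostream>

int main() {
    torch::jit::script::Module policy;
    try {
        // Load the exported TorchScript policy from the models directory.
        policy = torch::jit::load("src/rl_sar/models/YOUR_ROBOT_NAME/policy.pt");
    } catch (const c10::Error& e) {
        std::cerr << "Failed to load the model: " << e.what() << std::endl;
        return 1;
    }
    std::cout << "Model loaded" << std::endl;
    return 0;
}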
Open a new terminal and launch the Gazebo simulation environment
source devel/setup.bash
roslaunch rl_sar gazebo_<ROBOT>.launch
Where <ROBOT> can be a1 or gr1t1.
Press 0 on the keyboard to switch the robot to the default standing position, press P to switch to RL control mode, and press 1 in any state to switch to the initial lying position. W/S controls the x-axis, A/D controls yaw, and J/L controls the y-axis.
Press R to reset the Gazebo environment.
The Unitree A1 can be connected using either a wireless or a wired method:
- Wireless: Connect to the WiFi network broadcast by the robot, whose name starts with "Unitree" (note: a wireless connection may suffer packet loss, disconnection, or even loss of control; please ensure safety)
- Wired: Use an Ethernet cable to connect any port on the computer to the robot, set the computer's IP address to 192.168.123.162, and set the netmask to 255.255.255.0
Open a new terminal and start the control program
source devel/setup.bash
rosrun rl_sar rl_real_a1
Press the R2 button on the controller to switch the robot to the default standing position, press R1 to switch to RL control mode, and press L2 in any state to switch to the initial lying position. Pushing the left stick up/down controls the x-axis, pushing the left stick left/right controls yaw, and pushing the right stick left/right controls the y-axis.
Or press 0 on the keyboard to switch the robot to the default standing position, press P to switch to RL control mode, and press 1 in any state to switch to the initial lying position. W/S controls the x-axis, A/D controls yaw, and J/L controls the y-axis.
In the following, let ROBOT represent the name of your robot.
- Create a model package named ROBOT_description in the robots folder. Put the URDF model in the urdf directory of that package and name it ROBOT.urdf, and create the joint configuration under a namespace named ROBOT_gazebo in the config directory of that package.
- Place the model file in models/ROBOT.
- Add a new field in rl_sar/config.yaml named ROBOT and adjust the parameters, such as changing the model_name to the model file name from the previous step.
- Add a new launch file in the rl_sar/launch folder. Refer to other launch files for guidance on modification.
- Change ROBOT_NAME to ROBOT in rl_xxx.cpp (see the sketch after this list).
- Compile and run.
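As an illustration only of the robot-name change (whether ROBOT_NAME is a macro, a constant, or a plain string in rl_xxx.cpp depends on the actual source; this sketch just shows the intent):

// In rl_xxx.cpp, replace the example robot name with yours, e.g.
// before:
//     #define ROBOT_NAME "a1"
// after:
#define ROBOT_NAME "ROBOT"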
Please cite the following if you use this code or parts of it:
@software{fan-ziqi2024rl_sar,
author = {fan-ziqi},
title = {{rl_sar: Simulation Verification and Physical Deployment of Robot Reinforcement Learning Algorithm.}},
url = {https://github.com/fan-ziqi/rl_sar},
year = {2024}
}