Simulating Robot Models for ROS
http://moorerobots.com/blog/post/1
Software simulation in ROS can help you learn how to make robots "think". After all, it is a big leap to go from line-following robots to self-driving cars. Arguably, one of the biggest challenges in building useful robots is the software. Through ROS, you will come to understand the fundamental challenges of using sensors to determine where the robot is (sensing), developing plans to figure out what the robot should do (planning), and computing commands for the robot to execute (acting).
In this tutorial, we will develop a simulated robot from scratch (well...relative to the turtlebot simulation). There are several reasons why you may wish to go this route.
Simulate your own custom built robot by editing the URDF model (so you aren't stuck with the turtlebot)
Enable development of algorithms that can be plugged into both software and hardware robots (because excessive development on hardware is susceptible to wear and tear)
Exposure to ROS robot architecture (so you can avoid repeating design mistakes others already learned from)
Overview: Simulated Model
Goal: Create a URDF model in the Gazebo simulator that is accessible by ROS.
References: There are a number of helpful tutorials. None of them worked flawlessly for me out of the box. I suspect that they are not up-to-date but it is also possible that I failed to properly follow the instructions:
RRBot
R2D2
Differential Drive (URDF format)
Differential Drive (SDF format)
Result: A simple differential drive model built from scratch that is simulated in Gazebo and visualizable in rviz
Repository Code
git clone -b base https://github.com/richardw05/mybot_ws.git
Video Walkthrough
Compatibility
This tutorial will no doubt become dated over time. This is the hardware and software used for this writeup:
Software
ROS: Kinetic
OS: Ubuntu 16.04
Gazebo: 7.0.0
Hardware
RAM: 8GB DDR4
SSD: Samsung EVO
CPU: Intel i5
GPU: GTX 750ti
Directory Structure
Most of the tutorials converged on the following workspace structure:
mybot_ws
  src
    mybot_control
      config
        mybot_control.yaml
      launch
        mybot_control.launch
      CMakeLists.txt
      package.xml
    mybot_description
      launch
        mybot_rviz.launch
      urdf
        macros.xacro
        materials.xacro
        mybot.gazebo
        mybot.xacro
      CMakeLists.txt
      package.xml
    mybot_gazebo
      launch
        mybot_world.launch
      worlds
        mybot.world
      CMakeLists.txt
      package.xml
Brief explanation:
mybot_description specifies the entire robot structure as links and joints and can launch the model in rviz.
mybot_gazebo launches the model in the gazebo environment and contains different simulation worlds.
mybot_control (not used) enables control over the joints of our model so that it can move around.
To learn more about creating a workspace/package, check out this tutorial on YouTube.
To create the basic skeleton of the directory structure, we begin with a workspace {WORKSPACE}_ws, where we set {WORKSPACE}=mybot.
cd ~
mkdir mybot_ws
cd mybot_ws
catkin init
mkdir src
catkin build
echo "source ~/mybot_ws/devel/setup.bash" >> ~/.bashrc # Adds workspace to search path
In the src folder are three main packages: {MODEL}_control, {MODEL}_description, and {MODEL}_gazebo. We set {MODEL} = mybot. To create a package, run catkin_create_pkg {PKG_NAME} {PKG_DEPENDENCIES}. In this case we have no dependencies, so {PKG_DEPENDENCIES} is left empty.
cd ~/mybot_ws/src/
catkin_create_pkg mybot_control
catkin_create_pkg mybot_description
catkin_create_pkg mybot_gazebo
Create the Robot Model (URDF)
In ~/mybot_ws/src/mybot_description/urdf, there are four files:
mybot.xacro: primary file that loads the other three files and contains only URDF items like joints and links
mybot.gazebo: contains Gazebo-specific settings that are wrapped within <gazebo> tags
materials.xacro: maps strings to colors
macros.xacro: macros that help simplify the URDF
Run the Models
Load the Gazebo simulator and rviz in separate terminals
roslaunch mybot_gazebo mybot_world.launch
roslaunch mybot_description mybot_rviz.launch
Send a base controller command and ensure that the robot moves in both Gazebo and rviz
rostopic pub /cmd_vel geometry_msgs/Twist "linear:
  x: 0.2
  y: 0.0
  z: 0.0
angular:
  x: 0.0
  y: 0.0
  z: 0.1"
Misc
If the Gazebo plugin cannot be found, install the ROS Kinetic control and Gazebo packages
sudo apt-get install ros-kinetic-ros-control
sudo apt-get install ros-kinetic-ros-controllers
sudo apt-get install ros-kinetic-gazebo-msgs
sudo apt-get install ros-kinetic-gazebo-ros
If the robot is not moving in rviz, check the tf transform tree
rosrun tf view_frames
evince frames.pdf
If you want to run your own controller in mybot_control, check out this tutorial
A robot that effectively interacts with the surrounding environment needs three basic building blocks:
sensors to reveal the surrounding environment
planning to figure out what to do
actuators that make the robot move
We will show how to add sensors to our basic differential drive model. Once equipped with sensors, our robot will be sufficiently equipped to use the ROS navigation stack, which will enable it to move on its own!
Overview: Simulating Sensors
Goal: Add exteroceptive sensors to our simulated differential drive robot
Purpose: Enable algorithms that depend on perceiving the surrounding environment (aka autonomous navigation!!).
References:
Gazebo Plugin Library
Gazebo Camera Tutorial
Repository Code
git clone -b base_sensors https://github.com/richardw05/mybot_ws.git
Video Walkthrough
Simulating a Camera
We only need to add two blocks of code.
mybot_description/urdf/mybot.xacro - Add the camera model
mybot_description/urdf/mybot.gazebo - Add the camera plugin
Launch the model in Gazebo.
roslaunch mybot_gazebo mybot_world.launch
Run a node to view the camera data.
rosrun image_view image_view image:=/mybot/camera1/image_raw
Insert an object in the Gazebo simulator to ensure that it shows up in the camera view.
Launch rviz.
roslaunch mybot_description mybot_rviz.launch
Ensure that camera data can be read in rviz with the topic: /mybot/camera1/image_raw
Simulating a Laser
Once again, only two-ish blocks of code:
mybot_description/urdf/mybot.xacro - Add the laser model
mybot_description/urdf/mybot.gazebo - Add the laser plugin
mybot_description/meshes/hokuyo.dae (optional) - Fancy hokuyo laser model
Launch in Gazebo and rviz to check
Note: Many report that the laser scan does not detect anything. In mybot.gazebo, consider switching libgazebo_ros_gpu_laser.so to libgazebo_ros_laser.so
At this point, we have created a simulated differential drive model that satisfies the necessary pre-requisites for autonomous navigation: the transform tree, odometry, laser scanner, and base controller. We are now ready to make our simulated robot move around on its own by using ROS's navigation stack. In this tutorial, we are going to:
Plug in the ROS navigation stack
Create a map
Navigate with the map
As you can see, these steps are all well documented so we are just putting it all together for our robot.
Overview: Using the ROS Navigation Stack
Goal: Integrate the ROS navigation stack for our simulated robot
Purpose: Enable autonomous navigation so we do not need to remotely control our robot
Resources
Launch file examples using the ROS navigation stack
Tips for organizing large projects
Launch file examples for numerous robot projects
roscd turtlebot_navigation/launch/includes for more launch file examples
Repository Code
git clone -b navigation https://github.com/richardw05/mybot_ws.git
Video Walkthrough
Mapping and Navigation: Turtlebot (Optional)
Learning to make the Turtlebot move autonomously helps to establish a baseline on what kind of behavior to expect. There are basically three steps: creating the map, saving the map, and loading the map.
Creating the Map
Run the following commands below. Use the teleop to move the robot around to create an accurate and thorough map.
In Terminal 1, launch the Gazebo world
roslaunch turtlebot_gazebo turtlebot_world.launch
In Terminal 2, start map building
roslaunch turtlebot_gazebo gmapping_demo.launch
In Terminal 3, launch rviz and set the following parameters: Local map / Costmap / Topic to /map and Global map / Costmap / Topic to /map
roslaunch turtlebot_rviz_launchers view_navigation.launch
In Terminal 4, start teleop
roslaunch turtlebot_teleop keyboard_teleop.launch
Saving the Map
In Terminal 5, save the map to some file path
rosrun map_server map_saver -f ~/mybot_ws/test_map
Loading the Map
Close all previous terminals and run the following commands below. Once loaded, use rviz to set navigation waypoints and the robot should move autonomously.
In Terminal 1, launch the Gazebo world
roslaunch turtlebot_gazebo turtlebot_world.launch
In Terminal 2, load the saved map and start localization (AMCL)
roslaunch turtlebot_gazebo amcl_demo.launch map_file:=~/mybot_ws/test_map.yaml
In Terminal 3, launch rviz
roslaunch turtlebot_rviz_launchers view_navigation.launch
Mapping and Navigation: Our Robot
We follow the same steps for our own differential drive robot.
Note, we modified our model so it navigates a bit better. For example, we added an extra caster wheel to reduce the laser bouncing during abrupt stops. We understand that our robot model requires additional tweaking, as it still has some odd navigational kinks. Any suggestions would be appreciated. Nevertheless, our model is sufficient for playing with the ROS navigation stack.
Creating the Map
Run the following commands below. Use the teleop to move the robot around to create an accurate and thorough map.
In Terminal 1, launch the Gazebo world
roslaunch mybot_gazebo mybot_world.launch
In Terminal 2, start map building
roslaunch mybot_navigation gmapping_demo.launch
In Terminal 3, launch rviz and set the following parameters:
roslaunch mybot_description mybot_rviz_gmapping.launch
In Terminal 4, start teleop
roslaunch mybot_navigation mybot_teleop.launch
Saving the Map
In Terminal 5, save the map to some file path
rosrun map_server map_saver -f ~/mybot_ws/src/mybot_navigation/maps/test_map
Loading the Map
Close all previous terminals and run the following commands below. Once loaded, use rviz to set navigation waypoints and the robot should move autonomously.
In Terminal 1, launch the Gazebo world
roslaunch mybot_gazebo mybot_world.launch
In Terminal 2, load the saved map and start localization (AMCL)
roslaunch mybot_navigation amcl_demo.launch
In Terminal 3, launch rviz
roslaunch mybot_description mybot_rviz_amcl.launch
Misc
(Optional) Install Turtlebot for a good reference
sudo apt-get install ros-kinetic-turtlebot ros-kinetic-turtlebot-apps ros-kinetic-turtlebot-interactions ros-kinetic-turtlebot-simulator ros-kinetic-kobuki-ftdi ros-kinetic-ar-track-alvar-msgs
If ROS cannot find turtlebot_gazebo
rosdep update
Trouble creating a proper map?
Consider adjusting the resolution of your map: lower the delta value (cell size in meters) from 0.05 to 0.01 in:
/opt/ros/kinetic/share/turtlebot_navigation/launch/includes/gmapping.launch.xml
Spinning a lot when moving autonomously?
Set these parameters to dampen commands sent to the base controller. Add
<param name="move_base/DWAPlannerROS/max_rot_vel" value="1.0"/>
<param name="move_base/DWAPlannerROS/acc_lim_theta" value="10.0"/>
in
/opt/ros/kinetic/share/turtlebot_gazebo/launch/amcl_demo.launch
AMCL not loading the map properly?
When you save the map file, the YAML file contains a path to the PGM file; therefore, you cannot move the map files around. If you do, redirect the path in the YAML file.
We take our first steps in running ROS on an actual robot. We will transform linear and rotational target velocity commands into motor commands that will be executed by our differential drive robot's left and right wheels. We use an open loop control system that does not use sensor data (like encoders) as feedback.
Overview: Open-loop Control of a Robot
Goal: Open-loop control of a robot over WiFi via velocity commands
Purpose: Learning open-loop control of a differential drive robot, using ROS's message passing interface over WiFi, and motor control of a physical robot.
Overview:
We have two machines: a host machine running Ubuntu and a robot controlled via a Raspberry Pi. The devices communicate via ROS messages by networking them together to share the same ROS master. ROS nodes on the Raspberry Pi control the motors (left and right) and publish the robot's encoder readings. ROS nodes on the host machine compute velocity targets for the motors and also update the robot's odometry.
Outline:
Setup software for the robot's Raspberry Pi
Setup networking between the host machine and Raspberry Pi
Create ROS nodes for sending motor commands to the robot
Test that the robot correctly follows control commands
Repository Code
git clone -b openloop https://github.com/richardw05/gopigo_ws.git
Video Walkthrough
Compatibility
Software
ROS: Kinetic
OS: Ubuntu 16.04
Hardware
Differential Drive Robot: GoPiGo
WiFi USB Stick: Raspberry Pi USB WiFi Dongle
Other differential drive robots will work but may require modifying code to control the motors.
If ordering a new USB WiFi dongle, consider a device that supports 802.11ac if you intend to stream camera data in the future.
(GoPiGo) Software Setup
The robot's Raspberry Pi will interface with the GoPiGo board that controls and communicates with various peripherals (like motors, IR sensors, etc.). We will install ROS and the GoPiGo library on the Raspberry Pi.
Install ROS on the Raspberry Pi
The Raspberry Pi uses a custom ROS installation procedure. Please follow the tutorial here: wiki.ros.org/ROSberryPi/Installing ROS Kinetic on the Raspberry Pi.
Install GoPiGo on the Raspberry Pi
Instructions for installing the GoPiGo library can be found here: http://www.dexterindustries.com/GoPiGo/programming/python-programming-for-the-raspberry-pi-gopigo/installing-gopigo-python-library
(Optional) Networking with ROS over WiFi
WiFi networking will allow the host machine and robot to communicate via ROS messages. We first set up the WiFi network and then configure the ROS network.
WiFi Networking Setup.
If you can already SSH between machines, you can skip this. Make sure that both devices are connected to the same subnet (or simply connected to the same router). We will want to SSH from the host machine to the Raspberry Pi.
If you do not know the device's IP address, use nmap to search for devices on the subnet.
nmap -sn 192.168.1.0/24
You should now be able to SSH into the Raspberry Pi.
ssh pi@{IPADDRESS}
If you have trouble SSH'ing into the Raspberry Pi, you may need to add your host's SSH public key into the Raspberry Pi's ~/.ssh/authorized_keys.
As an optional step: IP addresses are difficult to remember and change periodically, so consider using the hostname instead
hostname -I
ROS Over WiFi
We set up both devices to communicate over WiFi with ROS by making them share the same ROS master. We choose to run the ROS master on the host machine. On the host machine, start the ROS master:
roscore
On the Raspberry Pi, configure it to use the host machine's ROS master.
For a temporary solution,
export ROS_MASTER_URI=http://IP_ADDRESS:11311
For a permanent solution, add the line above to your .bashrc file.
Test that this works. On the Raspberry Pi:
rostopic pub /test std_msgs/Float32 5.0
On the host machine:
rostopic echo /test
Resources:
wiki.ros.org/ROS/Tutorials/MultipleMachines
nootrix.com/software/ros-networking
wiki.ros.org/turtlebot/Tutorials/indigo/NetworkConfiguration
Architecture
In this project, our host machine will read in high-level target velocity commands for the robot's body and translate them into individual left and right wheel commands for the robot to execute. As illustrated below, we have two key ROS nodes: diffdrive_controller, which runs on the host machine, and gopigo_controller, which runs on the robot. This separation will allow us to substitute other robot hardware in the future by re-writing only the GoPiGo-specific ROS nodes.
Given some target velocity for the robot body, we transform it into individual left and right wheel motor commands.
Borrowing from the architecture of the ROS navigation stack, we are basically implementing the base controller. Shown below is a more fine-grained breakdown of the core messages transmitted between ROS nodes.
Flow of ROS messages across ROS nodes.
Operationally, we take target linear and rotational velocities and use inverse kinematics to generate tangential wheel velocities for the left and right wheels. The robot transforms these tangential velocity targets into angular velocity targets. These are then transformed into motor commands and sent to the motors. Details are further discussed in the implementation below.
Given target linear and rotational velocities for the robot body, we compute corresponding angular velocities for the left and right wheels and then compute the corresponding motor commands.
Implementation
We implement this project from the ground up. We start by initializing the workspace. We then create the GoPiGo controller that interfaces with the motors. On top of this, we build the differential drive controller that computes velocity targets for each wheel.
Initialize a New Workspace
We create a workspace called gopigo_ws. Recall the catkin commands:
mkdir gopigo_ws
cd gopigo_ws
catkin init
mkdir src
catkin build
GoPiGo Controller
The GoPiGo controller receives target velocities and transforms them into motor commands for the robot to execute. For each wheel, we take the target tangential velocity (how fast to move forwards or backwards), transform it into an angular velocity (how fast to spin the wheel), and then transform it into motor commands (direction and speed of the motor).
Define the following:
v: tangential velocity
w: angular velocity
R: wheel radius
To relate tangential velocity with angular velocity, we use v = wR.
For the motors, we can specify motor direction (0 backwards; 1 forwards) and motor speed (0 slowest; 255 fastest). Given that motor speeds are unitless, we will create a mapping from target angular velocities to motor commands.
A few factors to consider: there is a minimum motor command speed below which the wheels simply do not move, there is a maximum motor command speed above which we cannot increase further, and we must discretely specify the direction in which the motor spins.
Mapping angular velocity wheel targets (rad/s) to motor commands (0-255).
As depicted above, we create a simple piecewise linear mapping from angular velocities to motor commands: we linearly interpolate between the minimum and maximum motor speeds (a sketch of such a mapping follows below). Further calibration could help create a more fine-grained mapping in the future.
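To make this concrete, here is a minimal Python sketch of such a mapping. The calibration constants are purely illustrative and would need to be measured for your own motors; the repository keeps its own values, which may differ.
# Minimal sketch: map a target angular wheel velocity (rad/s) to a
# (direction, speed) motor command. Calibration constants are illustrative.
MOTOR_MIN_SPEED = 100        # smallest command that actually moves the wheel
MOTOR_MAX_SPEED = 255        # hardware maximum
W_MIN = 1.0                  # rad/s produced at MOTOR_MIN_SPEED (measured)
W_MAX = 8.0                  # rad/s produced at MOTOR_MAX_SPEED (measured)

def angular_vel_to_motor_cmd(w):
    """Piecewise linear map from angular velocity to a motor command."""
    direction = 1 if w >= 0 else 0           # 1 = forwards, 0 = backwards
    w_mag = abs(w)
    if w_mag == 0:
        return direction, 0                  # stop
    if w_mag <= W_MIN:
        return direction, MOTOR_MIN_SPEED    # clamp to the minimum that moves
    if w_mag >= W_MAX:
        return direction, MOTOR_MAX_SPEED    # saturate at the maximum
    # Linear interpolation between the calibrated endpoints
    fraction = (w_mag - W_MIN) / (W_MAX - W_MIN)
    speed = MOTOR_MIN_SPEED + fraction * (MOTOR_MAX_SPEED - MOTOR_MIN_SPEED)
    return direction, int(round(speed))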
Resources:
To understand the equations: http://www8.cs.umu.se/kurser/5DV122/HT13/material/Hellstrom-ForwardKinematics.pdf
Differential Drive Controller
The differential drive controller takes /cmd_vel messages and computes tangential velocities for left and right wheels. It is not entirely obvious how we go from linear and rotational velocities (v, w) for the robot's body to individual wheel velocities. Fortunately, this is a well-studied problem known as inverse kinematics.
Define the following inputs:
v: target linear velocity of the robot body
w: target rotational velocity of the robot body
L: distance between robot wheels
Define the following outputs:
v_r: tangential velocity of right wheel
v_l: tangential velocity of left wheel
We know the following equations hold (please see the resources below for further explanations; they are reconstructed after this paragraph). The basic intuition is that the first equation reflects the situation when the robot is moving straight forward, while the second reflects the robot rotating in place. Interestingly, we can solve these equations for the individual wheel velocities.
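Reconstructed here in standard differential drive notation (not copied verbatim from the original post):

v = \frac{v_r + v_l}{2}, \qquad w = \frac{v_r - v_l}{L}

v_r = v + \frac{w L}{2}, \qquad v_l = v - \frac{w L}{2}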
I encourage you to try solving for the wheel velocities yourself. The results make sense if you consider the edge cases. If we just want the robot to move straight forward, then w = 0 and v is non-zero, and you unsurprisingly get identical tangential wheel velocities (v_r = v_l = v), causing the robot to move straight. If we only set an angular rotation, then v = 0 and w is non-zero, and you get equal and opposite tangential wheel velocities (v_r = -v_l = wL/2), causing the robot to rotate in place.
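To show where these equations live in the architecture, here is a minimal sketch of the inverse kinematics as a ROS node. The /cmd_vel and wheel-target topic names match those listed later in this tutorial; the node name and the wheel-separation parameter are illustrative rather than the repository's actual code.
#!/usr/bin/env python
# Minimal sketch of the differential drive inverse kinematics as a ROS node.
# Topic names match those listed later in this tutorial; other names are illustrative.
import rospy
from geometry_msgs.msg import Twist
from std_msgs.msg import Float32

class DiffDriveSketch(object):
    def __init__(self):
        rospy.init_node('diffdrive_controller_sketch')
        self.L = rospy.get_param('~wheel_separation', 0.12)  # meters (illustrative default)
        self.pub_l = rospy.Publisher('lwheel_tangent_vel_target', Float32, queue_size=10)
        self.pub_r = rospy.Publisher('rwheel_tangent_vel_target', Float32, queue_size=10)
        rospy.Subscriber('cmd_vel', Twist, self.cmd_vel_callback)

    def cmd_vel_callback(self, msg):
        v = msg.linear.x       # target linear velocity of the body
        w = msg.angular.z      # target rotational velocity of the body
        # Inverse kinematics: split the body velocity into wheel tangential velocities
        v_r = v + w * self.L / 2.0
        v_l = v - w * self.L / 2.0
        self.pub_r.publish(Float32(data=v_r))
        self.pub_l.publish(Float32(data=v_l))

if __name__ == '__main__':
    DiffDriveSketch()
    rospy.spin()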
Resources:
Forward Kinematics Presentation: MotionAndSensing.pptx
Forward Kinematics Math: http://www8.cs.umu.se/kurser/5DV122/HT13/material/Hellstrom-ForwardKinematics.pdf
Full Robot Implementation Tutorial: http://apprize.info/programming/ros/10.html
Base Controller Example: http://wiki.ros.org/pr2_controllers/Tutorials/Using the robot base controllers to drive the robot
Turtlebot Control Code Examples: https://github.com/markwsilliman/turtlebot
Differential Drive Kinematics: https://chess.eecs.berkeley.edu/eecs149/documentation/differentialDrive.pdf
Differential Drive with Caster: http://robotics.stackexchange.com/questions/8911/how-to-find-kinematics-of-differential-drive-caster-robot
Excellent Chapter on Kinematics: http://www.cs.cmu.edu/~rasc/Download/AMRobots3.pdf
With gopigo_controller and diffdrive_controller, we now have open-loop control of the robot. By publishing /cmd_vel messages, we will be able to control the robot's movements.
Launcher
We introduce two ROS packages that host several convenient launch files: nav_behaviors and gopigo_description. To launch our open-loop control of an actual robot, run the following on the host machine:
roslaunch nav_behaviors nav_behaviors.launch
On the Raspberry Pi:
roslaunch gopigo_description gopigo_interface.launch
Now that our ROS nodes are listening for velocity commands, run one of these two scripts in ~/gopigo_ws/:
./nav_forward.py
./nav_rotate.py
The result should be either the robot moves forward or rotates in place, respectively. If this happens, you have successfully created an open loop controller for your robot!
If you do not have access to a GoPiGo robot, you can still test the code:
Different Hardware
If you have a different type of robot, then you will need to edit gopigo_controller/src/gopigo_controller.py. Search for the gopigo_on flag and edit the code to transmit commands for your motors. You may also need to edit the parameter values in gopigo_description/launch/gopigo_interface.launch.
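As a rough illustration only (the class and function names below are hypothetical, not part of the repository), the hardware-specific motor write could be isolated behind a small adapter so that only this piece changes per robot:
# Hypothetical adapter sketch: isolate the hardware-specific motor write so only
# this class needs to change for a different robot. Names are illustrative.
class MotorDriver(object):
    def __init__(self, hardware_on=True):
        self.hardware_on = hardware_on   # mirrors the role of the gopigo_on flag

    def set_motor(self, side, direction, speed):
        """side: 'left' or 'right'; direction: 1 forwards, 0 backwards; speed: 0-255."""
        if not self.hardware_on:
            return  # simulation mode: skip the hardware call entirely
        # Replace this body with your own robot's motor API call.
        raise NotImplementedError("wire this up to your motor controller")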
No Hardware
You can still test the controller in simulation. Go to gopigo_description/launch/gopigo_interface.launch and set the gopigo_on flag to False. This will eliminate any commands sent to the motors. You can then check that the following ROS messages make sense:
rostopic echo /cmd_vel
rostopic echo /lwheel_tangent_vel_target
rostopic echo /rwheel_tangent_vel_target
rostopic echo /lwheel_angular_vel_control
rostopic echo /rwheel_angular_vel_control
rostopic echo /lwheel_angular_vel_motor
rostopic echo /rwheel_angular_vel_motor
If you execute nav_forward.py, then expect to see matching values for corresponding wheels.
If you execute nav_rotate.py, then expect to see opposite values for corresponding wheels.
In this section, we focus on using wheel encoder sensor data for two purposes:
Closed Loop PID Control
While open-loop control is simple to implement, it does not always work reliably on an actual robot because a combination of environmental factors (e.g., friction from the floor) and intrinsic factors (e.g., differences between the left and right motors) play a role.
Closed-loop control uses sensor data (e.g., wheel encoders) to adjust motor control through feedback (i.e., the error, defined as the difference between the target and the measurement). PID control is a popular closed-loop controller that is simple to implement.
Odometry w/Dead Reckoning
Odometry is valuable because it 1) is a necessary input for using the ROS navigation stack in the future and 2) allows us to see the robot's trajectory.
We combine wheel encoder readings from both wheels (dead reckoning) to estimate the robot's position over time (odometry).
Overview: Closed-loop Control of a Robot
Goal: Use of wheel encoder sensor data for closed-loop PID control and estimating robot odometry
Purpose: Learning to incorporate feedback from sensor data to improve robot control and state estimation
Outline:
Read robot wheel encoders
PID control of both wheels
Dead reckoning to estimate robot odometry
Repository Code
git clone -b closedloop https://github.com/richardw05/gopigo_ws.git
Video Walkthrough
Architecture
Once again, we have a host machine and a robot connected through WiFi. As illustrated below, we add two ROS nodes. gopigo_state_updater is run on the robot and transforms wheel encoder data into angular wheel velocity estimates. We extend gopigo_controller to use these encoder readings for PID control. diffdrive_odom is run on the host machine and uses encoder estimates to perform dead reckoning.
Wheel encoder readings are transformed into angular wheel velocities, which are used for both PID control and dead reckoning.
Shown below are the ROS messages transmitted between these ROS nodes.
Flow of ROS messages from wheel encoders to odometry.
Operationally, we first estimate angular wheel velocities from the wheel encoder readings. These feed the PID controller, which reduces the error between the target and measured velocities. Next, we perform dead reckoning by using forward kinematics to recover the robot's linear and rotational velocities, which are used to estimate the robot's odometry.
Given wheel encoder readings, we can compute angular velocities for PID control and dead reckoning.
Implementation
We will turn our open-loop controller into a closed-loop controller by creating a new gopigo_state_updater node and editing the gopigo_controller node. We then perform dead reckoning in a new diffdrive_odom node.
GoPiGo State Updater
Wheel encoders measure how much each wheel actually turned, letting us estimate its actual angular velocity. Later on, our PID controller will correct for mismatches between the target and actual wheel velocities. First, the gopigo_state_updater node must transform wheel encoder readings into angular velocities.
Interestingly, the GoPiGo's wheel encoders return the number of centimeters traversed; in contrast, most encoders return the number of ticks traversed. Either measurement type can be transformed into angular velocities. For our wheel encoder data, we can compute the radians traversed as a proportion of the encoder distance traversed. We then compute the angular velocity by dividing by the time taken.
Define the following:
d: change in distance from encoders
R: robot wheel radius
Δt: time taken to move the distance read from the encoders
w: angular velocity of wheel
To compute the angular velocity, we first use the encoder distance to compute the proportion of the wheel's circumference traversed. We multiply this by the total radians in one rotation (2π). Finally, we divide by the time taken to move that distance.
This equation simplifies to the wheel's angular velocity, shown below. The primary purpose of the gopigo_state_updater node is to publish these angular wheel velocities.
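Written out (a reconstruction of the equation the text describes, not copied from the original post), with d the encoder distance, R the wheel radius, and Δt the elapsed time:

w = \frac{d}{2 \pi R} \cdot \frac{2 \pi}{\Delta t} = \frac{d}{R \, \Delta t}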
GoPiGo Controller w/PID Control
Once we have a measured angular wheel velocity, we can apply a PID controller to correct for the error (target angular wheel velocity - measured angular wheel velocity). Recall that the target angular wheel velocity for each wheel is given by the diffdrive_controller node. Intuitively, the idea is that if our motors are not moving at our target velocity (too slow or too fast), then we should adjust our motor control commands (faster or slower) to correct for these differences. For example, if floor friction causes the wheels to spin slower than expected, then we need to give a higher motor control command to compensate.
PID controllers are well studied, so let us just provide a bit of intuition. It is often easier to envision a one-dimensional example like a car's cruise control. The state of the system is the car's current velocity. The cruise control sets a target velocity. The control command adjusts the gas pedal. If moving too slowly, push harder on the gas pedal, and vice versa. The measurement is the speedometer reading that tells us the vehicle's speed. The error is the difference between the measurement and the target. Driving the error to 0 means we achieve our target velocity. In most physical systems, we do not want to instantly jump from 0 to 100 mph, so we need to compute control commands that gradually adjust the car's speed. A PID controller is basically a sum of three separate control adjustment techniques that cover different situations:
(P)roportional - control command is a proportion of the current error (e.g. push enough on gas pedal to slowly ramp up to a target velocity)
(I)ntegral - control command is a proportion of past errors (e.g. more gas if we have had large error for a while)
(D)erivative - control command is a proportion of the change in error (e.g. react quickly to changes in error)
Notice that the key difference is the time frame considered (past, current, and future). It is actually not necessary to use all three of the control adjustments so feel free to adjust as needed. Please see these resources for more details.
Resources:
https://www.coursera.org/learn/mobile-robot
https://en.wikipedia.org/wiki/PID_controller
In addition to understanding PID control conceptually, we also need to implement it. Whereas the equations are given in a continuous representation, our computing platforms must contend with a discrete representation. For proportional control, we only consider current measurements so it is simply a proportion of the current error (target minus measured). For integral control, we can compute a rolling sum of errors. For differential control, we just compute the difference between the current and previous error.
Hopefully, the pid_control function shows how simple it is to implement a PID controller. Of course, the key will be experimenting to determine the proper gains (Kp, Ki, Kd) to use.
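For reference, here is a minimal sketch of what such a pid_control function might look like; the class structure and gain values are illustrative and not taken from the repository.
# Minimal discrete PID sketch. Gains and state handling are illustrative;
# tune Kp, Ki, Kd experimentally for your robot.
class PID(object):
    def __init__(self, Kp, Ki, Kd):
        self.Kp, self.Ki, self.Kd = Kp, Ki, Kd
        self.error_sum = 0.0      # rolling sum of errors (integral term)
        self.prev_error = 0.0     # last error (for the derivative term)

    def pid_control(self, target, measured, dt):
        error = target - measured
        self.error_sum += error * dt
        d_error = (error - self.prev_error) / dt if dt > 0 else 0.0
        self.prev_error = error
        # The control adjustment is the sum of the three terms
        return self.Kp * error + self.Ki * self.error_sum + self.Kd * d_error

# Example: correct an angular wheel velocity command
pid = PID(Kp=0.8, Ki=0.1, Kd=0.05)
control = pid.pid_control(target=4.0, measured=3.2, dt=0.1)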
Differential Drive Odometry
Through dead reckoning, we can combine wheel encoder sensor data from both wheels not only to recover the robot's linear and rotational velocity (in a perfect system, we would recover the input /cmd_vel) but also to estimate its position over time. Forward kinematics refers to the sequence of equations (based on models of how the physical system reacts to given inputs) that transform sensor data into robot odometry. Our kinematic equations assume no wheel slippage, drag, or other factors that are inherently difficult to predict in realistic environments, so odometry estimates will never perfectly match reality.
Let us define the following inputs:
L: distance between robot's wheels
w_l: angular velocity of left wheel
w_r: angular velocity of right wheel
Δt: change in time for the update
x: robot's previous x position
y: robot's previous y position
θ: robot's previous orientation
Define the following outputs:
R: radius of instantaneous center of curvature
v_l: tangential velocity of left wheel
v_r: tangential velocity of right wheel
v_c: robot's linear velocity
w_c: robot's rotational velocity
x': robot's current x position
y': robot's current y position
θ': robot's current orientation
We use simple forward kinematic equations that assume one of two scenarios when our differential drive robot is moving: either the robot is moving straight forward or moving around an "instantaneous center of curvature (ICC)".
Our forward kinematics model considers two instantaneous velocity scenarios: either the robot moves straight forward or it turns along some circle.
Moving Straight Forward
If moving straight forward, the dynamics are simple: the robot's angular velocity is 0 because both wheels move at the same speed. Naturally, we can then update the robot's pose (global position and orientation), as reconstructed below.
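Reconstructed in LaTeX (standard differential drive forward kinematics, writing r for the wheel radius, which is not listed above):

v_l = w_l \, r, \qquad v_r = w_r \, r, \qquad v_c = v_l = v_r, \qquad w_c = 0

x' = x + v_c \, \Delta t \, \cos\theta, \qquad y' = y + v_c \, \Delta t \, \sin\theta, \qquad \theta' = \theta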
Rotating Around a Circle
We first compute the radius of the circle that the robot is instantaneously following, which in turn gives the robot's rotational velocity. We can then compute the center of this circle (the ICC). Finally, we update the robot's pose, as reconstructed below.
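Reconstructed in LaTeX (again following the standard differential drive forward kinematics, e.g. the Hellström notes referenced below):

R = \frac{L}{2} \cdot \frac{v_l + v_r}{v_r - v_l}, \qquad w_c = \frac{v_r - v_l}{L}, \qquad v_c = \frac{v_r + v_l}{2}

ICC = \left( x - R \sin\theta, \; y + R \cos\theta \right)

x' = \cos(w_c \Delta t)(x - ICC_x) - \sin(w_c \Delta t)(y - ICC_y) + ICC_x

y' = \sin(w_c \Delta t)(x - ICC_x) + \cos(w_c \Delta t)(y - ICC_y) + ICC_y

\theta' = \theta + w_c \, \Delta t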
We publish the computed global pose of the robot in the world as both a tf transform and an odometry message. The odometry message also includes velocity information, which means it carries more of the robot's state (position, orientation, linear velocity, and rotational velocity).
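A condensed sketch of this publishing step, following the ROS odometry tutorial linked below; the frame names ("odom", "base_link") are the usual ROS convention and the function signature is illustrative rather than the repository's actual code.
# Condensed sketch of publishing odometry plus the matching tf transform,
# meant to be called from the diffdrive_odom update loop. Details are illustrative.
import tf
from nav_msgs.msg import Odometry
from geometry_msgs.msg import Quaternion

def publish_odom(odom_pub, tf_broadcaster, x, y, theta, v_c, w_c, stamp):
    quat = tf.transformations.quaternion_from_euler(0, 0, theta)
    # Broadcast the odom -> base_link transform
    tf_broadcaster.sendTransform((x, y, 0.0), quat, stamp, "base_link", "odom")
    # The odometry message carries both pose and velocity
    odom = Odometry()
    odom.header.stamp = stamp
    odom.header.frame_id = "odom"
    odom.child_frame_id = "base_link"
    odom.pose.pose.position.x = x
    odom.pose.pose.position.y = y
    odom.pose.pose.orientation = Quaternion(*quat)
    odom.twist.twist.linear.x = v_c
    odom.twist.twist.angular.z = w_c
    odom_pub.publish(odom)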
Resources:
ROS Odometry Implementation: http://wiki.ros.org/navigation/Tutorials/RobotSetup/Odom
ROS Odometry in Python: answers.ros.org/question/79851/python-odometry
Launching
We are going to use wheel encoder sensor data for 1) PID feedback control and 2) estimating the robot's odometry.
We are using the same launch files nav_behaviors.launch and gopigo_interface.launch. We edited them to run additional ROS nodes (diffdrive_odom for nav_behaviors.launch and gopigo_state_updater for gopigo_interface.launch). Try running the following scenarios.
Odometry in Simulation
As a basic sanity check, we want to ensure that our forward kinematic transformations are correct by showing that we can recover the original cmd_vel message.
In gopigo_interface.launch, set the following for all:
<param name="gopigo_on" value="False"/>
Now run the launch files. On the host machine:
roslaunch nav_behaviors nav_behaviors.launch
On the Raspberry Pi:
roslaunch gopigo_description gopigo_interface.launch
Print the following messages to verify that our computations are correct:
rostopic echo /cmd_vel
rostopic echo /lwheel_tangent_vel_target
rostopic echo /rwheel_tangent_vel_target
rostopic echo /lwheel_angular_vel_target
rostopic echo /rwheel_angular_vel_target
rostopic echo /lwheel_angular_vel_control
rostopic echo /rwheel_angular_vel_control
rostopic echo /lwheel_angular_vel_motor
rostopic echo /rwheel_angular_vel_motor
rostopic echo /lwheel_angular_vel_enc
rostopic echo /rwheel_angular_vel_enc
rostopic echo /lwheel_tangent_vel_enc
rostopic echo /rwheel_tangent_vel_enc
rostopic echo /cmd_vel_enc
Send target /cmd_vel messages and see how the messages change.
./nav_forward.sh
./nav_rotate.sh
When we set gopigo_on to False, the gopigo_state_updater node will basically copy over the target angular velocity (in essence, simulating as if the encoder readings perfectly match our given model). Check that /cmd_vel and /cmd_vel_enc match.
Odometry on the GoPiGo
Let us compute odometry using actual wheel encoder sensor data.
In gopigo_interface.launch, set the following for all:
<param name="gopigo_on" value="True"/>
Run the same launch files listed above. Also print out the same ROS messages.
Verify that /cmd_vel and /cmd_vel_enc are similar. This should tell you how well reality matches our expected model.
PID Control on the GoPiGo
PID control is supposed to ensure that the wheels are rotating at the expected speed.
In gopigo_interface.launch, set
<param name="pid_on" value="True"/>
Try running the robot on surfaces with different friction and verify that the robot more reliably travels the same distance when PID control is turned on.