diff --git a/README.md b/README.md
index 909b009d..c85c37ec 100644
--- a/README.md
+++ b/README.md
@@ -1,5 +1,6 @@
 # DRL-robot-navigation
+
 Deep Reinforcement Learning for mobile robot navigation in the ROS Gazebo simulator. Using a Twin Delayed Deep Deterministic Policy Gradient (TD3) neural network, a robot learns to navigate to a random goal point in a simulated environment while avoiding obstacles. Obstacles are detected by laser readings and a goal is given to the robot in polar coordinates. Trained in the ROS Gazebo simulator with PyTorch. Tested with ROS Noetic on Ubuntu 20.04 with Python 3.8.10 and PyTorch 1.10.
 
 Training example:
 
@@ -8,7 +9,9 @@ Training example:
 
-**The implementation of this method has been accepted and published in IEEE RA-L:**
+
+**The implementation of this method has been accepted for ICRA 2022 and published in IEEE RA-L:**
+
 Some more information about the implementation is available [here](https://ieeexplore.ieee.org/document/9645287?source=authoralert)
diff --git a/TD3/velodyne_env.py b/TD3/velodyne_env.py
index f53d413f..90d1122c 100644
--- a/TD3/velodyne_env.py
+++ b/TD3/velodyne_env.py
@@ -188,6 +188,13 @@ def step(self, act):
 
         self.vel_pub.publish(vel_cmd)
         target = False
+
+        # Publish the robot action
+        vel_cmd = Twist()
+        vel_cmd.linear.x = act[0]
+        vel_cmd.angular.z = act[1]
+        self.vel_pub.publish(vel_cmd)
+
         rospy.wait_for_service('/gazebo/unpause_physics')
         try:
             self.unpause()
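For context on the `velodyne_env.py` change: the block added to `step()` converts the policy's two-dimensional action into a ROS `Twist` velocity command. Below is a minimal, self-contained sketch of that pattern. The node name (`action_publisher_sketch`), the `/cmd_vel` topic, and the `publish_action` helper are illustrative assumptions, not names from the repository; in the actual code the publisher is the environment's `self.vel_pub`.

```python
# Sketch only: publishing an agent action as a velocity command,
# mirroring the block added to step() in the diff above.
# Assumes a running ROS master; node and topic names are illustrative.
import rospy
from geometry_msgs.msg import Twist

rospy.init_node("action_publisher_sketch")  # hypothetical node name
vel_pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)  # topic is an assumption

def publish_action(act):
    # act[0] drives linear.x and act[1] drives angular.z, as in the diff
    vel_cmd = Twist()
    vel_cmd.linear.x = act[0]
    vel_cmd.angular.z = act[1]
    vel_pub.publish(vel_cmd)

publish_action([0.5, 0.0])  # e.g. drive straight ahead at 0.5 m/s
```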
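The README also states that the goal is given to the robot in polar coordinates. As a hedged illustration of what that representation usually means (this `goal_to_polar` helper is hypothetical, not taken from the repository), the goal is reduced to a distance and a heading error in the robot frame:

```python
# Sketch only: a common polar goal representation (distance, heading error)
# computed from robot pose and goal position; not code from the repo.
import math

def goal_to_polar(robot_x, robot_y, robot_yaw, goal_x, goal_y):
    dx, dy = goal_x - robot_x, goal_y - robot_y
    distance = math.hypot(dx, dy)             # Euclidean distance to the goal
    heading = math.atan2(dy, dx) - robot_yaw  # angle to goal in the robot frame
    # wrap the angle to [-pi, pi]
    heading = math.atan2(math.sin(heading), math.cos(heading))
    return distance, heading

print(goal_to_polar(0.0, 0.0, 0.0, 1.0, 1.0))  # -> (~1.414, ~0.785)
```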