Commit

Merge branch 'main' into Noetic
reiniscimurs authored Aug 9, 2022
2 parents 8e410ab + d51438a commit dc621d7
Showing 2 changed files with 11 additions and 1 deletion.
README.md (5 changes: 4 additions & 1 deletion)
@@ -1,5 +1,6 @@
 # DRL-robot-navigation
+
 
 Deep Reinforcement Learning for mobile robot navigation in ROS Gazebo simulator. Using Twin Delayed Deep Deterministic Policy Gradient (TD3) neural network, a robot learns to navigate to a random goal point in a simulated environment while avoiding obstacles. Obstacles are detected by laser readings and a goal is given to the robot in polar coordinates. Trained in ROS Gazebo simulator with PyTorch. Tested with ROS Noetic on Ubuntu 20.04 with python 3.8.10 and pytorch 1.10.
 
 Training example:
@@ -8,7 +9,9 @@ Training example:
 </p>
 
-**The implementation of this method has been accepted and published in IEEE RA-L:**
+
+**The implementation of this method has been accepted for ICRA 2022 and published in IEEE RA-L:**
+
 
 Some more information about the implementation is available [here](https://ieeexplore.ieee.org/document/9645287?source=authoralert)
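The README paragraph above notes that the goal is given to the robot in polar coordinates. A minimal sketch of that representation, assuming a standard distance-and-bearing encoding (the helper name `polar_goal` and its signature are illustrative, not code from this repository):

```python
import math

def polar_goal(robot_x, robot_y, robot_yaw, goal_x, goal_y):
    """Express a goal point in polar coordinates relative to the robot."""
    dx = goal_x - robot_x
    dy = goal_y - robot_y
    # Euclidean distance to the goal
    distance = math.hypot(dx, dy)
    # Bearing to the goal relative to the robot heading, wrapped to [-pi, pi]
    angle = math.atan2(dy, dx) - robot_yaw
    angle = (angle + math.pi) % (2 * math.pi) - math.pi
    return distance, angle
```

Feeding the network this two-value goal description keeps the observation compact and rotation-aware, instead of passing raw world coordinates.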
TD3/velodyne_env.py (7 changes: 7 additions & 0 deletions)
@@ -188,6 +188,13 @@ def step(self, act):
         self.vel_pub.publish(vel_cmd)
 
         target = False
+
+        # Publish the robot action
+        vel_cmd = Twist()
+        vel_cmd.linear.x = act[0]
+        vel_cmd.angular.z = act[1]
+        self.vel_pub.publish(vel_cmd)
+
         rospy.wait_for_service('/gazebo/unpause_physics')
         try:
             self.unpause()
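The lines added to `step()` map the two-element action onto a velocity command: `act[0]` becomes the forward (linear) velocity and `act[1]` the yaw (angular) rate of a `Twist` message. A ROS-free sketch of that mapping, using stand-in classes in place of `geometry_msgs.msg.Twist` (the real code constructs the actual message and publishes it through a `rospy` publisher):

```python
class Vector3:
    """Stand-in for geometry_msgs/Vector3."""
    def __init__(self):
        self.x = 0.0
        self.y = 0.0
        self.z = 0.0

class Twist:
    """Stand-in for geometry_msgs/Twist: linear and angular velocity."""
    def __init__(self):
        self.linear = Vector3()
        self.angular = Vector3()

def action_to_twist(act):
    # Map the 2-element action to a velocity command, as in step()
    vel_cmd = Twist()
    vel_cmd.linear.x = act[0]   # forward velocity
    vel_cmd.angular.z = act[1]  # yaw rate
    return vel_cmd
```

For a differential-drive robot only these two Twist fields are controllable, so the remaining linear/angular components stay at zero.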
