Issue search results · repo:TianhongDai/hindsight-experience-replay language:Python

29 results


Hi, thank you for sharing the implementations! I encountered memory problems when using images (128*128) as observations in my custom environment. I think the image observations should be first ...
  • Leong1230
  • Opened on Jul 5, 2024
  • #32

Hello! First of all, thank you for providing the DDPG+HER code; it has been a great help. However, I have some basic questions, as I am just starting to learn about reinforcement learning. After adapting ...
  • binbinyouli12
  • 2
  • Opened on Jun 13, 2024
  • #31

Hi, thanks for the interesting work. I am not sure how you set the goal for the pick-up or pushing task, because the goal is the position of the object. As I understand it, the goal of all the failed trials ...
  • gautica
  • 7
  • Opened on Jun 19, 2023
  • #30

I tried training from scratch; only FetchReach reaches a 1.0 success rate, while the others cannot go beyond 0.3 to 0.4.
  • faheem-khaskheli
  • 1
  • Opened on Jun 19, 2023
  • #29

The required versions of the libraries listed in the README are outdated; are newer versions available? Thanks!
  • zichunxx
  • 2
  • Opened on Apr 25, 2023
  • #28

The policy for the FetchReach task seems to converge much faster than in the original report, considering that the command given in the README should result in 10 * 2 * 50 = 1000 timesteps per ...
  • ArshT
  • 2
  • Opened on Mar 28, 2023
  • #27

Hi, thank you for sharing the code. I've tried to run the code as suggested in the README: mpirun -np 8 python -u train.py --env-name='FetchPush-v1' 2>&1 | tee push.log. But the success rate is much lower ...
  • root221
  • 2
  • Opened on Jun 18, 2022
  • #26

Hi, thanks for sharing the code. I'm wondering if I can train the DDPG agent in the HandManipulate envs, since they are from the same robotics env group.
  • LingfengTao
  • 1
  • Opened on May 18, 2022
  • #25

Thanks a lot! This project works well with my own robotic environment. But I am confused about her.her_sampler.sample_her_transitions, because it's quite different from the 'future' strategy as I understand it. ...
  • whynpt
  • 4
  • Opened on Mar 30, 2022
  • #24

In the Fetchxxx-Env, the env's distance_threshold is set to 0.05 by default to determine whether a task is completed successfully. I tried to modify it by setting env.distance_threshold = 0.01 (or another value) ...
  • QAbot-zh
  • 1
  • Opened on Mar 22, 2022
  • #23