
Agent not learning for maze environment with CRAR #78

Closed
parthchadha opened this issue Jan 1, 2020 · 2 comments

@parthchadha

I am trying to reproduce the results of the CRAR agent in the maze environment, but the agent's test reward does not improve at all: it stays at about -5 for all 250 epochs. Could you please point me to the experiment settings that reproduce the results?

@VinF (Owner) commented Jan 2, 2020

Hi Parth,

Thanks for reaching out. In the current code, the learning rate of the representation loss in CRAR is hard-coded for a different experiment:

```python
K.set_value(self.diff_s_s_.optimizer.lr, self._lr/5.)  # /5. for simple laby or simple catcher; /1. for distrib of laby
```

For the maze experiment, it must be set to `self._lr` instead of `self._lr/5.`.
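As a minimal sketch of the corrected line (assuming the file's usual Keras backend import, `from keras import backend as K`; the surrounding code may differ slightly in your checkout):

```python
# Use the full learning rate for the representation loss in the maze experiment
K.set_value(self.diff_s_s_.optimizer.lr, self._lr)  # was self._lr/5.
```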

Apologies for this hard-coded value. Please let me know if you still run into problems.

@parthchadha (Author)

Hi Vincent,
Thanks for pointing this out. I am able to reproduce the results now!
