
with torch.enable_grad() #5

Open
gk966988 opened this issue Nov 16, 2019 · 1 comment

Comments

@gk966988

Hi,
in the forward() function in gain.py, what happens if the line "with torch.enable_grad():" is removed? Does it affect the final result?

@paganpasta

@gk966988 I am confused as well. To my understanding, "with torch.enable_grad():" should be removed, along with any other calls that switch the module between train and eval mode. Whether you call forward in eval or train mode, you can still compute gradients. Moreover, to keep validation inference consistent, the mode should not be switched from eval to train. The only thing to make sure of is not wrapping the testing code in a "with torch.no_grad():" block.

But would love to know from @ngxbac the reasoning behind his choice.
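For context, here is a minimal sketch of what torch.enable_grad() actually does: it re-enables gradient tracking inside an enclosing torch.no_grad() block. This would matter if a caller runs the model under no_grad at inference time but forward() still needs gradients with respect to intermediate tensors (as GAIN-style attention maps do). The tensor names below are illustrative, not from gain.py.

```python
import torch

x = torch.ones(3, requires_grad=True)

with torch.no_grad():
    y1 = x * 2                  # tracking off: no graph is built here
    with torch.enable_grad():
        y2 = x * 2              # tracking re-enabled inside this block

print(y1.requires_grad)  # False
print(y2.requires_grad)  # True
y2.sum().backward()      # works: y2 is connected to the autograd graph
print(x.grad)            # tensor([2., 2., 2.])
```

So removing the block is harmless only if forward() is never called under torch.no_grad(); if it is, the gradient computation inside forward() would fail or silently return detached tensors.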
