RuntimeError in PyTorch 0.4.1 #4
Hello, I also get the "RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation" when backpropagating through the SmoothKNN function. Did you find a solution? Were you using SmoothKNN as the "loss_fn"? Thanks.
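For reference, a rough sketch of how I am using it. The sizes, dimensionality, k and alphas below are placeholders, and I am assuming the SmoothKNNStatistic(n_1, n_2, cuda, k) constructor and the (sample_1, sample_2, alphas) call from torch_two_sample.statistics_diff; it is an illustration of my setup, not a guaranteed reproduction:

```python
import torch
from torch_two_sample.statistics_diff import SmoothKNNStatistic

n_1 = n_2 = 64        # placeholder batch sizes
k = 3                 # placeholder number of neighbours
loss_fn = SmoothKNNStatistic(n_1, n_2, True, k)  # third argument is the cuda flag

real_batch = torch.randn(n_1, 10, device='cuda')                      # batch sampled from the data
fake_batch = torch.randn(n_2, 10, device='cuda', requires_grad=True)  # stands in for the generator output

loss = loss_fn(real_batch, fake_batch, alphas=[0.1])
loss.backward()  # this is where the in-place RuntimeError shows up
```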
Thanks for the bug report! Does this also happen for k=1?
Thank you for the quick reply! Let me give a bit more detail, because I did encounter different behaviour for k=1 and k>1 (I tried both since it is mentioned that the operation is done differently for k=1).

The backward pass is over two objectives computed for the same optimization step. At first I did (trans_score + CC_rec_loss).backward(), which led to various errors. I then tried to backward them separately (still doing a single optimization step afterwards), starting with trans_score.backward() alone; there I get the same error for both k=1 and k>1, namely "RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation". Both patterns are sketched below. Let me know if I can provide more information to help fix the issue, if anything can be done. Thanks!

PS: I also opened another issue; I guess it relates to this one, even though I am not sure that author is using SmoothKNN. In any case, my setting works perfectly with MMD but not with SmoothKNN, the only difference being that SmoothKNN takes a cuda boolean (which I set to True) and a k value. On that note, MMD has no cuda argument yet works on GPU, whereas SmoothKNN does have one; there may also be an additional problem with CUDA tensors mixed with CPU tensors that I did not understand (cf. the k>1 behaviour mentioned at the beginning).
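Concretely, the two patterns look roughly like this. The model, sizes and alphas are placeholders, CC_rec_loss is replaced by a dummy reconstruction term, and cuda=False is used here only so the sketch stays CPU-only; only the structure of the two backward calls matches what I am actually doing:

```python
import torch
from torch_two_sample.statistics_diff import SmoothKNNStatistic

n, d, k = 64, 10, 2                               # placeholder sizes
model = torch.nn.Linear(d, d)                     # stands in for my translation network
optimizer = torch.optim.Adam(model.parameters())
knn = SmoothKNNStatistic(n, n, False, k)          # (n_1, n_2, cuda, k)

source = torch.randn(n, d)
target = torch.randn(n, d)
out = model(source)

trans_score = knn(out, target, alphas=[0.1])      # SmoothKNN objective
CC_rec_loss = (out - source).pow(2).mean()        # stands in for the reconstruction objective

# Pattern 1: single backward over the sum (led to various errors in my setup)
# optimizer.zero_grad()
# (trans_score + CC_rec_loss).backward()
# optimizer.step()

# Pattern 2: separate backwards, still a single optimization step afterwards
optimizer.zero_grad()
trans_score.backward(retain_graph=True)  # fails with the in-place RuntimeError, for k=1 and k>1
CC_rec_loss.backward()
optimizer.step()
```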
Hi! I am continuing to experiment with the MMD criterion, and I have two questions for which you may have some recommendations. Thanks!
Dear Josipd, I have been doing some checks of gradient computation/propagation, and it seems that when I compute a loss with torch_two_sample.statistics_diff.MMDStatistic between a generated batch and a batch sampled from the target data distribution, it does not accumulate any gradient in the generative model. In my code I backward several criteria separately, and all of them accumulate gradients as expected except the MMD one; I can send the code through. I don't understand why it does not work, since this function should support backpropagation, right? The check I am doing is sketched below.
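Roughly, the check looks like this; the generator, dimensions and alphas are placeholder stand-ins for my real model, and I am assuming the MMDStatistic(n_1, n_2) constructor with the (sample_1, sample_2, alphas) call signature:

```python
import torch
from torch_two_sample.statistics_diff import MMDStatistic

n, noise_dim, data_dim = 64, 16, 10
generator = torch.nn.Sequential(               # stands in for the generative model
    torch.nn.Linear(noise_dim, 32),
    torch.nn.ReLU(),
    torch.nn.Linear(32, data_dim),
)
mmd = MMDStatistic(n, n)

noise = torch.randn(n, noise_dim)
target_batch = torch.randn(n, data_dim)        # stands in for a batch from the target distribution

generator.zero_grad()
loss = mmd(generator(noise), target_batch, alphas=[0.5])  # alphas = kernel precisions, placeholder value
loss.backward()

# If the MMD loss propagated into the generator, these should print non-zero norms.
for name, p in generator.named_parameters():
    print(name, 0.0 if p.grad is None else p.grad.norm().item())
```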
PyTorch 0.4.1
```
RuntimeError                              Traceback (most recent call last)
<ipython-input-...> in <module>()
     37 loss = loss_fn(Variable(batch), generator(noise), alphas=alphas)
     38 print(" loss is ", batch)
---> 39 loss.backward()
     40 print(" backward")
     41 optimizer.step()

~/Downloads/venv/lib/python3.6/site-packages/torch/tensor.py in backward(self, gradient, retain_graph, create_graph)
     91                 products. Defaults to ``False``.
     92         """
---> 93         torch.autograd.backward(self, gradient, retain_graph, create_graph)
     94
     95     def register_hook(self, hook):

~/Downloads/venv/lib/python3.6/site-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
     88     Variable._execution_engine.run_backward(
     89         tensors, grad_tensors, retain_graph, create_graph,
---> 90         allow_unreachable=True)  # allow_unreachable flag
     91
     92

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation
```
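For reference, the failing lines reconstructed as a self-contained sketch. Only the loss_fn(Variable(batch), generator(noise), alphas=alphas) call and the optimizer step come from the traceback above; the generator, sizes and alphas are placeholders, loss_fn is assumed to be the SmoothKNNStatistic discussed in this thread, and the detect_anomaly context is only a suggestion for locating the offending in-place operation, assuming this PyTorch build provides it:

```python
import torch
from torch.autograd import Variable
from torch_two_sample.statistics_diff import SmoothKNNStatistic

batch_size, data_dim, noise_dim = 64, 10, 16      # placeholder sizes
generator = torch.nn.Linear(noise_dim, data_dim)  # stands in for the real generator
optimizer = torch.optim.Adam(generator.parameters())
loss_fn = SmoothKNNStatistic(batch_size, batch_size, False, 2)  # (n_1, n_2, cuda, k)
alphas = [0.1]

batch = torch.randn(batch_size, data_dim)         # a batch from the data
noise = torch.randn(batch_size, noise_dim)

optimizer.zero_grad()
# detect_anomaly makes the backward error point at the forward op that
# performed the in-place modification (if this PyTorch version provides it).
with torch.autograd.detect_anomaly():
    loss = loss_fn(Variable(batch), generator(noise), alphas=alphas)
    print(" loss is ", batch)
    loss.backward()
print(" backward")
optimizer.step()
```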