Need help to solve a runtime error #6
If it is input- and output-related, you might have errors in setting up these two options:
Hi Mengtian, thanks so much for your reply. I tried a couple of changes, but it still does not work. Specifically, the errors occur at pred_fake = self.netD(fake_AB.detach()) (line 93 of pix2pix_model.py in the models folder) and at pred_real = self.netD(real_AB) (line 102 of pix2pix_model.py). The shape of 'fake_AB' is (1, 6, 256, 256) and the shape of 'real_AB' is (1, 4, 256, 256). I think the problem is that, no matter which discriminator we choose, its first conv layer would need to handle both 6 input channels and 4 input channels simultaneously, which seems impossible. Sorry to create more work for you, but I look forward to more suggestions or solutions. Thank you in advance.
Did you set
Changing the output channel to 1 solves my problem!! You really helped me a lot, thank you so much!!
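The fix above comes down to channel bookkeeping: a pix2pix-style conditional discriminator sees the input photo and the sketch concatenated along the channel dimension, so its first conv layer must accept input_nc + output_nc channels. A minimal sketch of the arithmetic (option names input_nc/output_nc follow the pix2pix convention and are assumed here, not taken from this repo's code):

```python
# Channel bookkeeping for a pix2pix-style conditional discriminator.
# input_nc / output_nc are assumed pix2pix-convention option names.

def disc_in_channels(input_nc, output_nc):
    """The discriminator consumes torch.cat([A, B], dim=1), so its
    first conv layer needs input_nc + output_nc input channels."""
    return input_nc + output_nc

# Mismatched setup from the traceback: the generator emits 3-channel
# fake sketches, so fake_AB has 3 + 3 = 6 channels, while the
# ground-truth sketches are 1-channel, so real_AB has 3 + 1 = 4.
fake_ab_channels = disc_in_channels(3, 3)  # 6
real_ab_channels = disc_in_channels(3, 1)  # 4
assert fake_ab_channels != real_ab_channels  # one conv layer cannot serve both

# With the output channel set to 1, both fake_AB and real_AB have
# 3 + 1 = 4 channels, matching a discriminator built for 4 inputs.
assert disc_in_channels(3, 1) == disc_in_channels(3, 1)
```

In other words, the output-channel option must match the number of channels in the ground-truth sketches (1 for grayscale), so that fake_AB and real_AB agree.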
When I run the code, I keep getting the following error. Any suggestions would be appreciated!
Traceback (most recent call last):
File "/root/PycharmProjects/Photo-Sketching/train.py", line 68, in <module>
main()
File "/root/PycharmProjects/Photo-Sketching/train.py", line 39, in main
model.optimize_parameters() # error
File "/root/PycharmProjects/Photo-Sketching/models/pix2pix_model.py", line 139, in optimize_parameters
self.backward_D()
File "/root/PycharmProjects/Photo-Sketching/models/pix2pix_model.py", line 106, in backward_D
pred_real = self.netD(real_AB) # error------------------------------------------------
File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/root/PycharmProjects/Photo-Sketching/models/networks.py", line 423, in forward
return self.model(input)
File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/container.py", line 92, in forward
input = module(input)
File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/conv.py", line 320, in forward
self.padding, self.dilation, self.groups)
RuntimeError: Given groups=1, weight of size [64, 6, 4, 4], expected input[1, 4, 256, 256] to have 6 channels, but got 4 channels instead
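The RuntimeError can be decoded directly from the shapes it prints: a Conv2d weight of size [64, 6, 4, 4] means out_channels=64, in_channels=6, and a 4x4 kernel, so any input tensor must have 6 channels, but real_AB here is (1, 4, 256, 256). A torch-free sketch of that check:

```python
# Decoding the RuntimeError from its printed shapes.
# Conv2d weight layout: (out_channels, in_channels, kernel_h, kernel_w).
weight_shape = (64, 6, 4, 4)      # the discriminator's first conv layer
input_shape = (1, 4, 256, 256)    # real_AB from the error message

out_c, in_c, kh, kw = weight_shape
n, c, h, w = input_shape

# The conv layer expects in_c channels; real_AB supplies only c of them,
# which is exactly what the error reports (expected 6, got 4).
assert c != in_c
```

So the discriminator was built for 6-channel inputs while the real photo/sketch pair supplies only 4, which is why setting the output channel to 1 (so both sides have 4 channels) resolves it.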