LogP Example: "TypeError: embedding(): argument 'indices' (position 2) must be Tensor, not list" #38
Comments
Gustavo, I guess your data is a list, but the model expects a tensor.
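In plain PyTorch terms (a minimal sketch, independent of what exactly the ReLeaSE/OpenChem code passes here), recent releases raise exactly this error whenever the indices handed to an nn.Embedding are a Python list instead of a tensor:

import torch
import torch.nn as nn

emb = nn.Embedding(num_embeddings=10, embedding_dim=4)

# emb([1, 2, 3])  # raises: TypeError: embedding(): argument 'indices' (position 2) must be Tensor, not list

indices = torch.LongTensor([1, 2, 3])  # wrapping the indices in a tensor avoids it
embedded = emb(indices)
print(embedded.shape)  # torch.Size([3, 4])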
On Mon, Oct 19, 2020 at 4:48 PM Gustavo Seabra wrote:
Hi,
I'm re-running the LogP example with the current version of PyTorch, and the execution stops in the reinforcement loop with a TypeError, as shown below. Are you aware of any changes in PyTorch that could be responsible for this? Is there a solution for it?
Thanks!
for i in range(n_iterations):
    for j in trange(n_policy, desc='Policy gradient...'):
        cur_reward, cur_loss = RL_logp.policy_gradient(gen_data)
        rewards.append(simple_moving_average(rewards, cur_reward))
        rl_losses.append(simple_moving_average(rl_losses, cur_loss))

    plt.plot(rewards)
    plt.xlabel('Training iteration')
    plt.ylabel('Average reward')
    plt.show()
    plt.plot(rl_losses)
    plt.xlabel('Training iteration')
    plt.ylabel('Loss')
    plt.show()

    smiles_cur, prediction_cur = estimate_and_update(RL_logp.generator,
                                                     my_predictor,
                                                     n_to_generate)
    print('Sample trajectories:')
    for sm in smiles_cur[:5]:
        print(sm)
with the error below:
Policy gradient...: 0%| | 0/15 [00:00<?, ?it/s]./release/data.py:98: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
return torch.tensor(tensor).cuda()
Policy gradient...: 0%| | 0/15 [00:00<?, ?it/s]
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-37-7a3a9698cf0c> in <module>
1 for i in range(n_iterations):
2 for j in trange(n_policy, desc='Policy gradient...'):
----> 3 cur_reward, cur_loss = RL_logp.policy_gradient(gen_data)
4 rewards.append(simple_moving_average(rewards, cur_reward))
5 rl_losses.append(simple_moving_average(rl_losses, cur_loss))
~/work/li/leadopt/generator/ReLeaSE/release/reinforcement.py in policy_gradient(self, data, n_batch, gamma, std_smiles, grad_clipping, **kwargs)
117 reward = self.get_reward(trajectory[1:-1],
118 self.predictor,
--> 119 **kwargs)
120
121 # Converting string of characters into tensor
<ipython-input-33-a8c049e9e937> in get_reward_logp(smiles, predictor, invalid_reward)
1 def get_reward_logp(smiles, predictor, invalid_reward=0.0):
----> 2 mol, prop, nan_smiles = predictor.predict([smiles])
3 if len(nan_smiles) == 1:
4 return invalid_reward
5 if (prop[0] >= 1.0) and (prop[0] <= 4.0):
~/work/li/leadopt/generator/ReLeaSE/release/rnn_predictor.py in predict(self, smiles, use_tqdm)
62 self.model[i]([torch.LongTensor(smiles_tensor).cuda(),
63 torch.LongTensor(length).cuda()],
---> 64 eval=True).detach().cpu().numpy())
65 prediction = np.array(prediction).reshape(len(self.model), -1)
66 prediction = np.min(prediction, axis=0)
/opt/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
720 result = self._slow_forward(*input, **kwargs)
721 else:
--> 722 result = self.forward(*input, **kwargs)
723 for hook in itertools.chain(
724 _global_forward_hooks.values(),
~/work/source/repos/OpenChem/openchem/models/Smiles2Label.py in forward(self, inp, eval)
41 else:
42 self.train()
---> 43 embedded = self.Embedding(inp)
44 output, _ = self.Encoder(embedded)
45 output = self.MLP(output)
/opt/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
720 result = self._slow_forward(*input, **kwargs)
721 else:
--> 722 result = self.forward(*input, **kwargs)
723 for hook in itertools.chain(
724 _global_forward_hooks.values(),
~/work/source/repos/OpenChem/openchem/modules/embeddings/basic_embedding.py in forward(self, inp)
7
8 def forward(self, inp):
----> 9 embedded = self.embedding(inp)
10 return embedded
/opt/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
720 result = self._slow_forward(*input, **kwargs)
721 else:
--> 722 result = self.forward(*input, **kwargs)
723 for hook in itertools.chain(
724 _global_forward_hooks.values(),
/opt/anaconda3/lib/python3.7/site-packages/torch/nn/modules/sparse.py in forward(self, input)
124 return F.embedding(
125 input, self.weight, self.padding_idx, self.max_norm,
--> 126 self.norm_type, self.scale_grad_by_freq, self.sparse)
127
128 def extra_repr(self) -> str:
/opt/anaconda3/lib/python3.7/site-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
1812 # remove once script supports set_grad_enabled
1813 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 1814 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
1815
1816
TypeError: embedding(): argument 'indices' (position 2) must be Tensor, not list
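As an aside, the UserWarning at the top of the log is a separate issue from the TypeError: it is PyTorch pointing out that release/data.py line 98 calls torch.tensor() on something that is already a tensor. A sketch of the copy pattern the warning recommends (not the data.py code itself):

import torch

t = torch.arange(5)
# torch.tensor(t) on an existing tensor triggers the warning; the recommended copies are:
plain_copy = t.clone().detach()
grad_copy = t.clone().detach().requires_grad_(True)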
|
Well... I'm no wiz, but that much I had figured out.
The point is that there's no such thing as "my data": I'm just running the LogP Jupyter notebook from the git repo.
I assume it did work fine when it was created, with PyTorch 0.4. But maybe there was some change in PyTorch internals?
--
Gustavo Seabra
|
Oh, that makes sense. Yeah, it does not work with the latest PyTorch; you still have to run it with the old one.
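If it helps anyone else landing here, a small guard at the top of the notebook makes the mismatch fail fast instead of deep inside the policy-gradient loop. This is just a sketch, assuming (per this thread) that the notebook targets PyTorch 0.4.x:

import torch

# Assumption from this thread: the ReLeaSE notebooks were written against PyTorch 0.4.x.
if not torch.__version__.startswith('0.4'):
    raise RuntimeError(
        'This notebook expects PyTorch 0.4.x but found {}; newer versions fail in the '
        'reinforcement loop with "TypeError: embedding(): argument \'indices\' '
        '(position 2) must be Tensor, not list".'.format(torch.__version__)
    )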
|
Right. What I wonder about is that PyTorch is getting the data from OpenChem here. Does that mean OpenChem needs an update, or would it be something localized to this notebook only?
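For reference, the traceback shows the whole [indices, lengths] list built in rnn_predictor.py reaching self.Embedding(inp) inside OpenChem's Smiles2Label.forward. A purely hypothetical local workaround, sketched only from the lines visible in the traceback (not a verified OpenChem fix), would be to pass just the index tensor to the embedding layer:

# Hypothetical sketch of a local edit to Smiles2Label.forward; the attribute names are
# taken from the traceback above and should be checked against your OpenChem checkout.
def forward(self, inp, eval=False):
    if eval:
        self.eval()
    else:
        self.train()
    # inp arrives here as [indices_tensor, lengths_tensor] (built in rnn_predictor.py),
    # so hand only the index tensor to the embedding layer.
    indices = inp[0] if isinstance(inp, (list, tuple)) else inp
    embedded = self.Embedding(indices)
    output, _ = self.Encoder(embedded)
    output = self.MLP(output)
    return output

That only works around the symptom at this call site, though; it does not answer whether OpenChem itself needs an update for newer PyTorch.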