Remove .data call in LSTM as it is not necessary (pytorch#122733)
Summary: Title

Test Plan: CI

Differential Revision: D55392057

Functional pre-dispatch tracing currently chokes on the `.data` call in LSTM. While that still needs a proper fix, the call seems unnecessary here.
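For illustration, here is a minimal sketch of the kind of functional pre-dispatch export that the `.data` access inside `LSTM.flatten_parameters` was tripping up. The wrapper module, names, and shapes are illustrative, not from this PR:

```python
import torch

class LSTMWrapper(torch.nn.Module):  # hypothetical example module
    def __init__(self):
        super().__init__()
        self.lstm = torch.nn.LSTM(input_size=4, hidden_size=8, num_layers=1)

    def forward(self, x):
        out, _ = self.lstm(x)
        return out

# torch.export traces the module functionally; a .data access inside the
# traced code is opaque to this style of tracing, which is what the PR body
# refers to.
ep = torch.export.export(LSTMWrapper(), (torch.randn(5, 3, 4),))
print(ep.graph_module.code)
```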

Pull Request resolved: pytorch#122733
Approved by: https://github.com/mikaylagawarecki, https://github.com/albanD
tugsbayasgalan authored and pytorchmergebot committed Mar 27, 2024
1 parent 1d6fc0d commit 1b9c7e4
Showing 2 changed files with 4 additions and 4 deletions.
1 change: 0 additions & 1 deletion test/export/test_export.py
@@ -1584,7 +1584,6 @@ def test_buffer_util(self):
         self.assertEqual(buffer[1].shape, torch.Size([100]))  # running_var
         self.assertEqual(buffer[2].shape, torch.Size([]))  # num_batches_tracked
 
-    @testing.expectedFailureSerDerPreDispatch  # tracked via: T181382045
     def test_export_dynamo_config(self):
         class MyModule(torch.nn.Module):
             def __init__(self):
7 changes: 4 additions & 3 deletions torch/nn/modules/rnn.py
@@ -183,9 +183,10 @@ def flatten_parameters(self) -> None:
         first_fw = self._flat_weights[0]
         dtype = first_fw.dtype
         for fw in self._flat_weights:
-            if (not isinstance(fw.data, Tensor) or not (fw.data.dtype == dtype) or
-                    not fw.data.is_cuda or
-                    not torch.backends.cudnn.is_acceptable(fw.data)):
+            if (
+                not isinstance(fw, Tensor) or not (fw.dtype == dtype) or
+                not fw.is_cuda or not torch.backends.cudnn.is_acceptable(fw)
+            ):
                 return
 
         # If any parameters alias, we fall back to the slower, copying code path. This is
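Why the `.data` indirection was unnecessary: `self._flat_weights` holds the LSTM's `nn.Parameter` objects, and `Parameter` is a `Tensor` subclass, so the dtype/device/cuDNN checks can run on the parameter itself. A quick sketch of the equivalence, using an illustrative CPU tensor:

```python
import torch
from torch import Tensor

# nn.Parameter subclasses Tensor, so the flatten_parameters checks apply to
# the parameter directly; going through .data adds nothing here.
p = torch.nn.Parameter(torch.randn(8, 4))
assert isinstance(p, Tensor)
assert p.dtype == p.data.dtype        # same dtype either way
assert p.is_cuda == p.data.is_cuda    # same device either way

# is_acceptable reports whether cuDNN can handle this tensor; False here
# simply because the tensor lives on the CPU.
print(torch.backends.cudnn.is_acceptable(p))
```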
