Fix the OSS build error for fbgemm/torchrec (pytorch#1001)
Summary:
Pull Request resolved: pytorch#1001

OSS failure from D34840551 (pytorch@43c0f12):

- fbgemm_gpu: https://github.com/pytorch/FBGEMM/runs/5616872090?check_suite_focus=true
- TorchRec:  https://github.com/pytorch/torchrec/runs/5604628984?check_suite_focus=true

```
/home/ec2-user/actions-runner/_work/FBGEMM/FBGEMM/fbgemm_gpu/src/jagged_tensor_ops.cu:200:6: internal compiler error: in maybe_undo_parenthesized_ref, at cp/semantics.c:1739
   JAGGED_TENSOR_DISPATCH_DIMS();
```

~~Restrict num_jagged_dim to only 1 / 2 until we can figure out a way to bypass this compiler issue.~~

**The issue**: with certain compiler versions, the lambda used inside the macro needs to capture by value (`[=]`) instead of by reference (`[&]`).
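
A minimal standalone sketch of the pattern (not the actual FBGEMM macro; `JAGGED_DISPATCH_DIMS_SKETCH` and `run_kernel` are hypothetical stand-ins): the macro expands to an immediately invoked lambda, and switching its capture from `[&]` to `[=]` is the workaround this diff applies to the real `JAGGED_TENSOR_DISPATCH_DIMS` macro.

```cpp
// Hypothetical stand-in for the real dispatch macro; it only illustrates the
// lambda-capture change. The real macro additionally wraps the switch in
// AT_DISPATCH_INDEX_TYPES.
#include <cstdio>

static void run_kernel(int dims) {
  std::printf("launch kernel specialized for %d jagged dim(s)\n", dims);
}

// Capturing by value ([=]) instead of by reference ([&]) inside the macro
// expansion sidesteps the "maybe_undo_parenthesized_ref" internal compiler
// error seen with some compiler versions.
#define JAGGED_DISPATCH_DIMS_SKETCH() \
  [=] {                               \
    switch (num_jagged_dim) {         \
      case 1:                         \
        run_kernel(1);                \
        break;                        \
      case 2:                         \
        run_kernel(2);                \
        break;                        \
      default:                        \
        run_kernel(num_jagged_dim);   \
    }                                 \
  }()

int main() {
  // Like the real macro, the sketch expects num_jagged_dim to be in scope.
  const int num_jagged_dim = 2;
  JAGGED_DISPATCH_DIMS_SKETCH();  // expands to an immediately invoked lambda
  return 0;
}
```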

Reviewed By: colin2328, jasonjk-park, brad-mengchi

Differential Revision: D35028747

fbshipit-source-id: 04da934c5dbdf5eafaf9dce8a081bf7f01344b12
jianyuh authored and facebook-github-bot committed Mar 22, 2022
1 parent 4aa2c9d commit ccf2ff2
Showing 1 changed file with 4 additions and 1 deletion.
fbgemm_gpu/include/fbgemm_gpu/sparse_ops_utils.h:

```diff
@@ -264,8 +264,11 @@ constexpr uint32_t cuda_calc_block_count(
 }
 
 // Used in jagged_tensor_ops.cu and jagged_tensor_ops_cpu.cpp
+// Passing lambda exp argument by value instead of by reference to avoid
+// "internal compiler error: in maybe_undo_parenthesized_ref" error for specific
+// compiler version.
 #define JAGGED_TENSOR_DISPATCH_DIMS() \
-  AT_DISPATCH_INDEX_TYPES(x_offsets[0].scalar_type(), "jagged_indices", [&] { \
+  AT_DISPATCH_INDEX_TYPES(x_offsets[0].scalar_type(), "jagged_indices", [=] { \
     switch (num_jagged_dim) { \
       case 1: \
         INVOKE_KERNEL_WITH_DIM(1); \
```
