RW Dist change to support uneven sharding [1] FBGEMM changes (pytorch#2168)

Summary:
Pull Request resolved: pytorch#2168

We reverted a diff due to an incompatibility between the frontend and backend packages during the code freeze. Here we reland only the backend part, which will not be picked up by the production package. The frontend part will be landed after the code freeze.

Reviewed By: IvanKobzarev

Differential Revision: D51496816

fbshipit-source-id: beac54e7d629e8919d1b280161dba491ed8a3431
gnahzg authored and facebook-github-bot committed Nov 29, 2023
1 parent 886bf42 commit c6e3fa2
Showing 1 changed file with 2 additions and 1 deletion.
fbgemm_gpu/src/sparse_ops/sparse_ops_cpu.cpp (2 additions, 1 deletion):
@@ -306,7 +306,8 @@ void _block_bucketize_sparse_features_cpu(
   const index_t* const block_sizes_data = block_sizes.data_ptr<index_t>();
   offset_t* batch_sizes_data = nullptr;
   const auto variable_batch_size = batch_size_per_feature.has_value();
-  const auto variable_bucket_sizes = block_bucketize_pos.has_value();
+  const auto variable_bucket_sizes = block_bucketize_pos.has_value() &&
+      block_bucketize_pos.value().size() != 0;
   using uindex_t = std::make_unsigned_t<index_t>;
   using uoffset_t = std::make_unsigned_t<offset_t>;
   std::vector<int64_t> lower_bounds(indices.numel(), 0);
