use fbgemm cpu kernel in forward quantized cpu emb (pytorch#849)
Summary:
Pull Request resolved: pytorch#849

For higher performance and more code reuse.

Reviewed By: jianyuh

Differential Revision: D33430292

fbshipit-source-id: fd53b3ccf2bd6f6ada64aa6b5964c5f311e898f9
jspark1105 authored and facebook-github-bot committed Jan 8, 2022
1 parent 747fc4a commit 0e5737d
Showing 6 changed files with 168 additions and 375 deletions.
@@ -13,6 +13,7 @@
 
 #include "codegen/embedding_forward_split_cpu.h"
 #include "fbgemm/FbgemmEmbedding.h"
+#include "fbgemm_gpu/cpu_utils.h"
 #include "fbgemm_gpu/embedding_common.h"
 
 using Tensor = at::Tensor;

@@ -178,9 +179,11 @@ split_embedding_backward_codegen_{{ optimizer }}_cpu(
           eps,
           // fbgemm follows caffe2 convention of negative learning rate
           -learning_rate);
-      // TODO: more friendly error msg.
-      // See report_error_ in embedding_forward_split_cpu.cpp
-      TORCH_CHECK(success);
+
+      if (!success) {
+        fbgemm_gpu::report_embedding_error(
+            t, B, b_begin, b_end, offsets_data, indices_data, hash_size);
+      }
     }
   }); // parallel_for
   return;