Commit

add more comments on handling small scale; remove redundant if (pytorch#232)

Summary:
Pull Request resolved: pytorch#232

As title

Reviewed By: dskhudia

Differential Revision: D19231440

fbshipit-source-id: fe4db92bc6b5dd2822dbdac75f29b779d80cee65
jspark1105 authored and facebook-github-bot committed Dec 26, 2019
1 parent 23d8703 commit bcbdec7
Showing 2 changed files with 4 additions and 2 deletions.
src/PackAWithQuantRowOffset.cc — 2 changes: 1 addition & 1 deletion

```diff
@@ -46,7 +46,7 @@ PackAWithQuantRowOffset<T, accT>::PackAWithQuantRowOffset(
   if (!cpuinfo_initialize()) {
     throw std::runtime_error("Failed to initialize cpuinfo!");
   }
-  if (scale_ == 0.0f || std::isinf(1.0f / scale_)) {
+  if (scale_ == 0.0f) {
     throw std::runtime_error("scale cannot be zero");
   }
   if (std::isinf(1.0f / scale_)) {
```
src/QuantUtils.cc — 4 changes: 3 additions & 1 deletion

```diff
@@ -42,7 +42,9 @@ TensorQuantizationParams ChooseQuantizationParams(
   // final number to reflect the actual number used during quantization.
   float scale = (static_cast<double>(max) - min) / (qmax - qmin);
   // If scale is 0 or too small so its reciprocal is infinity, we arbitrary
-  // adjust the scale to 0.1
+  // adjust the scale to 0.1 . We want to avoid scale's reciprocal being infinity
+  // because some of fbgemm code pre-computes scale's reciprocal to do
+  // multiplication instead of division in the time critical part of code.
   if (scale == 0.0f || isinf(1.0f / scale)) {
     scale = 0.1;
   }
```
