FixedQParamsFakeQuantize: adjust default quant_min and quant_max (pytorch#47423)

Summary: Pull Request resolved: pytorch#47423

Since the dtype of this fake_quant is `quint8`, the output range should be 0 to 255, so the defaults are fixed accordingly. This should address the numerical inaccuracies of sigmoid and hardsigmoid with `FixedQParamsFakeQuantize` attached, compared to their quantized counterparts.

In a future PR, it might be safer to also make the activation functions that use `FixedQParamsFakeQuantize` explicitly specify their expected output range and zero_point. Leaving that for later, as this bugfix should be landed urgently.

Test Plan: Manual script which gives low SQNR before this PR and high SQNR after this PR: https://gist.github.com/vkuzo/9906bae29223da72b10d6b6aafadba42

pytorch#47376, which can be landed after this, adds a proper test.

Imported from OSS

Reviewed By: ayush29feb, jerryzh168

Differential Revision: D24751497

fbshipit-source-id: 4c32e22a30116caaceeedb4cd47146d066054a89
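The effect of the wrong range can be illustrated with a plain-Python sketch of fake quantization (quantize, clamp, dequantize). This is not PyTorch's implementation; it only assumes the fixed qparams `scale = 1/256`, `zero_point = 0` that match sigmoid's (0, 1) output range. With the qint8-style bounds -128..127, every sigmoid output above roughly 0.496 gets clamped, which is the source of the low SQNR:

```python
import math

def fake_quantize(x, scale, zero_point, quant_min, quant_max):
    # Quantize to the integer grid, clamp to the dtype range, dequantize.
    q = round(x / scale) + zero_point
    q = max(quant_min, min(quant_max, q))
    return (q - zero_point) * scale

scale, zero_point = 1.0 / 256, 0      # fixed qparams for sigmoid's (0, 1) range

s = 1 / (1 + math.exp(-0.7))          # sigmoid(0.7), roughly 0.668

# Correct quint8 range: reconstruction error is at most one quantization step.
good = fake_quantize(s, scale, zero_point, quant_min=0, quant_max=255)

# Buggy qint8-style range: the value is clamped to 127 and dequantizes
# to 127/256, roughly 0.496, regardless of the true sigmoid output.
bad = fake_quantize(s, scale, zero_point, quant_min=-128, quant_max=127)
```

After the fix, `good` tracks the float sigmoid to within one step of the 1/256 grid, while `bad` saturates at 127/256 for the whole upper half of sigmoid's range.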
Commit 5977d1d (1 parent: 745899f). Showing 2 changed files with 32 additions and 3 deletions.