Commit

Merge remote-tracking branch 'origin/master'

# Conflicts:
#	README.md

jakc4103 committed Mar 5, 2020
2 parents a346f0e + 32f5245 commit feb19cf
Showing 672 changed files with 149,777 additions and 2 deletions.
54 changes: 54 additions & 0 deletions .vscode/settings.json
@@ -0,0 +1,54 @@
+{
+    "files.associations": {
+        "array": "cpp",
+        "atomic": "cpp",
+        "*.tcc": "cpp",
+        "cctype": "cpp",
+        "chrono": "cpp",
+        "clocale": "cpp",
+        "cmath": "cpp",
+        "complex": "cpp",
+        "cstdarg": "cpp",
+        "cstddef": "cpp",
+        "cstdint": "cpp",
+        "cstdio": "cpp",
+        "cstdlib": "cpp",
+        "cstring": "cpp",
+        "ctime": "cpp",
+        "cwchar": "cpp",
+        "cwctype": "cpp",
+        "deque": "cpp",
+        "list": "cpp",
+        "unordered_map": "cpp",
+        "vector": "cpp",
+        "exception": "cpp",
+        "algorithm": "cpp",
+        "map": "cpp",
+        "memory": "cpp",
+        "memory_resource": "cpp",
+        "optional": "cpp",
+        "ratio": "cpp",
+        "set": "cpp",
+        "string": "cpp",
+        "string_view": "cpp",
+        "system_error": "cpp",
+        "tuple": "cpp",
+        "type_traits": "cpp",
+        "utility": "cpp",
+        "fstream": "cpp",
+        "initializer_list": "cpp",
+        "iomanip": "cpp",
+        "iosfwd": "cpp",
+        "iostream": "cpp",
+        "istream": "cpp",
+        "limits": "cpp",
+        "new": "cpp",
+        "ostream": "cpp",
+        "sstream": "cpp",
+        "stdexcept": "cpp",
+        "streambuf": "cpp",
+        "thread": "cpp",
+        "cinttypes": "cpp",
+        "typeinfo": "cpp"
+    }
+}
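(For context: a `files.associations` map like the one above tells VS Code to open the listed extensionless C++ standard-library headers, plus `*.tcc` template-implementation files, in C++ language mode; the C/C++ extension typically generates these entries automatically when you navigate into libstdc++ headers.)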
4 changes: 2 additions & 2 deletions README.md
@@ -155,8 +155,8 @@ python convert_ncnn.py --equalize --correction --quantize --relu --ncnn_build pa
 Weight_quant(FP32) = Weight_quant(Int8*) = Dequant(Quant(Weight))
 ```

-### 16-bit Quantization for Bias
-Somehow I cannot make **Bias-Correction** work with 8-bit bias quantization in all scenarios (even with data-dependent correction).
+### 16-bit Quantization for Bias (still 8-bit for weights and activations)
+Somehow I cannot make **Bias-Correction** work with 8-bit bias quantization (even with data-dependent correction).
 I am not sure how the original paper managed to do it with 8-bit quantization; I guess they either use some non-uniform quantization technique or use more bits for the bias parameters, as I do.

 ### Int8 inference
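The equation in this hunk describes fake quantization: the stored FP32 weights take only values representable in Int8, i.e. `Weight_quant(FP32) = Dequant(Quant(Weight))`. A minimal sketch of that identity and of the 16-bit bias scheme mentioned above, assuming symmetric uniform quantization (all names here are illustrative, not this repository's API):

```python
import numpy as np

def quant(x, num_bits=8):
    """Uniform symmetric quantization to integers in [-2^(b-1), 2^(b-1) - 1]."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = np.abs(x).max() / qmax
    return np.clip(np.round(x / scale), -qmax - 1, qmax), scale

def dequant(q, scale):
    """Map quantized integers back to floating point."""
    return q * scale

# Fake quantization: w_fake is FP32 but holds only Int8-representable values,
# i.e. Weight_quant(FP32) = Dequant(Quant(Weight)).
w = np.random.randn(64, 32).astype(np.float32)
q_w, s_w = quant(w, num_bits=8)    # 8-bit weights
w_fake = dequant(q_w, s_w)

b = np.random.randn(64).astype(np.float32)
q_b, s_b = quant(b, num_bits=16)   # 16-bit bias, per the README note above
b_fake = dequant(q_b, s_b)

# The bias rounding error is far smaller than the weight rounding error.
print(np.abs(w - w_fake).max(), np.abs(b - b_fake).max())
```

At the same dynamic range, going from 8 to 16 bits shrinks the quantization step (and hence the worst-case rounding error) by a factor of 2^8 = 256, which is consistent with the README's observation that extra bias bits succeed where 8-bit bias correction fails.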
Binary file added ZeroQ/__pycache__/distill_data.cpython-36.pyc
Binary file added ZeroQ/utils/__pycache__/__init__.cpython-36.pyc
Binary file added ZeroQ/utils/__pycache__/data_utils.cpython-36.pyc
Binary file added __pycache__/dfq.cpython-36.pyc
Binary file added __pycache__/improve_dfq.cpython-36.pyc