Where to get

You can download the fixed-point CNN weights, input test vectors, and output golden reference data for AlexNet/VGG-16 from the following links to run and evaluate the project.

  • Baidu Cloud Drive: http://pan.baidu.com/s/1jIl6qkm
  • Google Drive: https://drive.google.com/open?id=0B3srpZY5rHcASEhSSmh3Tm5LclU

Fixed-Point CNN models

The weights are quantized to 8-bit precision, and the model quantization is performed such that the loss in top-1/top-5 accuracy is within 1% for all models. In quantization, we assume a fixed-point weight is represented as N * 2^-m, where N is a fixed-point integer with an n-bit word length and m denotes the number of fractional bits of the quantized weight. Therefore, we use the pair of integers (n,m) as the quantization parameters. The following tables report the quantization parameters used in this project for the different CNN models.
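As a minimal sketch of this convention (the function names are ours, not the project's), a float value can be converted to and from the N * 2^-m representation like so; note that m may be negative, as in the activation formats below:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

// Quantize a float to an n-bit fixed-point integer N with m fractional
// bits, so that the value represented is N * 2^-m.
int32_t quantize(float x, int n, int m) {
    int32_t lo = -(1 << (n - 1));         // e.g. -128 for n = 8
    int32_t hi = (1 << (n - 1)) - 1;      // e.g.  127 for n = 8
    int32_t q = static_cast<int32_t>(std::round(x * std::ldexp(1.0f, m)));
    return std::clamp(q, lo, hi);         // saturate on overflow
}

// Recover the real value N * 2^-m from the quantized integer.
float dequantize(int32_t q, int m) {
    return q * std::ldexp(1.0f, -m);
}
```

For example, with the conv1 weight format (8,8), the value 0.25 quantizes to N = 64, since 64 * 2^-8 = 0.25.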

  • AlexNet (CaffeNet)

| Layer Name | Input | Output | Weight |
|------------|-------|--------|--------|
| conv1      | 8,0   | 8,-4   | 8,8    |
| relu1      | 8,-4  | 8,-4   |        |
| lrn1       | 8,-4  | 8,0    |        |
| pool1      | 8,0   | 8,0    |        |
| conv2      | 8,0   | 8,-2   | 8,8    |
| relu2      | 8,-2  | 8,-2   |        |
| lrn2       | 8,-2  | 8,0    |        |
| pool2      | 8,0   | 8,0    |        |
| conv3      | 8,0   | 8,-1   | 8,8    |
| relu3      | 8,-1  | 8,-1   |        |
| conv4      | 8,-1  | 8,-1   | 8,8    |
| relu4      | 8,-1  | 8,-1   |        |
| conv5      | 8,-1  | 8,-1   | 8,8    |
| relu5      | 8,-1  | 8,-1   |        |
| pool5      | 8,-1  | 8,-1   |        |
| fc6        | 8,-1  | 8,0    | 8,11   |
| relu6      | 8,0   | 8,0    |        |
| drop6      | 8,0   | 8,0    |        |
| fc7        | 8,0   | 8,2    | 8,10   |
| relu7      | 8,2   | 8,2    |        |
| drop7      | 8,2   | 8,2    |        |
| fc8        | 8,2   | 8,2    | 8,10   |
  • VGG-16

| Layer Name | Input | Output | Weight |
|------------|-------|--------|--------|
| conv1_1    | 8,0   | 8,-2   | 8,7    |
| relu1_1    |       |        |        |
| conv1_2    | 8,-2  | 8,-5   | 8,8    |
| relu1_2    |       |        |        |
| conv2_1    | 8,-5  | 8,-5   | 8,8    |
| relu2_1    |       |        |        |
| conv2_2    | 8,-5  | 8,-6   | 8,8    |
| relu2_2    |       |        |        |
| conv3_1    | 8,-6  | 8,-7   | 8,7    |
| relu3_1    |       |        |        |
| conv3_2    | 8,-7  | 8,-7   | 8,8    |
| relu3_2    |       |        |        |
| conv3_3    | 8,-7  | 8,-7   | 8,8    |
| relu3_3    |       |        |        |
| conv4_1    | 8,-7  | 8,-6   | 8,8    |
| relu4_1    |       |        |        |
| conv4_2    | 8,-6  | 8,-5   | 8,8    |
| relu4_2    |       |        |        |
| conv4_3    | 8,-5  | 8,-5   | 8,8    |
| relu4_3    |       |        |        |
| conv5_1    | 8,-5  | 8,-4   | 8,9    |
| relu5_1    |       |        |        |
| conv5_2    | 8,-4  | 8,-3   | 8,9    |
| relu5_2    |       |        |        |
| conv5_3    | 8,-3  | 8,-2   | 8,8    |
| relu5_3    |       |        |        |
| fc6        | 8,-2  | 8,0    | 4,8    |
| fc7        | 8,0   | 8,2    | 4,7    |
| fc8        | 8,2   | 8,2    | 4,7    |
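Under our reading of the (n,m) convention above (this is an illustrative sketch, not the project's actual code), a layer's multiply-accumulate result carries m_in + m_w fractional bits, so converting the 32-bit accumulator to the next layer's 8-bit format amounts to an arithmetic shift followed by saturation:

```cpp
#include <algorithm>
#include <cstdint>

// Requantize a 32-bit MAC accumulator from the (m_in + m_w)-fractional-bit
// intermediate format to the 8-bit output format with m_out fractional bits.
int8_t requantize(int32_t acc, int m_in, int m_w, int m_out) {
    int shift = m_in + m_w - m_out;   // fractional bits to drop
    int32_t v = (shift >= 0) ? (acc >> shift) : (acc << -shift);
    return static_cast<int8_t>(std::clamp(v, -128, 127));   // saturate to int8
}
```

For example, for AlexNet's conv1 (input (8,0), weight (8,8), output (8,-4)) the accumulator would be right-shifted by 0 + 8 - (-4) = 12 bits before saturation.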

Notes:

Remember to update the corresponding data paths in "main.cpp" before running the project.