
zhaoweicai / hwgq

118 stars · 33 forks · 2.37 MB

Caffe implementation of accurate low-precision neural networks

License: Other

CMake 2.74% Makefile 0.70% C++ 80.16% Cuda 6.15% MATLAB 0.88% Python 8.87% Shell 0.43% Dockerfile 0.07%

hwgq's People

Contributors: maltanar
hwgq's Issues

Exception: "Only binarized weights supported" when using deploy_bw.prototxt in FINN

Hi,
I would like to generate a hardware design through FINN by passing it the prototxt of a BNN.
I am not sure whether the folder https://github.com/zhaoweicai/hwgq/tree/master/examples/imagenet already contains prototxts suitable for FINN (and likewise for the caffemodels in https://github.com/zhaoweicai/hwgq#models).
Following the FINN guide, I tried to run:

```
python FINN/bin/finn --device=pynqz1 --prototxt=FINN/inputs/deploy_bw.prototxt --mode=estimate
```

where deploy_bw.prototxt is the prototxt from https://github.com/zhaoweicai/hwgq/tree/master/examples/imagenet/alex-hwgq-3ne-clip-poly-320k, but I get the exception:

```
...
File "/home/user/FINN/FINN/backend/fpga/backend_fpga.py", line 64, in passConvertToFPGALayers
    ret += [layers_fpga.FPGABipolarConvThresholdLayer(L)]
File "/home/user/FINN/FINN/backend/fpga/layers_fpga.py", line 337, in __init__
    raise Exception("Only binarized weights supported")
Exception: Only binarized weights supported
```

Could you clarify this for me?
Thanks,
Sara
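
For context, the traceback shows that FINN's FPGABipolarConvThresholdLayer rejects any layer whose weights are not strictly binarized (two values per layer or per filter). A quick way to see what a given caffemodel actually contains is to count the distinct weight values per layer; a minimal pycaffe sketch, with placeholder file names:

```python
# Sketch: count distinct weight values per layer to see whether a caffemodel
# is binarized (a bipolar layer should show exactly 2 distinct values,
# possibly per output filter). File names below are placeholders.
import caffe
import numpy as np

net = caffe.Net('deploy_bw.prototxt', 'alexnet_hwgq_bw.caffemodel', caffe.TEST)
for name, blobs in net.params.items():
    W = blobs[0].data  # blobs[0] holds the layer's weight tensor
    print(name, 'distinct weight values:', len(np.unique(W)))
```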

Share the VGG-small model on CIFAR-10

Hi,

Thanks for sharing your quantization flow. Is it possible to make the model and training script for the VGG-small model on CIFAR-10 available? Appreciate your help!

The weights of the trained model do not seem to be binarized

Hi,
I trained the VGG-Net model on CIFAR-10 and then printed the weights of the resulting caffemodel, but the weights do not look binarized.
I hope you can give me some tips on this.
Many thanks.

The weights of 'conv2_1' look like:

```
[[-2.10925471e-02 6.74787164e-03 1.83824124e-03]
[ 1.17361760e-02 4.95712832e-02 1.15355672e-02]
[ 1.40904170e-03 1.91040952e-02 2.68753860e-02]]

[[ 7.47812912e-04 9.25247837e-03 -1.92467440e-02]
[ 6.26096362e-03 6.52389135e-03 -2.91604549e-02]
[ 1.54679930e-02 1.69047303e-02 -1.29050175e-02]]

[[-1.59913283e-02 -4.31000069e-03 -9.07981582e-03]
[-2.14854758e-02 4.27068589e-04 -3.39677893e-02]
[ 1.60318725e-02 -2.15732064e-02 -2.49724630e-02]]

[[ 2.94211088e-03 3.10821529e-03 -1.46567877e-02]
[-2.81586102e-03 1.56722255e-02 -4.10768725e-02]
[ 1.45177245e-02 8.73563997e-03 -2.80519202e-02]]

[[-3.02609615e-02 -2.83134021e-02 -3.68605666e-02]
[-3.66647467e-02 -1.59114692e-02 -2.45084912e-02]
[-1.47473682e-02 -3.27019729e-02 -1.81703269e-02]]

[[-2.29146797e-02 -4.16266685e-03 -1.24716444e-03]
[ 2.29197666e-02 1.97226536e-02 -1.27944229e-02]
[ 9.49834287e-03 2.72172503e-02 -2.50766743e-02]]

[[ 1.04283448e-02 2.07192469e-02 -1.10709341e-02]
[-7.62524176e-03 -3.04542063e-03 -9.67555027e-03]
[-4.35589701e-02 -3.34056690e-02 -6.37046574e-03]]

[[-2.52210582e-03 3.50272022e-02 -1.26641279e-03]
[ 1.27567565e-02 4.36151065e-02 2.20290497e-02]
[ 2.17617508e-02 3.94680649e-02 -8.48370232e-03]]

[[-2.48368196e-02 2.40874123e-02 3.73369120e-02]
[ 2.25116219e-02 3.33309509e-02 5.79682887e-02]
[-1.95888728e-02 3.29406597e-02 3.80768776e-02]]

[[-1.85104611e-03 -1.77885883e-03 -1.04705431e-02]
[ 2.15757657e-02 5.22872880e-02 2.86879446e-02]
[ 4.90898751e-02 6.27720580e-02 4.92752111e-03]]

[[-2.06267685e-02 7.74261449e-03 -2.70165708e-02]
[ 2.36926116e-02 4.17519510e-02 -1.96355823e-02]
[ 4.65140045e-02 3.26207504e-02 -1.86813949e-03]]

[[-2.06958782e-02 -1.99438701e-03 -5.76990540e-04]
[ 1.37184141e-02 1.84343923e-02 3.58887762e-02]
[ 1.97797827e-02 1.47607196e-02 9.19601228e-03]]

[[-3.14316433e-03 1.37923444e-02 2.48924959e-02]
[ 4.98341862e-04 1.73550507e-03 2.57142968e-02]
[ 1.15122795e-02 -3.53835919e-03 2.31737401e-02]]
```

The deploy.prototxt I used is:
```
name: "CIFAR10"
input: "data"
input_shape {
dim: 1
dim: 3
dim: 32
dim: 32
}
layer {
name: "conv1_1"
type: "Convolution"
bottom: "data"
top: "conv1_1"
convolution_param {
num_output: 128
pad: 1
kernel_size: 3
stride: 1
bias_term: false
}
}
layer {
name: "bn1_2"
type: "BatchNorm"
bottom: "conv1_1"
top: "conv1_1"

batch_norm_param {
use_global_stats: true
}
}
layer {
name: "qt1_2"
type: "Quant"
bottom: "conv1_1"
top: "qt1_2"
quant_param {
forward_func: "hwgq"
backward_func: "relu"
centers: 0.538 centers: 1.076 centers: 1.614
clip_thr: 1.614
}
}
layer {
name: "conv1_2"
type: "Convolution"
bottom: "qt1_2"
top: "conv1_2"
convolution_param {
num_output: 128
pad: 1
kernel_size: 3
stride: 1
bias_term: false
}
}
layer {
name: "pool1"
type: "Pooling"
bottom: "conv1_2"
top: "pool1"
pooling_param {
pool: MAX
kernel_size: 2
stride: 2
}
}
layer {
name: "bn2_1"
type: "BatchNorm"
bottom: "pool1"
top: "pool1"
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "qt2_1"
type: "Quant"
bottom: "pool1"
top: "qt2_1"
quant_param {
forward_func: "hwgq"
backward_func: "relu"
centers: 0.538 centers: 1.076 centers: 1.614
clip_thr: 1.614
}
}
layer {
name: "conv2_1"
type: "Convolution"
bottom: "qt2_1"
top: "conv2_1"
convolution_param {
num_output: 256
pad: 1
kernel_size: 3
stride: 1
bias_term: false
}
}
layer {
name: "bn2_2"
type: "BatchNorm"
bottom: "conv2_1"
top: "conv2_1"
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "qt2_2"
type: "Quant"
bottom: "conv2_1"
top: "qt2_2"
quant_param {
forward_func: "hwgq"
backward_func: "relu"
centers: 0.538 centers: 1.076 centers: 1.614
clip_thr: 1.614
}
}
layer {
name: "conv2_2"
type: "Convolution"
bottom: "qt2_2"
top: "conv2_2"
convolution_param {
num_output: 256
pad: 1
kernel_size: 3
stride: 1
bias_term: false
}
}
layer {
name: "pool2"
type: "Pooling"
bottom: "conv2_2"
top: "pool2"
pooling_param {
pool: MAX
kernel_size: 2
stride: 2
}
}
layer {
name: "bn3_1"
type: "BatchNorm"
bottom: "pool2"
top: "pool2"
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "qt3_1"
type: "Quant"
bottom: "pool2"
top: "qt3_1"
quant_param {
forward_func: "hwgq"
backward_func: "relu"
centers: 0.538 centers: 1.076 centers: 1.614
clip_thr: 1.614
}
}
layer {
name: "conv3_1"
type: "Convolution"
bottom: "qt3_1"
top: "conv3_1"
convolution_param {
num_output: 512
pad: 1
kernel_size: 3
stride: 1
bias_term: false
}
}
layer {
name: "bn3_2"
type: "BatchNorm"
bottom: "conv3_1"
top: "conv3_1"
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "qt3_2"
type: "Quant"
bottom: "conv3_1"
top: "qt3_2"
quant_param {
forward_func: "hwgq"
backward_func: "relu"
centers: 0.538 centers: 1.076 centers: 1.614
clip_thr: 1.614
}
}
layer {
name: "conv3_2"
type: "Convolution"
bottom: "qt3_2"
top: "conv3_2"
convolution_param {
num_output: 512
pad: 1
kernel_size: 3
stride: 1
bias_term: false
}
}
layer {
name: "bn3"
type: "BatchNorm"
bottom: "conv3_2"
top: "conv3_2"
batch_norm_param {
use_global_stats: true
}
}

layer {
name: "relu3"
type: "ReLU"
bottom: "conv3_2"
top: "conv3_2"
}
layer {
name: "pool3"
type: "Pooling"
bottom: "conv3_2"
top: "pool3"
pooling_param {
pool: MAX
kernel_size: 2
stride: 2
}
}

layer {
name: "fc4"
type: "InnerProduct"
bottom: "pool3"
top: "fc4"
inner_product_param {
num_output: 10
}
}
layer {
name: "prob"
type: "Softmax"
bottom: "fc4"
top: "prob"
}
```
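
For reference, the Quant layers in this prototxt (forward_func: "hwgq", centers 0.538/1.076/1.614, clip_thr: 1.614) describe a 2-bit half-wave quantizer with a uniform step of 0.538. A minimal numpy reconstruction of that forward pass (my own sketch, not the layer's actual code):

```python
import numpy as np

def hwgq_forward(x, step=0.538, n_centers=3):
    """Sketch of the HWGQ forward function implied by the Quant layers above:
    negative inputs map to 0; positive inputs snap to the nearest of the
    uniform centers {step, 2*step, 3*step} and saturate at clip_thr = 3*step."""
    q = np.round(x / step)        # index of the nearest uniform level
    q = np.clip(q, 0, n_centers)  # half-wave: zero out negatives, clip the top
    return q * step

# Example: negatives become 0, large values saturate at clip_thr = 1.614.
print(hwgq_forward(np.array([-0.5, 0.2, 0.6, 1.2, 5.0])))
# -> [0.    0.    0.538 1.076 1.614]
```

The backward_func: "relu" setting suggests the gradient is passed through as if the layer were a ReLU, which matches how HWGQ handles the non-differentiable quantizer.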

HWGQ: binarized weights in AlexNet_HWGQ_BW are not low-precision (1-bit fixed-point) numbers

Hi,

I read your paper and there are a few things I don't understand.

1. How do I set the bit width of the weights w in the code, as well as the bit widths of the activations and gradients?

2. How do I convert a trained weight file like AlexNet_HWGQ to 1-bit or n-bit fixed-point numbers?

3. I extracted the weights from the AlexNet_HWGQ_BW file, and they do not seem to be the 1-bit fixed-point numbers mentioned in the paper. That is, w is not a low-precision weight but a binarized real value.

I am looking forward to your reply! Thank you.

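One possible reading (an assumption on my part, not confirmed by the authors here): a binary-weight layer stores the two real values +alpha/-alpha per filter, which is mathematically equivalent to 1-bit signs plus one floating-point scale per filter. A hedged sketch of that conversion, where pack_binary_weights and unpack_binary_weights are hypothetical helpers:

```python
import numpy as np

def pack_binary_weights(W):
    """Hypothetical export (not the repo's own tooling): a filter bank whose
    values are +alpha/-alpha per output filter can be stored as a 1-bit sign
    plane plus one float scale per filter. W has shape (out_ch, in_ch, kh, kw)."""
    alpha = np.abs(W).reshape(W.shape[0], -1).mean(axis=1)  # per-filter scale
    bits = W > 0                                            # 1-bit sign plane
    return bits, alpha

def unpack_binary_weights(bits, alpha):
    # Reconstruct the binarized weights W_b = alpha * sign(W).
    return np.where(bits, 1.0, -1.0) * alpha[:, None, None, None]
```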

How to save the binarized model?

As I understand it, the weights of the trained model are not binarized directly.
So, do I need to binarize the trained model with the 'scale' parameter and then re-save the model?
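
If the saved caffemodel does hold full-precision proxy weights, re-saving a binarized copy could look like the following pycaffe sketch (an illustration under that assumption, not the repo's own tooling; file names are placeholders, and the per-filter scale alpha = mean(|W|) follows XNOR-style binarization):

```python
import caffe
import numpy as np

# Placeholders: point these at your own deploy prototxt and trained weights.
net = caffe.Net('deploy.prototxt', 'trained.caffemodel', caffe.TEST)
for name, blobs in net.params.items():
    if not name.startswith('conv'):
        continue  # assumption: only the conv layers are binarized
    W = blobs[0].data
    alpha = np.abs(W).reshape(W.shape[0], -1).mean(axis=1)  # per-filter scale
    blobs[0].data[...] = np.sign(W) * alpha[:, None, None, None]
net.save('binarized.caffemodel')  # write the binarized copy back to disk
```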
