
quantized.pytorch's People

Contributors

eladhoffer


quantized.pytorch's Issues

Self.weight is not modified to qweight

Hi @eladhoffer @itayhubara,
I see that in the quantize.py file, self.weight is left unchanged and qweight is only used in the forward pass. This means the full-precision weights are used for the gradient update step (optimizer.step), which acts as error feedback and hence gives a smaller accuracy drop. When I add self.weight = qweight to QConv2d's forward function, I see an accuracy drop of 15-20% for ResNet20 on CIFAR10. Does that mean we need to keep a copy of the full-precision weights and do a full-precision weight update step, or am I missing something? Any help would be highly appreciated!
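
For reference, a minimal sketch of the pattern I am describing (hypothetical names, not this repository's exact code): the full-precision self.weight stays the optimizer's master copy, and only a quantized view of it is used in the forward pass, so optimizer.step updates the full-precision weights.

import torch
import torch.nn as nn
import torch.nn.functional as F

class FakeQuantSTE(torch.autograd.Function):
    # quantize in the forward pass, pass the gradient straight through
    @staticmethod
    def forward(ctx, w, num_bits=8):
        qmax = 2 ** num_bits - 1
        scale = (w.max() - w.min()).clamp(min=1e-8) / qmax
        zero = w.min()
        return ((w - zero) / scale).round().clamp(0, qmax) * scale + zero

    @staticmethod
    def backward(ctx, grad_output):
        # gradient flows back to the full-precision master weight
        return grad_output, None

class QConv2dSketch(nn.Conv2d):
    def forward(self, x):
        # self.weight is never overwritten; only the forward uses qweight,
        # so optimizer.step() updates the full-precision copy
        qweight = FakeQuantSTE.apply(self.weight, 8)
        return F.conv2d(x, qweight, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)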

Thanks,
Aparna

crash during training

TRAINING - Epoch: [0][410/446] Time 0.602 (0.622) Data 0.000 (0.005) Loss 4.0999 (5.5282) Prec@1 2.344 (3.435) Prec@5 19.531 (14.536)
TRAINING - Epoch: [0][420/446] Time 0.602 (0.622) Data 0.000 (0.005) Loss 4.1251 (5.4952) Prec@1 3.906 (3.459) Prec@5 20.312 (14.664)
TRAINING - Epoch: [0][430/446] Time 0.611 (0.621) Data 0.000 (0.005) Loss 4.0770 (5.4635) Prec@1 3.125 (3.478) Prec@5 24.219 (14.813)
TRAINING - Epoch: [0][440/446] Time 0.600 (0.621) Data 0.000 (0.005) Loss 4.0965 (5.4331) Prec@1 7.031 (3.515) Prec@5 19.531 (14.948)
Traceback (most recent call last):
File "main.py", line 305, in
main()
File "main.py", line 187, in main
train_loader, model, criterion, epoch, optimizer)
File "main.py", line 293, in train
training=True, optimizer=optimizer)
File "main.py", line 249, in forward
output = model(inputs)
File "/opt/conda/envs/pytorch-py3.6/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in call
result = self.forward(*input, **kwargs)
File "/workspace/pytorch-quantization/quantized.pytorch/models/resnet_quantized.py", line 148, in forward
x = self.layer3(x)
File "/opt/conda/envs/pytorch-py3.6/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in call
result = self.forward(*input, **kwargs)
File "/opt/conda/envs/pytorch-py3.6/lib/python3.6/site-packages/torch/nn/modules/container.py", line 91, in forward
input = module(input)
File "/opt/conda/envs/pytorch-py3.6/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in call
result = self.forward(*input, **kwargs)
File "/workspace/pytorch-quantization/quantized.pytorch/models/resnet_quantized.py", line 56, in forward
out = self.bn1(out)
File "/opt/conda/envs/pytorch-py3.6/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in call
result = self.forward(*input, **kwargs)
File "/workspace/pytorch-quantization/quantized.pytorch/models/modules/quantize.py", line 272, in forward
y = y.view(C, self.num_chunks, B * H * W // self.num_chunks)
RuntimeError: invalid argument 2: size '[256 x 16 x 134]' is invalid for input with 551936 elements at ../src/TH/THStorage.cpp:40
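
The numbers in the error suggest the per-chunk reshape only works when B * H * W is divisible by num_chunks: 551936 / 256 = 2156, which is not a multiple of 16, as can happen on a smaller final batch. A hypothetical reproduction and a common workaround (this is my reading of the failure, not a confirmed fix):

import torch

C, num_chunks = 256, 16
B, H, W = 11, 14, 14                                      # partial last batch
y = torch.randn(B, C, H, W).transpose(0, 1).contiguous()  # (C, B, H, W)

try:
    # B * H * W = 2156 is not divisible by num_chunks, so this raises the error above
    y.view(C, num_chunks, B * H * W // num_chunks)
except RuntimeError as e:
    print(e)

# one common workaround is to drop the incomplete final batch:
# torch.utils.data.DataLoader(dataset, batch_size=128, shuffle=True, drop_last=True)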

Straight through estimator

I noticed that you don't cancel the gradient of large values when using the straight-through estimator here.

In the QNN paper it was claimed that "Not cancelling the gradient when r is too large significantly worsens performance".

Does this only matter for low-precision quantization (e.g. binary)?
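
For comparison, a minimal sketch of the clipped straight-through estimator the QNN paper describes (gradient cancelled where |r| > 1); hypothetical code, not this repository's implementation:

import torch

class BinarizeClippedSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, r):
        ctx.save_for_backward(r)
        return r.sign()

    @staticmethod
    def backward(ctx, grad_output):
        r, = ctx.saved_tensors
        # cancel the gradient when r is too large: pass it only where |r| <= 1
        return grad_output * (r.abs() <= 1).float()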

Cheating when applying add operation in ResNet

Hi!
The '+' operation at the end of the residual block in the quantized ResNet implementation seems like a cheat to me. It should require a 16-bit accumulator to hold the sum, and its input tensors ought to be quantized. What we actually get is that the op's inputs are 32-bit (the residual input) and 16-bit (after the last qconv) and the result is 32-bit, so the accuracy doesn't fall at all.
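
A sketch of what I would expect instead, assuming a hypothetical quantize_activation helper (not this repository's API): both addends are quantized to the activation bit-width before the add, and the sum is re-quantized rather than kept in 32 bits.

def quantize_activation(x, num_bits=8):
    # hypothetical per-tensor uniform quantizer, for illustration only
    lo = x.min()
    scale = (x.max() - lo).clamp(min=1e-8) / (2 ** num_bits - 1)
    return ((x - lo) / scale).round() * scale + lo

def residual_add_quantized(residual, out, num_bits=8):
    # quantize both inputs so '+' sees matching low-precision operands,
    # then re-quantize the sum instead of returning a 32-bit result
    s = quantize_activation(residual, num_bits) + quantize_activation(out, num_bits)
    return quantize_activation(s, num_bits)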

input

If my input is a torch.autograd.Variable, how do I correct the code? I get the following error:
File "example/mpii.py", line 352, in
main(parser.parse_args())
File "example/mpii.py", line 107, in main
train_loss, train_acc = train(train_loader, model, criterion, optimizer, args.debug, args.flip)
File "example/mpii.py", line 153, in train
output = model(input_var)
File "/home/wangmeng/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 224, in call
result = self.forward(*input, **kwargs)
File "/home/wangmeng/anaconda2/lib/python2.7/site-packages/torch/nn/parallel/data_parallel.py", line 58, in forward
return self.module(*inputs[0], **kwargs[0])
File "/home/wangmeng/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 224, in call
result = self.forward(*input, **kwargs)
File "/home/wangmeng/pytorch-pose-quantized/pose/models/hourglass_quantized.py", line 172, in forward
x = self.conv1(x)
File "/home/wangmeng/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 224, in call
result = self.forward(*input, **kwargs)
File "/home/wangmeng/pytorch-pose-quantized/pose/models/modules/quantize.py", line 188, in forward
qinput = self.quantize_input(input)
File "/home/wangmeng/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 224, in call
result = self.forward(*input, **kwargs)
File "/home/wangmeng/pytorch-pose-quantized/pose/models/modules/quantize.py", line 165, in forward
min_value * (1 - self.momentum))
TypeError: add_ received an invalid combination of arguments - got (Variable), but expected one of:

  • (float value)
    didn't match because some of the arguments have invalid types: (Variable)
  • (torch.cuda.FloatTensor other)
    didn't match because some of the arguments have invalid types: (Variable)
  • (torch.cuda.sparse.FloatTensor other)
    didn't match because some of the arguments have invalid types: (Variable)
  • (float value, torch.cuda.FloatTensor other)
  • (float value, torch.cuda.sparse.FloatTensor other)
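
One workaround that seems right for this era of PyTorch (0.2/0.3) is to take the min/max from input.data, so the in-place buffer update receives a plain value instead of a Variable; a sketch of the relevant lines inside the module's forward (names assumed from the traceback, not copied from the repository):

min_value = input.data.min()   # plain value, not a Variable
max_value = input.data.max()
self.running_min.mul_(self.momentum).add_(min_value * (1 - self.momentum))
self.running_max.mul_(self.momentum).add_(max_value * (1 - self.momentum))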

Prediction with quantized model

Hi,

I am trying to run prediction but am hitting a roadblock with CUDA not supporting Byte tensors:

d, l = next(iter(train_loader))
d, l = d.type(torch.ByteTensor), l.type(torch.ByteTensor)
d, l = Variable(d.cuda()), Variable(l.cuda())
model_q(d)

Any thoughts how can I directly use a quantized model?
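
If the quantization here is simulated in floating point (which is my understanding), one way to run prediction would be to keep the inputs as ordinary float tensors and let the quantized layers discretize internally; a sketch, assuming a reasonably recent PyTorch for torch.no_grad:

import torch

model_q.eval()                        # use the stored range statistics
with torch.no_grad():
    d, l = next(iter(train_loader))
    d = d.float().cuda()              # plain float inputs; no ByteTensor cast
    output = model_q(d)
    pred = output.argmax(dim=1)       # predicted class per sample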

Bug in UniformQuantize class

Hi, thank you for posting your code!

I think there's a mismatch of the argument order between here and here.

def forward(cls, ctx, input, num_bits=8, min_value=None, max_value=None, stochastic=False, inplace=False, enforce_true_zero=False, num_chunks=None, out_half=False)
UniformQuantize().apply(  x, num_bits,   min_value,      max_value,      num_chunks,       stochastic,    inplace)

Among other potential issues, this causes the stochastic argument to take the value of num_chunks, sometimes making it truthy and leading to "stochastic" rounding.
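
For clarity, since Function.apply takes positional arguments only, one possible fix is to reorder the call site so it lines up with forward's signature (a sketch only; enforce_true_zero is left at its default):

# forward (after cls/ctx): input, num_bits, min_value, max_value,
#                          stochastic, inplace, enforce_true_zero, num_chunks, out_half
# the current call puts num_chunks into the stochastic slot; a reordered call would be:
UniformQuantize().apply(x, num_bits, min_value, max_value,
                        stochastic, inplace, False, num_chunks)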
