pytorchconverter's Issues

ViewBackward has no attribute 'old_size'

When the net output is a list, out.grad_fn is not available for the list itself when running FindMultiTops(out.grad_fn), so I loop over the list to reach each grad_fn:

for out in outputs:  # outputs is my net output, a nested list
    for o in out:
        FindMultiTops(o.grad_fn)

I then get this error in the following function:

def flatten(pytorch_layer):
    """ Only support flatten view """
    total = 1
    for dim in pytorch_layer.old_size:  # error is here: no attribute 'old_size'
        total *= dim

I am running PyTorch 0.4.0, so can I assume the error is caused by the PyTorch version?
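
A version-tolerant fallback may help here: the converter was written against older autograd internals, and the attribute holding the pre-view shape differs across PyTorch releases. A minimal sketch, assuming only that some such attribute exists (every name other than 'old_size' below is a guess, not a documented API):

    def view_old_size(pytorch_layer):
        # Try the attribute names used by different PyTorch releases for the
        # shape before the view; raise if none of them is present.
        for attr in ('old_size', 'old_sizes', 'sizes'):
            if hasattr(pytorch_layer, attr):
                return getattr(pytorch_layer, attr)
        raise AttributeError('ViewBackward exposes no size attribute in this PyTorch version')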

Slice

Thanks for your code. I think something is not quite right in the Slice operation. I have an input of shape 1x1x128x128, followed by a convolution with 96 output channels and kernel size 5, then a slice operation, and then the two sliced parts are added together. The converted slice looks like this:
Input data 0 1 data 0=1 1=128 2=128
Convolution ConvNd_1 1 1 data ConvNd_1 0=96 1=5 2=1 3=1 4=2 5=1 6=2400
Split splitncnn_1 1 2 ConvNd_1 splitncnn_Index_1 splitncnn_Index_2
Slice splitncnn_Index_2_slicer 1 1 splitncnn_Index_2 splitncnn_Index_2_Index_2 0=2 1=48 2=-233
Eltwise Cmax_1 2 1 splitncnn_Index_1_Index_1 splitncnn_Index_2_Index_2 Cmax_1 0=2
The splitncnn_Index_2_slicer layer has just one output, but it should have two, and the Cmax_1 layer should take the two outputs of the Slice, but it does not. Is there something wrong with the Slice layer?

ValueError: Unknown layer type: ThnnConv2D

Hi, I got this error when I tried to run run.py:

FaceBoxes
Saving default weight initialization...
Converting...
Traceback (most recent call last):
  File "run.py", line 101, in <module>
    text_net, binary_weights = ConvertModel_ncnn(pytorch_net, InputShape, softmax=False)
  File "/home/georgeokelly/convertor/PytorchConverter/code/ConvertModel.py", line 327, in ConvertModel_ncnn
    DFS(out.grad_fn)
  File "/home/georgeokelly/convertor/PytorchConverter/code/ConvertModel.py", line 85, in DFS
    child_name = DFS(u[0])
  File "/home/georgeokelly/convertor/PytorchConverter/code/ConvertModel.py", line 85, in DFS
    child_name = DFS(u[0])
  File "/home/georgeokelly/convertor/PytorchConverter/code/ConvertModel.py", line 85, in DFS
    child_name = DFS(u[0])
  [Previous line repeated 22 more times]
  File "/home/georgeokelly/convertor/PytorchConverter/code/ConvertModel.py", line 160, in DFS
    layer = convert('', layer_type_name, func)
  File "/home/georgeokelly/convertor/PytorchConverter/code/ConvertLayer_ncnn.py", line 469, in convert_ncnn
    typename, converter.keys()))
ValueError: Unknown layer type: ThnnConv2D, known types: dict_keys(['data', 'Addmm', 'Threshold', 'ConvNd', 'MaxPool2d', 'AvgPool2d', 'Add', 'Cmax', 'BatchNorm', 'Concat', 'Dropout', 'UpsamplingBilinear2d', 'MulConstant', 'AddConstant', 'Softmax', 'Sigmoid', 'Tanh', 'ELU', 'LeakyReLU', 'PReLU', 'Slice', 'MultiCopy', 'Negate', 'Permute', 'View'])
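
A common workaround for this class of error is to alias the new autograd node name to an existing converter entry. A hedged sketch for ConvertLayer_ncnn.py (the dict name matches the converter variable in the traceback, but this is not project-confirmed code, and the attribute layout of ThnnConv2D nodes may still differ from the old ConvNd nodes):

    # Newer PyTorch builds emit ThnnConv2DBackward instead of ConvNdBackward for Conv2d,
    # so register the new name next to the existing entries (sketch, not a verified fix):
    converter['ThnnConv2D'] = converter['ConvNd']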

print syntax error

run.py, line 95:

if model_path != '':
        try:
            pytorch_net.load_state_dict(torch.load(model_path, map_location=lambda storage, loc: storage))
        except AttributeError:
            pytorch_net = torch.load(model_path, map_location=lambda storage, loc: storage)
    else:
        NetName = str(pytorch_net.__class__.__name__)
        if not os.path.exists(ModelDir + NetName):
            os.makedirs(ModelDir + NetName)
        print 'Saving default weight initialization...'
        torch.save(pytorch_net.state_dict(), ModelDir + NetName + '/' + NetName + '.pth')

    """ Replace denormal weight values(<1e-30), otherwise may increase forward time cost """
    ReplaceDenormals(pytorch_net)

    """  Connnnnnnnvert!  """
    print('Converting...')

In Python 3, print 'str' is a syntax error; only print('str') is valid.

About upsample_bilinear

The "upsample_bilinear" in UNet is changed to be "Deconvolution" in ncnn. But Deconvolution is not the meaning of upsample_bilinear. How to cope this issue? Thanks!

ValueError

ValueError: Unknown layer type: Negate, known types: ['Slice', 'MaxPool2d', 'Add', 'PReLU', 'ELU', 'MulConstant', 'Dropout', 'Addmm', 'BatchNorm', 'Concat', 'UpsamplingBilinear2d', 'Tanh', 'Softmax', 'ConvNd', 'Cmax', 'data', 'View', 'AddConstant', 'Sigmoid', 'LeakyReLU', 'AvgPool2d', 'Threshold']

convert error pytorch0.2

torch.__version__
'0.2.0_4'
ValueError: Unknown layer type: Negate, known types: ['Slice', 'MaxPool2d', 'Add', 'PReLU', 'ELU', 'MulConstant', 'Dropout', 'Addmm', 'BatchNorm', 'Concat', 'UpsamplingBilinear2d', 'Tanh', 'Softmax', 'ConvNd', 'Cmax', 'data', 'View', 'AddConstant', 'Sigmoid', 'LeakyReLU', 'AvgPool2d', 'Threshold']

This happens when converting PyTorch FaceBoxes --> Caffe.

convert pytorch to ncnn error..

Hello, when I convert a PyTorch model to ncnn, I get this error:

File "/usr/pytorch2ncnn/PytorchConverter-master/code/ConvertModel.py", line 85, in DFS
child_name = DFS(u[0])
File "/usr/pytorch2ncnn/PytorchConverter-master/code/ConvertModel.py", line 85, in DFS
child_name = DFS(u[0])
File "/usr/pytorch2ncnn/PytorchConverter-master/code/ConvertModel.py", line 85, in DFS
child_name = DFS(u[0])
File "/usr/pytorch2ncnn/PytorchConverter-master/code/ConvertModel.py", line 85, in DFS
child_name = DFS(u[0])
File "/usr/pytorch2ncnn/PytorchConverter-master/code/ConvertModel.py", line 85, in DFS
child_name = DFS(u[0])
File "/usr/pytorch2ncnn/PytorchConverter-master/code/ConvertModel.py", line 85, in DFS
child_name = DFS(u[0])
File "/usr/pytorch2ncnn/PytorchConverter-master/code/ConvertModel.py", line 160, in DFS
layer = convert('', layer_type_name, func)
File "/usr/pytorch2ncnn/PytorchConverter-master/code/ConvertLayer_ncnn.py", line 470, in convert_ncnn
return convertertypename
File "/usr/pytorch2ncnn/PytorchConverter-master/code/ConvertLayer_ncnn.py", line 114, in flatten
assert ((pytorch_layer.new_sizes[1] == total) or (pytorch_layer.new_sizes[1] == -1))

How should I deal with it?
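
For reference, the failing assertion checks that the View being converted is a plain flatten; any other reshape trips it. A hedged illustration (the converter's exact check and how it computes the flattened total live in ConvertLayer_ncnn.py):

    import torch
    x = torch.randn(1, 32, 4, 4)
    ok = x.view(x.size(0), -1)   # flatten to (batch, -1): the kind of View the converter supports
    bad = x.view(1, 32, 16)      # a general reshape: new_sizes[1] is neither -1 nor the flattened total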

I don't understand how to use this.

Can a binary model be loaded directly with PyTorch and then converted to a Caffe model automatically, without writing the parameters by hand?

ValueError: Unknown layer type: Scatter

Thanks for your code!
But when converting a PyTorch model to Caffe, it fails with:
ValueError: Unknown layer type: Scatter
Is this a broadcast operation?
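
One guess, based only on the layer name: Scatter nodes typically appear when the model being converted is still wrapped in nn.DataParallel, which scatters inputs across devices inside the autograd graph. A hedged sketch, not a confirmed diagnosis:

    import torch
    # Convert the bare model so the scatter/gather wrapper never enters the graph.
    if isinstance(pytorch_net, torch.nn.DataParallel):
        pytorch_net = pytorch_net.module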

Dropout & DropoutV2 Layer

In my net written in PyTorch, I use a Dropout layer with probability 0.25, and PytorchConverter translates it into an ncnn dropout layer labeled "DropoutV2". When I test my net in ncnn, it dies at this "DropoutV2" layer.
To confirm the cause, I changed the Dropout probability to 0.5 in the PyTorch source, and PytorchConverter then translates it into an ncnn dropout layer labeled "Dropout". Testing the net in ncnn again, it runs fine.
I reported this problem to ncnn's author, and she advised me to open an issue on PytorchConverter.
I hope you can tell me how to deal with the problem.
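
One workaround, not confirmed against the converter or ncnn internals: dropout is an identity at inference time, so removing Dropout modules before conversion sidesteps whichever ncnn dropout layer would otherwise be emitted. A sketch:

    import torch.nn as nn

    def strip_dropout(module):
        # Replace Dropout children with an empty Sequential (a pass-through),
        # so no Dropout/DropoutV2 layer reaches the converted graph at all.
        for name, child in module.named_children():
            if isinstance(child, nn.Dropout):
                setattr(module, name, nn.Sequential())
            else:
                strip_dropout(child)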

Converted Caffe model causes exception when loading

Hi, @starimeL. I used this converter to convert my trained MTCNN model from PyTorch to Caffe.
I also converted it to ncnn to check whether the error comes from my network or the binary file, and it runs well in ncnn. But when I convert it to a Caffe *.prototxt and *.caffemodel, loading it throws an exception:

	pnet_ = new caffe::Net<float>(".\\models\\PNet.prototxt", caffe::TEST);
	pnet_->CopyTrainedLayersFrom(".\\models\\PNet.caffemodel");

The error log:

Cannot copy param 0 weights from layer 'PReLU_1'; shape mismatch.  Source param shape is 1 (1); target param shape is 10 (10). To learn this layer's parameters from scratch rather than copying from a saved net, rename the layer.
*** Check failure stack trace: ***

The network:

PNet (
    (pre_layer): Sequential (
      (0): Conv2d(3, 10, kernel_size=(3, 3), stride=(1, 1))
      (1): PReLU (1)
      (2): MaxPool2d (size=(2, 2), stride=(2, 2), dilation=(1, 1))
      (3): Conv2d(10, 16, kernel_size=(3, 3), stride=(1, 1))
      (4): PReLU (1)
      (5): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1))
      (6): PReLU (1)
    )
    (conv4_1): Conv2d(32, 1, kernel_size=(1, 1), stride=(1, 1))
    (conv4_2): Conv2d(32, 4, kernel_size=(1, 1), stride=(1, 1))
    (conv4_3): Conv2d(32, 10, kernel_size=(1, 1), stride=(1, 1))
  )
)

Thank you.
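
A hedged observation: in the network dump above every PReLU has a single shared weight (nn.PReLU() with num_parameters=1), while the generated prototxt apparently describes a per-channel PReLU, so Caffe expects 10 parameters for the first one. Two possible directions, neither confirmed against the converter's code: set channel_shared: true on the PReLU layers in the generated prototxt, or use per-channel PReLU on the PyTorch side, e.g.:

    import torch.nn as nn
    # Per-channel PReLU matching the 10 output channels of the first conv; an illustrative
    # alternative to nn.PReLU(), not a drop-in fix for weights that were already trained shared.
    prelu1 = nn.PReLU(num_parameters=10)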

How to transfer InstanceNorm2d to the ncnn MVN layer

It seems that this conversion tool only converts PyTorch's basic operators to the other framework. For example, InstanceNorm is composed of a view and a batchnorm, so the tool transfers the view and the batchnorm instead of treating InstanceNorm as a single layer and converting it to the MVN layer in ncnn. Are there any solutions to this problem? Thanks a lot.
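
For background, the decomposition the converter sees mirrors how InstanceNorm can be implemented: per-sample, per-channel normalization is equivalent to reshaping (N, C, H, W) to (1, N*C, H, W) and applying batch norm in training mode. A minimal sketch of that equivalence (not the converter's code):

    import torch
    import torch.nn.functional as F

    x = torch.randn(2, 8, 16, 16)
    n, c, h, w = x.shape
    y = F.batch_norm(x.view(1, n * c, h, w), None, None, training=True, eps=1e-5).view(n, c, h, w)
    print(torch.allclose(y, F.instance_norm(x, eps=1e-5), atol=1e-5))  # True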

KeyError: 'unexpected key "epoch" in state_dict'

Hi, thank you very much for your great work. When I tried to run the script to convert a model from PyTorch to Caffe, I got this error:

(pytorch_env) hossein@hossein:/media/hossein/tmpstore/PytorchConverter-master/code$ python run.py 

Traceback (most recent call last):
  File "run.py", line 92, in <module>
    pytorch_net.load_state_dict(torch.load(model_path, map_location=lambda storage, loc: storage))
  File "/media/hossein/tmpstore/PytorchConverter-master/pytorch_env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 355, in load_state_dict
    .format(name))
KeyError: 'unexpected key "epoch" in state_dict'

How do I fix it?
Thank you very much in advance
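
This usually means the .pth file is a full training checkpoint (carrying keys such as "epoch") rather than a bare state_dict. A hedged sketch of unwrapping it before load_state_dict; the inner key name 'state_dict' is a guess about how the checkpoint was saved:

    import torch
    ckpt = torch.load(model_path, map_location=lambda storage, loc: storage)
    state = ckpt['state_dict'] if isinstance(ckpt, dict) and 'state_dict' in ckpt else ckpt
    pytorch_net.load_state_dict(state)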

TypeError: expected bytes, str found

Hi, I would like to know which protobuf version you use. I am on Python 3.5 with protobuf 3.0.0 and hit an error like this:

text_net, binary_weights = ConvertModel_caffe(pytorch_net, InputShape, softmax=False)
  File "/home/ysdu/PytorchConverter/code/ConvertModel.py", line 318, in ConvertModel_caffe
    import caffe_pb2 as pb2
  File "/home/ysdu/PytorchConverter/code/caffe_pb2.py", line 17, in <module>
    serialized_pb='\n\x0b\x63\x61\x66\x66\x65.proto\x12\x05\x63\x61\x66\x66\x65"\x1c\n\tBlobShape
    .........
    .........
  File "/home/xxx/.local/lib/python3.5/site-packages/google/protobuf/descriptor.py", line 827, in __new__
    return _message.default_pool.AddSerializedFile(serialized_pb)
TypeError: expected bytes, str found

Thanks.
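
This error typically comes from a caffe_pb2.py generated by an old protoc for Python 2, where serialized_pb is a str instead of the bytes that the protobuf 3 runtime on Python 3 expects. A hedged suggestion is to regenerate the file with a protoc that matches your installed runtime (protoc --python_out=. caffe.proto) and to check which runtime Python actually imports:

    # Quick check of the protobuf runtime in use; regenerating caffe_pb2.py with a
    # matching protoc is the usual fix, offered here as a suggestion, not a guarantee.
    import google.protobuf
    print(google.protobuf.__version__)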

ValueError: Unknown layer type: NoneType, known types: dict_keys(['UpsamplingBilinear2d', 'ELU', 'Addmm', 'MaxPool2d', 'Softmax', 'AvgPool2d', 'Dropout', 'MultiCopy', 'MulConstant', 'Tanh', 'Negate', 'PReLU', 'AddConstant', 'Permute', 'BatchNorm', 'Sigmoid', 'Add', 'Slice', 'Concat', 'ConvNd', 'Cmax', 'View', 'data', 'Threshold', 'LeakyReLU'])

line 469, in convert_ncnn
typename, converter.keys()))
ValueError: Unknown layer type: NoneType, known types: dict_keys(['UpsamplingBilinear2d', 'ELU', 'Addmm', 'MaxPool2d', 'Softmax', 'AvgPool2d', 'Dropout', 'MultiCopy', 'MulConstant', 'Tanh', 'Negate', 'PReLU', 'AddConstant', 'Permute', 'BatchNorm', 'Sigmoid', 'Add', 'Slice', 'Concat', 'ConvNd', 'Cmax', 'View', 'data', 'Threshold', 'LeakyReLU'])
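
One possible cause, offered as a guess only: if the output tensor has no grad_fn, for example because the forward pass ran under torch.no_grad() or the tensor was detached, the converter's DFS receives None and reports its type name, NoneType. A tiny illustration:

    import torch
    import torch.nn as nn

    net = nn.Conv2d(3, 8, 3)
    x = torch.randn(1, 3, 224, 224)
    with torch.no_grad():
        y = net(x)
    print(y.grad_fn)  # None: no autograd graph is recorded, so a graph walk sees NoneType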

Cycle-GAN model convert to caffemodel

I want to convert my trained horse2zebra CycleGAN model to a caffemodel with your code, like this:

InputShape = [1, 3, 256, 256]
for i, data in enumerate(dataset):
    if i >= 1:
        break
    CycleGan_model.set_input(data)
    # model.real_A = Variable(model.input_A)
    # model.fake_B = model.netG(model.real_A)
    CycleGan_model.test()
m = CycleGan_model.netG
print('Converting...')
if dst == 'caffe':
    text_net, binary_weights = ConvertModel_caffe(m, InputShape, softmax=False)

There is an error:

  File "/home/mi/Project/PyTorch/PyTorch2Caffe_01_09/PytorchConverter/code/ConvertModel.py", line 321, in ConvertModel_caffe
    DFS(out.grad_fn)
  File "/home/mi/Project/PyTorch/PyTorch2Caffe_01_09/PytorchConverter/code/ConvertModel.py", line 83, in DFS

ValueError: Unknown layer type: ReflectionPad2d, known types: ['LeakyReLU', 'Index', 'BatchNorm', 'AddConstant', 'UpsamplingBilinear2d', 'Tanh', 'MulConstant', 'Dropout', 'PReLU', 'Add', 'MaxPool2d', 'AvgPool2d', 'Addmm', 'Softmax', 'Threshold', 'ConvNd', 'Cmax', 'data', 'Concat', 'ELU']

How can I modify the code to add support for ReflectionPad2d? Thank you.
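
Caffe's standard Convolution layer only supports zero padding, so ReflectionPad2d has no direct counterpart there. One workaround, clearly an approximation rather than a faithful conversion, is to swap the reflection padding for zero padding before exporting and accept small differences near the borders:

    import torch.nn as nn

    def replace_reflection_pad(module):
        # Export-time approximation only: substitutes ZeroPad2d with the same padding
        # amounts for every ReflectionPad2d child, recursively.
        for name, child in module.named_children():
            if isinstance(child, nn.ReflectionPad2d):
                setattr(module, name, nn.ZeroPad2d(child.padding))
            else:
                replace_reflection_pad(child)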

ValueError: Unknown layer type: Negate

Thanks for your code!
When I convert a PyTorch model to ncnn, I get two errors:
(1) ValueError: Unknown layer type: Negate
(2) ValueError: Unknown layer type: Permute
Could you help me to support Negate and Permute please?

Unknown layer type: Expand

Thanks for your nice conversion tools. However, I encountered such errors when I tried to convert my models:

Traceback (most recent call last):
  File "convert_my_net.py", line 51, in <module>
    text_net, binary_weights = ConvertModel_caffe(pytorch_net, InputShape, softmax=False)
  File "/data3/zjxu/CODE/pytorch-project/mobile_classification/PytorchConverter/code/ConvertModel.py", line 321, in ConvertModel_caffe
    DFS(out.grad_fn)
  File "/data3/zjxu/CODE/pytorch-project/mobile_classification/PytorchConverter/code/ConvertModel.py", line 83, in DFS
    child_name = DFS(u[0])
  File "/data3/zjxu/CODE/pytorch-project/mobile_classification/PytorchConverter/code/ConvertModel.py", line 83, in DFS
    child_name = DFS(u[0])
  File "/data3/zjxu/CODE/pytorch-project/mobile_classification/PytorchConverter/code/ConvertModel.py", line 161, in DFS
    layer = convert('', layer_type_name, func)
  File "/data3/zjxu/CODE/pytorch-project/mobile_classification/PytorchConverter/code/ConvertLayer_caffe.py", line 385, in convert_caffe
    typename, converter.keys()))
ValueError: Unknown layer type: Expand, known types: ['LeakyReLU', 'Index', 'BatchNorm', 'AddConstant', 'UpsamplingBilinear2d', 'Tanh', 'MulConstant', 'Dropout', 'PReLU', 'Add', 'MaxPool2d', 'AvgPool2d', 'Addmm', 'Softmax', 'View', 'Threshold', 'ConvNd', 'Cmax', 'data', 'Concat', 'ELU']

I'm a PyTorch newbie and don't know what 'Expand' stands for. Could you help me solve this problem? Thanks in advance.
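
For what it's worth, Expand nodes usually come from broadcasting: when a smaller tensor is combined with a larger one, PyTorch expands it to the larger shape and may record that step in the autograd graph. A small illustration (whether the converter can treat the node as a no-op depends on the surrounding graph, so take this as background, not a fix):

    import torch

    x = torch.randn(1, 64, 32, 32)
    scale = torch.randn(1, 64, 1, 1, requires_grad=True)
    e = scale.expand_as(x)        # explicit expand; implicit broadcasting can record the same step
    print(e.grad_fn)              # ExpandBackward -> the 'Expand' the converter complains about
    y = x * e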

Shared weights transfer

Hi @starimeL, thanks for the excellent work! I have a question about shared weights in PyTorch. For example, RetinaNet has shared classification and regression weights that process the 5 output branches of the FPN. The shared weights are replicated 5 times when saved as a caffemodel, which makes the model file much bigger. Is there a way to prevent the replication? I also wonder whether the shared weights are copied during the forward pass in PyTorch, even though they are saved only once in the model file.
Looking forward to your reply!

multi data input ?

Hi, thanks for your code!
I have a PyTorch model that needs two inputs. How can I convert my model to ncnn successfully? Do you have time to address the TODO (TODO: multi data input) in ConvertModel.py? Thanks very much!

loss in the conversion

This converter is perhaps the most helpful pytorch2caffe model converter, but has anyone measured the conversion loss, i.e. compared the model's outputs before and after converting? In my experience a few months ago, they were not exactly the same.

caffe model result is not equal to pytorch model result

For the same picture, using an 8-class ResNet18 model, the Caffe model outputs [-0.54072326 3.41359568 8.56874561 -0.82260311 -2.66256571 -2.29306793 -2.2721777 -3.39299512], while the PyTorch model outputs [-2.31570864 3.20797634 9.79750538 -0.47102225 -2.78121352 -1.77155793 -1.88433588 -3.78068781].
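
A hedged way to quantify the gap, using the two vectors above (rounded). Large per-logit differences that still agree on the argmax often point at preprocessing mismatches (channel order, mean/scale) or BatchNorm layers left in training mode, though the actual cause here is not confirmed:

    import numpy as np

    caffe_out = np.array([-0.5407, 3.4136, 8.5687, -0.8226, -2.6626, -2.2931, -2.2722, -3.3930])
    torch_out = np.array([-2.3157, 3.2080, 9.7975, -0.4710, -2.7812, -1.7716, -1.8843, -3.7807])
    print(np.abs(caffe_out - torch_out).max())      # worst-case logit gap
    print(caffe_out.argmax(), torch_out.argmax())   # both still predict class 2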

Failed when forwarding the model of resnet50 in NCNN

Hi All,
I'm now confused with a problem of forwarding the model in NCNN.
The model is a resnet50 trained under PyTorch 1.0. I deleted every key containing "num_batches_tracked" from the state_dict returned by torch.load(model_path).
The conversion succeeds, but forwarding fails when I run:

ncnn::Mat feature;
ex.extract("Addmm_1", feature);

in my C++ codes.
ex is an ncnn::Extractor, which can successfully extract every layer's output except the last layer, Addmm_1.
Addmm_1 is converted from:

self.fc = nn.Linear(512 * block.expansion, feature_dimension)

in my Python codes.

Has anyone run into the same problem?
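
For reference, a minimal sketch of the key-filtering step described above (dropping num_batches_tracked before load_state_dict). It does not by itself explain the Addmm_1 extraction failure, which may instead depend on the blob name actually written into the .param file:

    import torch

    state = torch.load(model_path, map_location=lambda storage, loc: storage)
    state = {k: v for k, v in state.items() if not k.endswith('num_batches_tracked')}
    pytorch_net.load_state_dict(state)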

No Convolution0

Unknown layer type: Convolution0, known types: dict_keys(['data', 'Addmm', 'Threshold', 'ConvNd', 'MaxPool2d', 'AvgPool2d', 'Add', 'Cmax', 'BatchNorm', 'Concat', 'Dropout', 'UpsamplingBilinear2d', 'MulConstant', 'AddConstant', 'Softmax', 'Sigmoid', 'Tanh', 'ELU', 'LeakyReLU', 'PReLU', 'Slice', 'MultiCopy', 'Negate', 'Permute', 'View'])

Is there any op named "Concat" in pytorch?

I found an op named "Concat" in this project, but PyTorch does not seem to have a torch.concat layer.

By the way, I'm trying to convert a PyTorch model to ncnn, and there is a torch.cat op in my model that is not supported. Can I map it to Concat?
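
As far as I can tell, the converter's 'Concat' entry is exactly what torch.cat is meant to map to: the autograd graph records concatenation as a Cat/Concat node, with the exact name varying across PyTorch versions, so a newer version may need an alias added to the converter dict. A tiny illustration:

    import torch

    a = torch.randn(1, 16, 32, 32, requires_grad=True)
    b = torch.randn(1, 16, 32, 32, requires_grad=True)
    out = torch.cat([a, b], dim=1)
    print(out.grad_fn)   # CatBackward on recent PyTorch; older releases used a Concat* name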

SSD detection model

Hello. Thank you for providing the ncnn converter tools.

Could you provide the code and/or the base pytorch model you used to parse SSD detection? SSD support is listed in the readme and in one of the past commit messages but there is no ModelFiles for it.

If you're too busy, please just provide the model you used as a base so I can adapt it and make a PR afterwards.

Thanks

ValueError: Unknown layer type: Copys

Hello,
When I try to convert CFEnet from PyTorch to Caffe, it fails with "ValueError: Unknown layer type: Copys".
I want to know how to deal with it.
Thank you~

How to convert pytorch ssd model to ncnn model?

I have trained an SSD detector with the PyTorch framework, and now I want to convert the PyTorch model to an ncnn model. Can you show more details? For example, do I need to adjust the parameters in MobileNet.py?
Thanks a lot~

convert failed with pytorch 0.2.0

There are some problems when I convert a PyTorch MobileFaceNet to an ncnn model; the PyTorch version is 0.2.0. Can anybody give me some suggestions?

Traceback (most recent call last):
  File "/media/roy/data/MyDocuments/src/tools/PytorchConverter/code/run.py", line 93, in <module>
    pytorch_net.load_state_dict(torch.load(model_path, map_location=lambda storage, loc: storage))
  File "/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.py", line 769, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for MobileFaceNet:
Missing key(s) in state_dict: "conv1.conv.weight", "conv1.bn.running_var", "conv1.bn.bias", "conv1.bn.weight", "conv1.bn.running_mean", "conv1.prelu.weight", "conv2_dw.conv.weight", "conv2_dw.bn.running_var", "conv2_dw.bn.bias", "conv2_dw.bn.weight", "conv2_dw.bn.running_mean", "conv2_dw.prelu.weight", "conv_23.conv.conv.weight", "conv_23.conv.bn.running_var", "conv_23.conv.bn.bias", "conv_23.conv.bn.weight", "conv_23.conv.bn.running_mean", "conv_23.conv.prelu.weight", "conv_23.conv_dw.conv.weight", "conv_23.conv_dw.bn.running_var", "conv_23.conv_dw.bn.bias", "conv_23.conv_dw.bn.weight", "conv_23.conv_dw.bn.running_mean", "conv_23.conv_dw.prelu.weight", "conv_23.project.conv.weight", "conv_23.project.bn.running_var", "conv_23.project.bn.bias", "conv_23.project.bn.weight", "conv_23.project.bn.running_mean", "conv_3.model.0.conv.conv.weight", "conv_3.model.0.conv.bn.running_var", "conv_3.model.0.conv.bn.bias", "conv_3.model.0.conv.bn.weight", "conv_3.model.0.conv.bn.running_mean", "conv_3.model.0.conv.prelu.weight", "conv_3.model.0.conv_dw.conv.weight", "conv_3.model.0.conv_dw.bn.running_var", "conv_3.model.0.conv_dw.bn.bias", "conv_3.model.0.conv_dw.bn.weight", "conv_3.model.0.conv_dw.bn.running_mean", "conv_3.model.0.conv_dw.prelu.weight", "conv_3.model.0.project.conv.weight", "conv_3.model.0.project.bn.running_var", "conv_3.model.0.project.bn.bias", "conv_3.model.0.project.bn.weight", "conv_3.model.0.project.bn.running_mean", "conv_3.model.1.conv.conv.weight", "conv_3.model.1.conv.bn.running_var", "conv_3.model.1.conv.bn.bias", "conv_3.model.1.conv.bn.weight", "conv_3.model.1.conv.bn.running_mean", "conv_3.model.1.conv.prelu.weight", "conv_3.model.1.conv_dw.conv.weight", "conv_3.model.1.conv_dw.bn.running_var", "conv_3.model.1.conv_dw.bn.bias", "conv_3.model.1.conv_dw.bn.weight", "conv_3.model.1.conv_dw.bn.running_mean", "conv_3.model.1.conv_dw.prelu.weight", "conv_3.model.1.project.conv.weight", "conv_3.model.1.project.bn.running_var", "conv_3.model.1.project.bn.bias", "conv_3.model.1.project.bn.weight", "conv_3.model.1.project.bn.running_mean", "conv_3.model.2.conv.conv.weight", "conv_3.model.2.conv.bn.running_var", "conv_3.model.2.conv.bn.bias", "conv_3.model.2.conv.bn.weight", "conv_3.model.2.conv.bn.running_mean", "conv_3.model.2.conv.prelu.weight", "conv_3.model.2.conv_dw.conv.weight", "conv_3.model.2.conv_dw.bn.running_var", "conv_3.model.2.conv_dw.bn.bias", "conv_3.model.2.conv_dw.bn.weight", "conv_3.model.2.conv_dw.bn.running_mean", "conv_3.model.2.conv_dw.prelu.weight", "conv_3.model.2.project.conv.weight", "conv_3.model.2.project.bn.running_var", "conv_3.model.2.project.bn.bias", "conv_3.model.2.project.bn.weight", "conv_3.model.2.project.bn.running_mean", "conv_3.model.3.conv.conv.weight", "conv_3.model.3.conv.bn.running_var", "conv_3.model.3.conv.bn.bias", "conv_3.model.3.conv.bn.weight", "conv_3.model.3.conv.bn.running_mean", "conv_3.model.3.conv.prelu.weight", "conv_3.model.3.conv_dw.conv.weight", "conv_3.model.3.conv_dw.bn.running_var", "conv_3.model.3.conv_dw.bn.bias", "conv_3.model.3.conv_dw.bn.weight", "conv_3.model.3.conv_dw.bn.running_mean", "conv_3.model.3.conv_dw.prelu.weight", "conv_3.model.3.project.conv.weight", "conv_3.model.3.project.bn.running_var", "conv_3.model.3.project.bn.bias", "conv_3.model.3.project.bn.weight", "conv_3.model.3.project.bn.running_mean", "conv_34.conv.conv.weight", "conv_34.conv.bn.running_var", "conv_34.conv.bn.bias", "conv_34.conv.bn.weight", "conv_34.conv.bn.running_mean", "conv_34.conv.prelu.weight", "conv_34.conv_dw.conv.weight", 
"conv_34.conv_dw.bn.running_var", "conv_34.conv_dw.bn.bias", "conv_34.conv_dw.bn.weight", "conv_34.conv_dw.bn.running_mean", "conv_34.conv_dw.prelu.weight", "conv_34.project.conv.weight", "conv_34.project.bn.running_var", "conv_34.project.bn.bias", "conv_34.project.bn.weight", "conv_34.project.bn.running_mean", "conv_4.model.0.conv.conv.weight", "conv_4.model.0.conv.bn.running_var", "conv_4.model.0.conv.bn.bias", "conv_4.model.0.conv.bn.weight", "conv_4.model.0.conv.bn.running_mean", "conv_4.model.0.conv.prelu.weight", "conv_4.model.0.conv_dw.conv.weight", "conv_4.model.0.conv_dw.bn.running_var", "conv_4.model.0.conv_dw.bn.bias", "conv_4.model.0.conv_dw.bn.weight", "conv_4.model.0.conv_dw.bn.running_mean", "conv_4.model.0.conv_dw.prelu.weight", "conv_4.model.0.project.conv.weight", "conv_4.model.0.project.bn.running_var", "conv_4.model.0.project.bn.bias", "conv_4.model.0.project.bn.weight", "conv_4.model.0.project.bn.running_mean", "conv_4.model.1.conv.conv.weight", "conv_4.model.1.conv.bn.running_var", "conv_4.model.1.conv.bn.bias", "conv_4.model.1.conv.bn.weight", "conv_4.model.1.conv.bn.running_mean", "conv_4.model.1.conv.prelu.weight", "conv_4.model.1.conv_dw.conv.weight", "conv_4.model.1.conv_dw.bn.running_var", "conv_4.model.1.conv_dw.bn.bias", "conv_4.model.1.conv_dw.bn.weight", "conv_4.model.1.conv_dw.bn.running_mean", "conv_4.model.1.conv_dw.prelu.weight", "conv_4.model.1.project.conv.weight", "conv_4.model.1.project.bn.running_var", "conv_4.model.1.project.bn.bias", "conv_4.model.1.project.bn.weight", "conv_4.model.1.project.bn.running_mean", "conv_4.model.2.conv.conv.weight", "conv_4.model.2.conv.bn.running_var", "conv_4.model.2.conv.bn.bias", "conv_4.model.2.conv.bn.weight", "conv_4.model.2.conv.bn.running_mean", "conv_4.model.2.conv.prelu.weight", "conv_4.model.2.conv_dw.conv.weight", "conv_4.model.2.conv_dw.bn.running_var", "conv_4.model.2.conv_dw.bn.bias", "conv_4.model.2.conv_dw.bn.weight", "conv_4.model.2.conv_dw.bn.running_mean", "conv_4.model.2.conv_dw.prelu.weight", "conv_4.model.2.project.conv.weight", "conv_4.model.2.project.bn.running_var", "conv_4.model.2.project.bn.bias", "conv_4.model.2.project.bn.weight", "conv_4.model.2.project.bn.running_mean", "conv_4.model.3.conv.conv.weight", "conv_4.model.3.conv.bn.running_var", "conv_4.model.3.conv.bn.bias", "conv_4.model.3.conv.bn.weight", "conv_4.model.3.conv.bn.running_mean", "conv_4.model.3.conv.prelu.weight", "conv_4.model.3.conv_dw.conv.weight", "conv_4.model.3.conv_dw.bn.running_var", "conv_4.model.3.conv_dw.bn.bias", "conv_4.model.3.conv_dw.bn.weight", "conv_4.model.3.conv_dw.bn.running_mean", "conv_4.model.3.conv_dw.prelu.weight", "conv_4.model.3.project.conv.weight", "conv_4.model.3.project.bn.running_var", "conv_4.model.3.project.bn.bias", "conv_4.model.3.project.bn.weight", "conv_4.model.3.project.bn.running_mean", "conv_4.model.4.conv.conv.weight", "conv_4.model.4.conv.bn.running_var", "conv_4.model.4.conv.bn.bias", "conv_4.model.4.conv.bn.weight", "conv_4.model.4.conv.bn.running_mean", "conv_4.model.4.conv.prelu.weight", "conv_4.model.4.conv_dw.conv.weight", "conv_4.model.4.conv_dw.bn.running_var", "conv_4.model.4.conv_dw.bn.bias", "conv_4.model.4.conv_dw.bn.weight", "conv_4.model.4.conv_dw.bn.running_mean", "conv_4.model.4.conv_dw.prelu.weight", "conv_4.model.4.project.conv.weight", "conv_4.model.4.project.bn.running_var", "conv_4.model.4.project.bn.bias", "conv_4.model.4.project.bn.weight", "conv_4.model.4.project.bn.running_mean", "conv_4.model.5.conv.conv.weight", "conv_4.model.5.conv.bn.running_var", 
"conv_4.model.5.conv.bn.bias", "conv_4.model.5.conv.bn.weight", "conv_4.model.5.conv.bn.running_mean", "conv_4.model.5.conv.prelu.weight", "conv_4.model.5.conv_dw.conv.weight", "conv_4.model.5.conv_dw.bn.running_var", "conv_4.model.5.conv_dw.bn.bias", "conv_4.model.5.conv_dw.bn.weight", "conv_4.model.5.conv_dw.bn.running_mean", "conv_4.model.5.conv_dw.prelu.weight", "conv_4.model.5.project.conv.weight", "conv_4.model.5.project.bn.running_var", "conv_4.model.5.project.bn.bias", "conv_4.model.5.project.bn.weight", "conv_4.model.5.project.bn.running_mean", "conv_45.conv.conv.weight", "conv_45.conv.bn.running_var", "conv_45.conv.bn.bias", "conv_45.conv.bn.weight", "conv_45.conv.bn.running_mean", "conv_45.conv.prelu.weight", "conv_45.conv_dw.conv.weight", "conv_45.conv_dw.bn.running_var", "conv_45.conv_dw.bn.bias", "conv_45.conv_dw.bn.weight", "conv_45.conv_dw.bn.running_mean", "conv_45.conv_dw.prelu.weight", "conv_45.project.conv.weight", "conv_45.project.bn.running_var", "conv_45.project.bn.bias", "conv_45.project.bn.weight", "conv_45.project.bn.running_mean", "conv_5.model.0.conv.conv.weight", "conv_5.model.0.conv.bn.running_var", "conv_5.model.0.conv.bn.bias", "conv_5.model.0.conv.bn.weight", "conv_5.model.0.conv.bn.running_mean", "conv_5.model.0.conv.prelu.weight", "conv_5.model.0.conv_dw.conv.weight", "conv_5.model.0.conv_dw.bn.running_var", "conv_5.model.0.conv_dw.bn.bias", "conv_5.model.0.conv_dw.bn.weight", "conv_5.model.0.conv_dw.bn.running_mean", "conv_5.model.0.conv_dw.prelu.weight", "conv_5.model.0.project.conv.weight", "conv_5.model.0.project.bn.running_var", "conv_5.model.0.project.bn.bias", "conv_5.model.0.project.bn.weight", "conv_5.model.0.project.bn.running_mean", "conv_5.model.1.conv.conv.weight", "conv_5.model.1.conv.bn.running_var", "conv_5.model.1.conv.bn.bias", "conv_5.model.1.conv.bn.weight", "conv_5.model.1.conv.bn.running_mean", "conv_5.model.1.conv.prelu.weight", "conv_5.model.1.conv_dw.conv.weight", "conv_5.model.1.conv_dw.bn.running_var", "conv_5.model.1.conv_dw.bn.bias", "conv_5.model.1.conv_dw.bn.weight", "conv_5.model.1.conv_dw.bn.running_mean", "conv_5.model.1.conv_dw.prelu.weight", "conv_5.model.1.project.conv.weight", "conv_5.model.1.project.bn.running_var", "conv_5.model.1.project.bn.bias", "conv_5.model.1.project.bn.weight", "conv_5.model.1.project.bn.running_mean", "conv_6_sep.conv.weight", "conv_6_sep.bn.running_var", "conv_6_sep.bn.bias", "conv_6_sep.bn.weight", "conv_6_sep.bn.running_mean", "conv_6_sep.prelu.weight", "conv_6_dw.conv.weight", "conv_6_dw.bn.running_var", "conv_6_dw.bn.bias", "conv_6_dw.bn.weight", "conv_6_dw.bn.running_mean", "linear.weight", "bn.running_var", "bn.bias", "bn.weight", "bn.running_mean".
Unexpected key(s) in state_dict: "module.conv1.conv.weight", "module.conv1.bn.weight", "module.conv1.bn.bias", "module.conv1.bn.running_mean", "module.conv1.bn.running_var", "module.conv1.bn.num_batches_tracked", "module.conv1.prelu.weight", "module.conv2_dw.conv.weight", "module.conv2_dw.bn.weight", "module.conv2_dw.bn.bias", "module.conv2_dw.bn.running_mean", "module.conv2_dw.bn.running_var", "module.conv2_dw.bn.num_batches_tracked", "module.conv2_dw.prelu.weight", "module.conv_23.conv.conv.weight", "module.conv_23.conv.bn.weight", "module.conv_23.conv.bn.bias", "module.conv_23.conv.bn.running_mean", "module.conv_23.conv.bn.running_var", "module.conv_23.conv.bn.num_batches_tracked", "module.conv_23.conv.prelu.weight", "module.conv_23.conv_dw.conv.weight", "module.conv_23.conv_dw.bn.weight", "module.conv_23.conv_dw.bn.bias", "module.conv_23.conv_dw.bn.running_mean", "module.conv_23.conv_dw.bn.running_var", "module.conv_23.conv_dw.bn.num_batches_tracked", "module.conv_23.conv_dw.prelu.weight", "module.conv_23.project.conv.weight", "module.conv_23.project.bn.weight", "module.conv_23.project.bn.bias", "module.conv_23.project.bn.running_mean", "module.conv_23.project.bn.running_var", "module.conv_23.project.bn.num_batches_tracked", "module.conv_3.model.0.conv.conv.weight", "module.conv_3.model.0.conv.bn.weight", "module.conv_3.model.0.conv.bn.bias", "module.conv_3.model.0.conv.bn.running_mean", "module.conv_3.model.0.conv.bn.running_var", "module.conv_3.model.0.conv.bn.num_batches_tracked", "module.conv_3.model.0.conv.prelu.weight", "module.conv_3.model.0.conv_dw.conv.weight", "module.conv_3.model.0.conv_dw.bn.weight", "module.conv_3.model.0.conv_dw.bn.bias", "module.conv_3.model.0.conv_dw.bn.running_mean", "module.conv_3.model.0.conv_dw.bn.running_var", "module.conv_3.model.0.conv_dw.bn.num_batches_tracked", "module.conv_3.model.0.conv_dw.prelu.weight", "module.conv_3.model.0.project.conv.weight", "module.conv_3.model.0.project.bn.weight", "module.conv_3.model.0.project.bn.bias", "module.conv_3.model.0.project.bn.running_mean", "module.conv_3.model.0.project.bn.running_var", "module.conv_3.model.0.project.bn.num_batches_tracked", "module.conv_3.model.1.conv.conv.weight", "module.conv_3.model.1.conv.bn.weight", "module.conv_3.model.1.conv.bn.bias", "module.conv_3.model.1.conv.bn.running_mean", "module.conv_3.model.1.conv.bn.running_var", "module.conv_3.model.1.conv.bn.num_batches_tracked", "module.conv_3.model.1.conv.prelu.weight", "module.conv_3.model.1.conv_dw.conv.weight", "module.conv_3.model.1.conv_dw.bn.weight", "module.conv_3.model.1.conv_dw.bn.bias", "module.conv_3.model.1.conv_dw.bn.running_mean", "module.conv_3.model.1.conv_dw.bn.running_var", "module.conv_3.model.1.conv_dw.bn.num_batches_tracked", "module.conv_3.model.1.conv_dw.prelu.weight", "module.conv_3.model.1.project.conv.weight", "module.conv_3.model.1.project.bn.weight", "module.conv_3.model.1.project.bn.bias", "module.conv_3.model.1.project.bn.running_mean", "module.conv_3.model.1.project.bn.running_var", "module.conv_3.model.1.project.bn.num_batches_tracked", "module.conv_3.model.2.conv.conv.weight", "module.conv_3.model.2.conv.bn.weight", "module.conv_3.model.2.conv.bn.bias", "module.conv_3.model.2.conv.bn.running_mean", "module.conv_3.model.2.conv.bn.running_var", "module.conv_3.model.2.conv.bn.num_batches_tracked", "module.conv_3.model.2.conv.prelu.weight", "module.conv_3.model.2.conv_dw.conv.weight", "module.conv_3.model.2.conv_dw.bn.weight", "module.conv_3.model.2.conv_dw.bn.bias", 
"module.conv_3.model.2.conv_dw.bn.running_mean", "module.conv_3.model.2.conv_dw.bn.running_var", "module.conv_3.model.2.conv_dw.bn.num_batches_tracked", "module.conv_3.model.2.conv_dw.prelu.weight", "module.conv_3.model.2.project.conv.weight", "module.conv_3.model.2.project.bn.weight", "module.conv_3.model.2.project.bn.bias", "module.conv_3.model.2.project.bn.running_mean", "module.conv_3.model.2.project.bn.running_var", "module.conv_3.model.2.project.bn.num_batches_tracked", "module.conv_3.model.3.conv.conv.weight", "module.conv_3.model.3.conv.bn.weight", "module.conv_3.model.3.conv.bn.bias", "module.conv_3.model.3.conv.bn.running_mean", "module.conv_3.model.3.conv.bn.running_var", "module.conv_3.model.3.conv.bn.num_batches_tracked", "module.conv_3.model.3.conv.prelu.weight", "module.conv_3.model.3.conv_dw.conv.weight", "module.conv_3.model.3.conv_dw.bn.weight", "module.conv_3.model.3.conv_dw.bn.bias", "module.conv_3.model.3.conv_dw.bn.running_mean", "module.conv_3.model.3.conv_dw.bn.running_var", "module.conv_3.model.3.conv_dw.bn.num_batches_tracked", "module.conv_3.model.3.conv_dw.prelu.weight", "module.conv_3.model.3.project.conv.weight", "module.conv_3.model.3.project.bn.weight", "module.conv_3.model.3.project.bn.bias", "module.conv_3.model.3.project.bn.running_mean", "module.conv_3.model.3.project.bn.running_var", "module.conv_3.model.3.project.bn.num_batches_tracked", "module.conv_34.conv.conv.weight", "module.conv_34.conv.bn.weight", "module.conv_34.conv.bn.bias", "module.conv_34.conv.bn.running_mean", "module.conv_34.conv.bn.running_var", "module.conv_34.conv.bn.num_batches_tracked", "module.conv_34.conv.prelu.weight", "module.conv_34.conv_dw.conv.weight", "module.conv_34.conv_dw.bn.weight", "module.conv_34.conv_dw.bn.bias", "module.conv_34.conv_dw.bn.running_mean", "module.conv_34.conv_dw.bn.running_var", "module.conv_34.conv_dw.bn.num_batches_tracked", "module.conv_34.conv_dw.prelu.weight", "module.conv_34.project.conv.weight", "module.conv_34.project.bn.weight", "module.conv_34.project.bn.bias", "module.conv_34.project.bn.running_mean", "module.conv_34.project.bn.running_var", "module.conv_34.project.bn.num_batches_tracked", "module.conv_4.model.0.conv.conv.weight", "module.conv_4.model.0.conv.bn.weight", "module.conv_4.model.0.conv.bn.bias", "module.conv_4.model.0.conv.bn.running_mean", "module.conv_4.model.0.conv.bn.running_var", "module.conv_4.model.0.conv.bn.num_batches_tracked", "module.conv_4.model.0.conv.prelu.weight", "module.conv_4.model.0.conv_dw.conv.weight", "module.conv_4.model.0.conv_dw.bn.weight", "module.conv_4.model.0.conv_dw.bn.bias", "module.conv_4.model.0.conv_dw.bn.running_mean", "module.conv_4.model.0.conv_dw.bn.running_var", "module.conv_4.model.0.conv_dw.bn.num_batches_tracked", "module.conv_4.model.0.conv_dw.prelu.weight", "module.conv_4.model.0.project.conv.weight", "module.conv_4.model.0.project.bn.weight", "module.conv_4.model.0.project.bn.bias", "module.conv_4.model.0.project.bn.running_mean", "module.conv_4.model.0.project.bn.running_var", "module.conv_4.model.0.project.bn.num_batches_tracked", "module.conv_4.model.1.conv.conv.weight", "module.conv_4.model.1.conv.bn.weight", "module.conv_4.model.1.conv.bn.bias", "module.conv_4.model.1.conv.bn.running_mean", "module.conv_4.model.1.conv.bn.running_var", "module.conv_4.model.1.conv.bn.num_batches_tracked", "module.conv_4.model.1.conv.prelu.weight", "module.conv_4.model.1.conv_dw.conv.weight", "module.conv_4.model.1.conv_dw.bn.weight", "module.conv_4.model.1.conv_dw.bn.bias", 
"module.conv_4.model.1.conv_dw.bn.running_mean", "module.conv_4.model.1.conv_dw.bn.running_var", "module.conv_4.model.1.conv_dw.bn.num_batches_tracked", "module.conv_4.model.1.conv_dw.prelu.weight", "module.conv_4.model.1.project.conv.weight", "module.conv_4.model.1.project.bn.weight", "module.conv_4.model.1.project.bn.bias", "module.conv_4.model.1.project.bn.running_mean", "module.conv_4.model.1.project.bn.running_var", "module.conv_4.model.1.project.bn.num_batches_tracked", "module.conv_4.model.2.conv.conv.weight", "module.conv_4.model.2.conv.bn.weight", "module.conv_4.model.2.conv.bn.bias", "module.conv_4.model.2.conv.bn.running_mean", "module.conv_4.model.2.conv.bn.running_var", "module.conv_4.model.2.conv.bn.num_batches_tracked", "module.conv_4.model.2.conv.prelu.weight", "module.conv_4.model.2.conv_dw.conv.weight", "module.conv_4.model.2.conv_dw.bn.weight", "module.conv_4.model.2.conv_dw.bn.bias", "module.conv_4.model.2.conv_dw.bn.running_mean", "module.conv_4.model.2.conv_dw.bn.running_var", "module.conv_4.model.2.conv_dw.bn.num_batches_tracked", "module.conv_4.model.2.conv_dw.prelu.weight", "module.conv_4.model.2.project.conv.weight", "module.conv_4.model.2.project.bn.weight", "module.conv_4.model.2.project.bn.bias", "module.conv_4.model.2.project.bn.running_mean", "module.conv_4.model.2.project.bn.running_var", "module.conv_4.model.2.project.bn.num_batches_tracked", "module.conv_4.model.3.conv.conv.weight", "module.conv_4.model.3.conv.bn.weight", "module.conv_4.model.3.conv.bn.bias", "module.conv_4.model.3.conv.bn.running_mean", "module.conv_4.model.3.conv.bn.running_var", "module.conv_4.model.3.conv.bn.num_batches_tracked", "module.conv_4.model.3.conv.prelu.weight", "module.conv_4.model.3.conv_dw.conv.weight", "module.conv_4.model.3.conv_dw.bn.weight", "module.conv_4.model.3.conv_dw.bn.bias", "module.conv_4.model.3.conv_dw.bn.running_mean", "module.conv_4.model.3.conv_dw.bn.running_var", "module.conv_4.model.3.conv_dw.bn.num_batches_tracked", "module.conv_4.model.3.conv_dw.prelu.weight", "module.conv_4.model.3.project.conv.weight", "module.conv_4.model.3.project.bn.weight", "module.conv_4.model.3.project.bn.bias", "module.conv_4.model.3.project.bn.running_mean", "module.conv_4.model.3.project.bn.running_var", "module.conv_4.model.3.project.bn.num_batches_tracked", "module.conv_4.model.4.conv.conv.weight", "module.conv_4.model.4.conv.bn.weight", "module.conv_4.model.4.conv.bn.bias", "module.conv_4.model.4.conv.bn.running_mean", "module.conv_4.model.4.conv.bn.running_var", "module.conv_4.model.4.conv.bn.num_batches_tracked", "module.conv_4.model.4.conv.prelu.weight", "module.conv_4.model.4.conv_dw.conv.weight", "module.conv_4.model.4.conv_dw.bn.weight", "module.conv_4.model.4.conv_dw.bn.bias", "module.conv_4.model.4.conv_dw.bn.running_mean", "module.conv_4.model.4.conv_dw.bn.running_var", "module.conv_4.model.4.conv_dw.bn.num_batches_tracked", "module.conv_4.model.4.conv_dw.prelu.weight", "module.conv_4.model.4.project.conv.weight", "module.conv_4.model.4.project.bn.weight", "module.conv_4.model.4.project.bn.bias", "module.conv_4.model.4.project.bn.running_mean", "module.conv_4.model.4.project.bn.running_var", "module.conv_4.model.4.project.bn.num_batches_tracked", "module.conv_4.model.5.conv.conv.weight", "module.conv_4.model.5.conv.bn.weight", "module.conv_4.model.5.conv.bn.bias", "module.conv_4.model.5.conv.bn.running_mean", "module.conv_4.model.5.conv.bn.running_var", "module.conv_4.model.5.conv.bn.num_batches_tracked", "module.conv_4.model.5.conv.prelu.weight", 
"module.conv_4.model.5.conv_dw.conv.weight", "module.conv_4.model.5.conv_dw.bn.weight", "module.conv_4.model.5.conv_dw.bn.bias", "module.conv_4.model.5.conv_dw.bn.running_mean", "module.conv_4.model.5.conv_dw.bn.running_var", "module.conv_4.model.5.conv_dw.bn.num_batches_tracked", "module.conv_4.model.5.conv_dw.prelu.weight", "module.conv_4.model.5.project.conv.weight", "module.conv_4.model.5.project.bn.weight", "module.conv_4.model.5.project.bn.bias", "module.conv_4.model.5.project.bn.running_mean", "module.conv_4.model.5.project.bn.running_var", "module.conv_4.model.5.project.bn.num_batches_tracked", "module.conv_45.conv.conv.weight", "module.conv_45.conv.bn.weight", "module.conv_45.conv.bn.bias", "module.conv_45.conv.bn.running_mean", "module.conv_45.conv.bn.running_var", "module.conv_45.conv.bn.num_batches_tracked", "module.conv_45.conv.prelu.weight", "module.conv_45.conv_dw.conv.weight", "module.conv_45.conv_dw.bn.weight", "module.conv_45.conv_dw.bn.bias", "module.conv_45.conv_dw.bn.running_mean", "module.conv_45.conv_dw.bn.running_var", "module.conv_45.conv_dw.bn.num_batches_tracked", "module.conv_45.conv_dw.prelu.weight", "module.conv_45.project.conv.weight", "module.conv_45.project.bn.weight", "module.conv_45.project.bn.bias", "module.conv_45.project.bn.running_mean", "module.conv_45.project.bn.running_var", "module.conv_45.project.bn.num_batches_tracked", "module.conv_5.model.0.conv.conv.weight", "module.conv_5.model.0.conv.bn.weight", "module.conv_5.model.0.conv.bn.bias", "module.conv_5.model.0.conv.bn.running_mean", "module.conv_5.model.0.conv.bn.running_var", "module.conv_5.model.0.conv.bn.num_batches_tracked", "module.conv_5.model.0.conv.prelu.weight", "module.conv_5.model.0.conv_dw.conv.weight", "module.conv_5.model.0.conv_dw.bn.weight", "module.conv_5.model.0.conv_dw.bn.bias", "module.conv_5.model.0.conv_dw.bn.running_mean", "module.conv_5.model.0.conv_dw.bn.running_var", "module.conv_5.model.0.conv_dw.bn.num_batches_tracked", "module.conv_5.model.0.conv_dw.prelu.weight", "module.conv_5.model.0.project.conv.weight", "module.conv_5.model.0.project.bn.weight", "module.conv_5.model.0.project.bn.bias", "module.conv_5.model.0.project.bn.running_mean", "module.conv_5.model.0.project.bn.running_var", "module.conv_5.model.0.project.bn.num_batches_tracked", "module.conv_5.model.1.conv.conv.weight", "module.conv_5.model.1.conv.bn.weight", "module.conv_5.model.1.conv.bn.bias", "module.conv_5.model.1.conv.bn.running_mean", "module.conv_5.model.1.conv.bn.running_var", "module.conv_5.model.1.conv.bn.num_batches_tracked", "module.conv_5.model.1.conv.prelu.weight", "module.conv_5.model.1.conv_dw.conv.weight", "module.conv_5.model.1.conv_dw.bn.weight", "module.conv_5.model.1.conv_dw.bn.bias", "module.conv_5.model.1.conv_dw.bn.running_mean", "module.conv_5.model.1.conv_dw.bn.running_var", "module.conv_5.model.1.conv_dw.bn.num_batches_tracked", "module.conv_5.model.1.conv_dw.prelu.weight", "module.conv_5.model.1.project.conv.weight", "module.conv_5.model.1.project.bn.weight", "module.conv_5.model.1.project.bn.bias", "module.conv_5.model.1.project.bn.running_mean", "module.conv_5.model.1.project.bn.running_var", "module.conv_5.model.1.project.bn.num_batches_tracked", "module.conv_6_sep.conv.weight", "module.conv_6_sep.bn.weight", "module.conv_6_sep.bn.bias", "module.conv_6_sep.bn.running_mean", "module.conv_6_sep.bn.running_var", "module.conv_6_sep.bn.num_batches_tracked", "module.conv_6_sep.prelu.weight", "module.conv_6_dw.conv.weight", "module.conv_6_dw.bn.weight", "module.conv_6_dw.bn.bias", 
"module.conv_6_dw.bn.running_mean", "module.conv_6_dw.bn.running_var", "module.conv_6_dw.bn.num_batches_tracked", "module.linear.weight", "module.bn.weight", "module.bn.bias", "module.bn.running_mean", "module.bn.running_var", "module.bn.num_batches_tracked".
