
chuanqi305 / mobilenetv2-ssdlite

445 stars · 29 watchers · 232 forks · 691 KB

Caffe implementation of SSD and SSDLite detection on MobileNetV2, converted from TensorFlow.

License: MIT License

Python 80.23% C++ 12.13% Cuda 6.69% Shell 0.96%
caffe mobilenetv2 ssd caffemodel ssdlite mobilenetv2-ssdlite mobilenet

mobilenetv2-ssdlite's Introduction

MobileNetv2-SSDLite

Caffe implementation of SSD detection on MobileNetV2, converted from TensorFlow.

Prerequisites

TensorFlow and the SSD version of Caffe are properly installed on your computer.

Usage

  1. First, download the original model from the TensorFlow model zoo.
  2. Use gen_model.py to generate train.prototxt and deploy.prototxt (or use the default prototxt):
python gen_model.py -s deploy -c 91 >deploy.prototxt
  3. Use dump_tensorflow_weights.py to dump the weights of the convolution and batch-norm layers.
  4. Use load_caffe_weights.py to load the dumped weights into deploy.caffemodel (a minimal sketch of this step follows the list).
  5. Use the code in src to accelerate your training if you have cuDNN 7, or add "engine: CAFFE" to the depthwise convolution layers to work around the memory issue.
  6. The original TensorFlow model is trained on the MS COCO dataset; if you need a model for the VOC dataset, use coco2voc.py to get deploy_voc.caffemodel.
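
A minimal sketch of step 4 (loading the dumped weights), assuming the raw float32 .dat files that dump_tensorflow_weights.py writes into output/; this is illustrative, not the exact logic of load_caffe_weights.py, which also handles biases and the batch-norm blobs:

import numpy as np
import caffe

def load_blob(path, shape):
    # Each .dat file is a raw float32 dump of one Caffe blob.
    return np.fromfile(path, dtype=np.float32).reshape(shape)

caffe.set_mode_cpu()
net = caffe.Net('deploy.prototxt', caffe.TEST)
for name in net.params:
    prefix = name.replace('/', '_')  # e.g. 'conv_1/expand' -> 'conv_1_expand'
    net.params[name][0].data[...] = load_blob(
        'output/%s_weights.dat' % prefix, net.params[name][0].data.shape)
net.save('deploy.caffemodel')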

Train your own dataset

  1. Generate trainval_lmdb and test_lmdb from your dataset.
  2. Write a labelmap.prototxt (a minimal example follows this list).
  3. Use gen_model.py to generate the prototxt files, replacing CLASS_NUM with the number of classes in your own dataset (background included):
python gen_model.py -s train -c CLASS_NUM >train.prototxt
python gen_model.py -s test -c CLASS_NUM >test.prototxt
python gen_model.py -s deploy -c CLASS_NUM >deploy.prototxt
  4. Copy coco/solver_train.prototxt and coco/train.sh to your project and start training.
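
A minimal labelmap.prototxt for step 2, written here as a small Python writer so it can be scripted; the item format (with label 0 reserved for background) is the one used by the Caffe SSD fork, and "plate" is just an example class name:

labelmap = '''item {
  name: "none_of_the_above"
  label: 0
  display_name: "background"
}
item {
  name: "plate"
  label: 1
  display_name: "plate"
}
'''

with open('labelmap.prototxt', 'w') as f:
    f.write(labelmap)

With this labelmap, CLASS_NUM above would be 2 (one object class plus background).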

Note

There are some differences between the Caffe and TensorFlow implementations:

  1. The 'SAME' padding method in TensorFlow sometimes uses asymmetric [0, 0, 1, 1] padding, meaning top=0, left=0, bottom=1, right=1. In Caffe there is no parameter that can express that kind of asymmetric padding.
  2. MobileNet on TensorFlow uses the ReLU6 layer y = min(max(x, 0), 6), but Caffe has no ReLU6 layer. Replacing ReLU6 with ReLU causes a small accuracy drop in ssd-mobilenetv2, but a very large drop in ssdlite-mobilenetv2. There is a ReLU6 layer implementation in my fork of ssd.
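
NumPy illustrations of the two differences above: relu6 is exactly y = min(max(x, 0), 6), and the pad call mimics the asymmetric bottom/right [0, 0, 1, 1] padding that plain Caffe cannot express:

import numpy as np

def relu6(x):
    return np.minimum(np.maximum(x, 0.0), 6.0)

def tf_same_pad_bottom_right(x):
    # x: (H, W) feature map; pad 0 on top/left, 1 on bottom/right.
    return np.pad(x, ((0, 1), (0, 1)), mode='constant')

print(relu6(np.array([-1.0, 3.0, 8.0])))                # [0. 3. 6.]
print(tf_same_pad_bottom_right(np.ones((3, 3))).shape)  # (4, 4)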


mobilenetv2-ssdlite's Issues

Check failed: target_blobs.size() == source_layer.blobs_size() (1 vs. 2) Incompatible number of blobs for layer Conv

I used gen_model.py to generate train.prototxt. However, when training MobileNetv2-SSDLite, this issue arose.

I1213 14:58:07.939759 21634 net.cpp:761] Ignoring source layer input
I1213 14:58:07.939764 21634 net.cpp:761] Ignoring source layer data_input_0_split
F1213 14:58:07.939779 21634 net.cpp:767] Check failed: target_blobs.size() == source_layer.blobs_size() (1 vs. 2) Incompatible number of blobs for layer Conv
*** Check failure stack trace: ***

Question about the merge_bn.py

When I use v1's merge_bn.py to merge v2's model, it reports the following error. Does anyone know how to solve it? Thank you.

I0531 13:10:13.336994 270 net.cpp:283] Network initialization done.
Traceback (most recent call last):
File "merge_bn.py", line 63, in
merge_bn(net, net_deploy)
File "merge_bn.py", line 57, in merge_bn
nob.params[key][1].data[...] = bias
IndexError: Index out of range

mobilenetv2-ssd 0.25 training

Hi @chuanqi305, I am trying to train the MobileNetV2-SSD 0.25 model; the train.prototxt was generated by ssd/gen_model.py with the size parameter set to 0.25. But the loss goes down and then stays between 7 and 8. Do you have any ideas?

IOError: [Errno 2] No such file or directory: 'output/Conv_bn_moving_mean.dat'

I1228 21:56:35.993526 22146 net.cpp:228] data_input_0_split does not need backward computation.
I1228 21:56:35.993530 22146 net.cpp:228] input does not need backward computation.
I1228 21:56:35.993532 22146 net.cpp:270] This network produces output detection_out
I1228 21:56:35.993688 22146 net.cpp:283] Network initialization done.
Conv
conv
Conv/bn
conv
Traceback (most recent call last):
File "load_caffe_weights.py", line 82, in
load_data(net_deploy)
File "load_caffe_weights.py", line 29, in load_data
net.params[key][0].data[...] = load_weights(prefix + '_moving_mean.dat')
File "load_caffe_weights.py", line 15, in load_weights
weights = np.fromfile(path, dtype=np.float32)
IOError: [Errno 2] No such file or directory: 'output/Conv_bn_moving_mean.dat'

The demo error

Hi, chuanqi:
I ran the demo to test images, but an error occurred: error == cudaSuccess (48 vs. 0) no kernel image is available for execution on the device
The code was run on a Jetson TX2 with CUDA 9 and cuDNN 7; I wonder whether the error was caused by the CUDA version.
Thanks!

training loss doesn't decrease after several hundred steps

Thanks for your great work! I am using your train.prototxt to train on my dataset and fine-tune from your deploy_voc.caffemodel, but the loss decreases from 16 to 11 within several steps and then doesn't decrease anymore.
Then I used your train.prototxt and deploy_voc.caffemodel to continue training the network on the VOC2007 and VOC2012 datasets; the original loss is 7.2, but it suddenly changed to 13 after 10 steps and then stayed around 11. So the problem that happened on my own dataset also appears on your dataset.

Could you help explain why? Thanks! By the way, the inference is good when using your deploy.prototxt and deploy_voc.caffemodel.

The following is the log from training on the VOC dataset with your train.prototxt and deploy_voc.caffemodel:
I0913 22:55:11.996824 72654 solver.cpp:259] Train net output #0: mbox_loss = 7.28595 (* 1 = 7.28595 loss)
I0913 22:55:11.996837 72654 sgd_solver.cpp:138] Iteration 0, lr = 1e-05
I0913 22:55:47.544627 72654 solver.cpp:243] Iteration 10, loss = 13.3957
I0913 22:55:47.544867 72654 solver.cpp:259] Train net output #0: mbox_loss = 13.1836 (* 1 = 13.1836 loss)
I0913 22:55:47.544908 72654 sgd_solver.cpp:138] Iteration 10, lr = 1e-05
I0913 22:56:21.804133 72654 solver.cpp:243] Iteration 20, loss = 13.1605
I0913 22:56:21.804299 72654 solver.cpp:259] Train net output #0: mbox_loss = 13.5674 (* 1 = 13.5674 loss)
I0913 22:56:21.804311 72654 sgd_solver.cpp:138] Iteration 20, lr = 1e-05
I0913 22:56:55.918035 72654 solver.cpp:243] Iteration 30, loss = 13.1157
I0913 22:56:55.918321 72654 solver.cpp:259] Train net output #0: mbox_loss = 13.2727 (* 1 = 13.2727 loss)

dump_tensorflow_weights.py with error

BoxPredictor_1/BoxEncodingPredictor/weights
Traceback (most recent call last):
File "dump_tensorflow_weights.py", line 54, in <module>
tmp = caffe_weights.reshape(n, 4, -1)
ValueError: cannot reshape array of size 276480 into shape (68,4,newaxis)
The bug seems to be at line 53:
n = caffe_bias.shape[0] / 4
which maybe has to be changed to
n = caffe_weights.shape[0] / 4
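
For what it's worth, the numbers in the traceback are consistent with that fix (my reading, not a verified patch): reshape(n, 4, -1) on 276480 elements only works when 276480 is divisible by 4*n.

print(276480 % (4 * 68))   # 128 -> n = 68 (from caffe_bias) cannot work
print(276480 % (4 * 6))    # 0   -> n = 6 works (24 weight channels / 4)
print(273 // 4)            # 68  -> the stale 68 matches a 273-element
                           #       ClassPredictor bias (3 boxes * 91 classes)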

result boxes have an offset for the ssdlite_mobilenet_v2_coco net

I converted the ssdlite_mobilenet_v2_coco_2018_05_09 model from TensorFlow to Caffe, but the result boxes from Caffe (I use your ssd caffe with the ReLU6 version) all have some offset. Any suggestions about this issue?

(screenshots ssd-1, ssd-2, ssd-3 attached)

I see there is a has_tf_pad parameter in base_conv_layer.cpp; how do I add this parameter to the prototxt?

Error when creating deploy.caffemodel with two classes using load_caffe_weights.py, ValueError: cannot reshape array of size 157248 into shape (6,576,1,1)

dump_tensorflow_weights.py works well with the downloaded ssdlite_mobilenet_v2_coco_2018_05_09 model, but it gets stuck when converting to a Caffe model with the load_caffe_weights.py script.

Traceback (most recent call last):
File "load_caffe_weights.py", line 82, in
load_data(net_deploy)
File "load_caffe_weights.py", line 74, in load_data
net.params[key][0].data[...] = load_weights(prefix + '_weights.dat', net.params[key][0].data.shape)
File "load_caffe_weights.py", line 17, in load_weights
weights = np.fromfile(path, dtype=np.float32).reshape(shape)
ValueError: cannot reshape array of size 157248 into shape (6,576,1,1)
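
For what it's worth, the sizes in the error decompose cleanly (my reading, worth double-checking): the dumped .dat file still holds 91-class weights, while the prototxt was generated for 2 classes.

print(157248 == 3 * 91 * 576)  # True: 3 priors * 91 classes * 576 inputs
print(6 == 3 * 2)              # expected: 3 priors * 2 classes output channels

So dump_tensorflow_weights.py also needs its class count changed before re-dumping, as the next issue describes.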

Issue converting retrained ssd_mobilenet_v2 to caffe model

I am able to convert the official ssd_mobilenet_v2 trained on MS COCO to a Caffe model, by using dump_tensorflow_weights and load_caffe_weights afterwards.

However, when I want to convert my fine-tuned model based on this checkpoint, which only contains one class (apart from background), I fail at the load_caffe_weights conversion step.

I am not exactly sure what I have to change in order to reflect the variation in output classes. So first I changed the number of classes in dump_tensorflow_weights from 91 to 2 on lines 56 and 97. I also created a new prototxt file using: "python gen_model.py -s deploy -c 2 >deploy.prototxt"

When I execute load_caffe_weights, I get the following error: "IOError: [Errno 2] No such file or directory: 'output/Conv_bn_moving_mean.dat'" The "Conv_bn" output indeed has no "moving_mean" file, only "beta" and "gamma".

Am I missing some changes to load_caffe_weights? Thanks in advance for any help on this; I am really stuck here and this is an important project for me.

PS: I uploaded my model here: https://transfer.sh/6qrxJ/platedetection.pb. Its only class apart from background is "plate".

Problem in training the model

Hi,
I am trying to train the model on the face detection dataset WIDER FACE, following the SFD tricks with SSD. The changes I made to your model are that I used different layers for detection and a different input image size, 640x640. I also used the converted TensorFlow model as the pretrained model. However, while training, the validation accuracy is not increasing; it oscillates around very low values (0.09 - 0.11) and never rises.
Note that I made the same changes and used this dataset to train your MobileNetV1 model from the MobileNet-SSD repo, and it worked fine: after 5K iterations the validation accuracy was around 0.3. But following the same approach with the MobileNetV2 model didn't help, and the validation accuracy stays very low for a long time, which indicates that the model is not learning.
Here you can find all the files I used for training, in addition to the output log file of the training.
https://drive.google.com/drive/folders/1G3Kb81POy7kplCp3PMhzEGSJ9OHODuW6?usp=sharing

Thank you.

Wrong prediction when running caffe_demo for ssdlite using coco dataset

Awesome execution of the idea.
After following all the steps, when I run the demo_caffe.py script, I face the issue below:
I0530 12:31:54.303753 14700 layer_factory.hpp:77] Creating layer Conv/relu
F0530 12:31:54.303789 14700 layer_factory.hpp:81] Check failed: registry.count(type) == 1 (0 vs. 1) Unknown layer type: ReLU6 (known types: AbsVal, Accuracy, AnnotatedData, ArgMax, BNLL, BatchNorm, BatchReindex, Bias, Concat, ContrastiveLoss, Convolution, Crop, Data, Deconvolution, DetectionEvaluate, DetectionOutput, Dropout, DummyData, ELU, Eltwise, Embed, EuclideanLoss, Exp, Filter, Flatten, HDF5Data, HDF5Output, HingeLoss, Im2col, ImageData, InfogainLoss, InnerProduct, Input, LRN, LSTM, LSTMUnit, Log, MVN, MemoryData, MultiBoxLoss, MultinomialLogisticLoss, Normalize, PReLU, Parameter, Permute, Pooling, Power, PriorBox, RNN, ReLU, Reduction, Reshape, SPP, Scale, Sigmoid, SigmoidCrossEntropyLoss, Silence, Slice, SmoothL1Loss, Softmax, SoftmaxWithLoss, Split, TanH, Threshold, Tile, VideoData, WindowData)
*** Check failure stack trace: ***
Aborted (core dumped)
(screenshot caffe_ssdlite attached)

Mobilenetssd512

Thanks for the great work.
Do you have a MobileNet-SSDLite 512 or 608 architecture?
I want to know the minimum and maximum sizes for each layer of a higher-resolution network.

Input size issue

Thanks for this great work. I notice that the input size of MobileNet-SSDLite is 320x320 in the original paper, whereas it is 300x300 in deploy.prototxt. Why are they different?

Out of Memory?

Has anyone run into this issue? I executed all of the commands as instructed, but when I get to running "python load_caffe_weights.py" I get the following error:
F0618 16:07:32.816975 43606 cudnn_conv_layer.cpp:52] Check failed: error == cudaSuccess (2 vs. 0) out of memory
*** Check failure stack trace: ***
Aborted (core dumped)

Here is the full runtime message:
$ python load_caffe_weights.py
...
I0618 16:07:31.268625 43606 layer_factory.hpp:77] Creating layer input
I0618 16:07:31.268640 43606 net.cpp:100] Creating Layer input
I0618 16:07:31.268646 43606 net.cpp:408] input -> data
I0618 16:07:31.268671 43606 net.cpp:150] Setting up input
I0618 16:07:31.268677 43606 net.cpp:157] Top shape: 1 3 300 300 (270000)
I0618 16:07:31.268682 43606 net.cpp:165] Memory required for data: 1080000
I0618 16:07:31.268687 43606 layer_factory.hpp:77] Creating layer data_input_0_split
I0618 16:07:31.268697 43606 net.cpp:100] Creating Layer data_input_0_split
I0618 16:07:31.268702 43606 net.cpp:434] data_input_0_split <- data
I0618 16:07:31.268708 43606 net.cpp:408] data_input_0_split -> data_input_0_split_0
I0618 16:07:31.268718 43606 net.cpp:408] data_input_0_split -> data_input_0_split_1
I0618 16:07:31.268726 43606 net.cpp:408] data_input_0_split -> data_input_0_split_2
I0618 16:07:31.268734 43606 net.cpp:408] data_input_0_split -> data_input_0_split_3
I0618 16:07:31.268743 43606 net.cpp:408] data_input_0_split -> data_input_0_split_4
I0618 16:07:31.268749 43606 net.cpp:408] data_input_0_split -> data_input_0_split_5
I0618 16:07:31.268757 43606 net.cpp:408] data_input_0_split -> data_input_0_split_6
I0618 16:07:31.268767 43606 net.cpp:150] Setting up data_input_0_split
I0618 16:07:31.268774 43606 net.cpp:157] Top shape: 1 3 300 300 (270000)
I0618 16:07:31.268779 43606 net.cpp:157] Top shape: 1 3 300 300 (270000)
I0618 16:07:31.268785 43606 net.cpp:157] Top shape: 1 3 300 300 (270000)
I0618 16:07:31.268791 43606 net.cpp:157] Top shape: 1 3 300 300 (270000)
I0618 16:07:31.268796 43606 net.cpp:157] Top shape: 1 3 300 300 (270000)
I0618 16:07:31.268802 43606 net.cpp:157] Top shape: 1 3 300 300 (270000)
I0618 16:07:31.268807 43606 net.cpp:157] Top shape: 1 3 300 300 (270000)
I0618 16:07:31.268812 43606 net.cpp:165] Memory required for data: 8640000
I0618 16:07:31.268816 43606 layer_factory.hpp:77] Creating layer Conv
I0618 16:07:31.268827 43606 net.cpp:100] Creating Layer Conv
I0618 16:07:31.268833 43606 net.cpp:434] Conv <- data_input_0_split_0
I0618 16:07:31.268841 43606 net.cpp:408] Conv -> Conv
I0618 16:07:31.691318 43606 net.cpp:150] Setting up Conv
I0618 16:07:31.691352 43606 net.cpp:157] Top shape: 1 32 150 150 (720000)
I0618 16:07:31.691355 43606 net.cpp:165] Memory required for data: 11520000
I0618 16:07:31.691367 43606 layer_factory.hpp:77] Creating layer Conv/relu
I0618 16:07:31.691376 43606 net.cpp:100] Creating Layer Conv/relu
I0618 16:07:31.691380 43606 net.cpp:434] Conv/relu <- Conv
I0618 16:07:31.691385 43606 net.cpp:395] Conv/relu -> Conv (in-place)
I0618 16:07:31.691552 43606 net.cpp:150] Setting up Conv/relu
I0618 16:07:31.691560 43606 net.cpp:157] Top shape: 1 32 150 150 (720000)
I0618 16:07:31.691563 43606 net.cpp:165] Memory required for data: 14400000
I0618 16:07:31.691570 43606 layer_factory.hpp:77] Creating layer conv/depthwise
I0618 16:07:31.691586 43606 net.cpp:100] Creating Layer conv/depthwise
I0618 16:07:31.691591 43606 net.cpp:434] conv/depthwise <- Conv
I0618 16:07:31.691598 43606 net.cpp:408] conv/depthwise -> conv/depthwise
I0618 16:07:31.709228 43606 net.cpp:150] Setting up conv/depthwise
I0618 16:07:31.709251 43606 net.cpp:157] Top shape: 1 32 150 150 (720000)
I0618 16:07:31.709254 43606 net.cpp:165] Memory required for data: 17280000
I0618 16:07:31.709265 43606 layer_factory.hpp:77] Creating layer conv/depthwise/relu
I0618 16:07:31.709275 43606 net.cpp:100] Creating Layer conv/depthwise/relu
I0618 16:07:31.709278 43606 net.cpp:434] conv/depthwise/relu <- conv/depthwise
I0618 16:07:31.709283 43606 net.cpp:395] conv/depthwise/relu -> conv/depthwise (in-place)
I0618 16:07:31.709456 43606 net.cpp:150] Setting up conv/depthwise/relu
I0618 16:07:31.709463 43606 net.cpp:157] Top shape: 1 32 150 150 (720000)
I0618 16:07:31.709467 43606 net.cpp:165] Memory required for data: 20160000
I0618 16:07:31.709472 43606 layer_factory.hpp:77] Creating layer conv/project
I0618 16:07:31.709491 43606 net.cpp:100] Creating Layer conv/project
I0618 16:07:31.709496 43606 net.cpp:434] conv/project <- conv/depthwise
I0618 16:07:31.709503 43606 net.cpp:408] conv/project -> conv/project
I0618 16:07:31.710234 43606 net.cpp:150] Setting up conv/project
I0618 16:07:31.710247 43606 net.cpp:157] Top shape: 1 16 150 150 (360000)
I0618 16:07:31.710253 43606 net.cpp:165] Memory required for data: 21600000
I0618 16:07:31.710264 43606 layer_factory.hpp:77] Creating layer conv_1/expand
I0618 16:07:31.710279 43606 net.cpp:100] Creating Layer conv_1/expand
I0618 16:07:31.710284 43606 net.cpp:434] conv_1/expand <- conv/project
I0618 16:07:31.710294 43606 net.cpp:408] conv_1/expand -> conv_1/expand
I0618 16:07:31.710984 43606 net.cpp:150] Setting up conv_1/expand
I0618 16:07:31.710995 43606 net.cpp:157] Top shape: 1 96 150 150 (2160000)
I0618 16:07:31.711000 43606 net.cpp:165] Memory required for data: 30240000
I0618 16:07:31.711009 43606 layer_factory.hpp:77] Creating layer conv_1/expand/relu
I0618 16:07:31.711021 43606 net.cpp:100] Creating Layer conv_1/expand/relu
I0618 16:07:31.711026 43606 net.cpp:434] conv_1/expand/relu <- conv_1/expand
I0618 16:07:31.711035 43606 net.cpp:395] conv_1/expand/relu -> conv_1/expand (in-place)
I0618 16:07:31.711334 43606 net.cpp:150] Setting up conv_1/expand/relu
I0618 16:07:31.711343 43606 net.cpp:157] Top shape: 1 96 150 150 (2160000)
I0618 16:07:31.711347 43606 net.cpp:165] Memory required for data: 38880000
I0618 16:07:31.711352 43606 layer_factory.hpp:77] Creating layer conv_1/depthwise
I0618 16:07:31.711367 43606 net.cpp:100] Creating Layer conv_1/depthwise
I0618 16:07:31.711372 43606 net.cpp:434] conv_1/depthwise <- conv_1/expand
I0618 16:07:31.711381 43606 net.cpp:408] conv_1/depthwise -> conv_1/depthwise
I0618 16:07:31.769933 43606 net.cpp:150] Setting up conv_1/depthwise
I0618 16:07:31.769959 43606 net.cpp:157] Top shape: 1 96 75 75 (540000)
I0618 16:07:31.769963 43606 net.cpp:165] Memory required for data: 41040000
I0618 16:07:31.769979 43606 layer_factory.hpp:77] Creating layer conv_1/depthwise/relu
I0618 16:07:31.769989 43606 net.cpp:100] Creating Layer conv_1/depthwise/relu
I0618 16:07:31.769994 43606 net.cpp:434] conv_1/depthwise/relu <- conv_1/depthwise
I0618 16:07:31.770002 43606 net.cpp:395] conv_1/depthwise/relu -> conv_1/depthwise (in-place)
I0618 16:07:31.770225 43606 net.cpp:150] Setting up conv_1/depthwise/relu
I0618 16:07:31.770233 43606 net.cpp:157] Top shape: 1 96 75 75 (540000)
I0618 16:07:31.770238 43606 net.cpp:165] Memory required for data: 43200000
I0618 16:07:31.770246 43606 layer_factory.hpp:77] Creating layer conv_1/project
I0618 16:07:31.770262 43606 net.cpp:100] Creating Layer conv_1/project
I0618 16:07:31.770267 43606 net.cpp:434] conv_1/project <- conv_1/depthwise
I0618 16:07:31.770279 43606 net.cpp:408] conv_1/project -> conv_1/project
I0618 16:07:31.771124 43606 net.cpp:150] Setting up conv_1/project
I0618 16:07:31.771138 43606 net.cpp:157] Top shape: 1 24 75 75 (135000)
I0618 16:07:31.771142 43606 net.cpp:165] Memory required for data: 43740000
I0618 16:07:31.771152 43606 layer_factory.hpp:77] Creating layer conv_1/project_conv_1/project_0_split
I0618 16:07:31.771163 43606 net.cpp:100] Creating Layer conv_1/project_conv_1/project_0_split
I0618 16:07:31.771169 43606 net.cpp:434] conv_1/project_conv_1/project_0_split <- conv_1/project
I0618 16:07:31.771176 43606 net.cpp:408] conv_1/project_conv_1/project_0_split -> conv_1/project_conv_1/project_0_split_0
I0618 16:07:31.771188 43606 net.cpp:408] conv_1/project_conv_1/project_0_split -> conv_1/project_conv_1/project_0_split_1
I0618 16:07:31.771198 43606 net.cpp:150] Setting up conv_1/project_conv_1/project_0_split
I0618 16:07:31.771204 43606 net.cpp:157] Top shape: 1 24 75 75 (135000)
I0618 16:07:31.771209 43606 net.cpp:157] Top shape: 1 24 75 75 (135000)
I0618 16:07:31.771214 43606 net.cpp:165] Memory required for data: 44820000
I0618 16:07:31.771219 43606 layer_factory.hpp:77] Creating layer conv_2/expand
I0618 16:07:31.771229 43606 net.cpp:100] Creating Layer conv_2/expand
I0618 16:07:31.771235 43606 net.cpp:434] conv_2/expand <- conv_1/project_conv_1/project_0_split_0
I0618 16:07:31.771245 43606 net.cpp:408] conv_2/expand -> conv_2/expand
I0618 16:07:31.772011 43606 net.cpp:150] Setting up conv_2/expand
I0618 16:07:31.772025 43606 net.cpp:157] Top shape: 1 144 75 75 (810000)
I0618 16:07:31.772032 43606 net.cpp:165] Memory required for data: 48060000
I0618 16:07:31.772040 43606 layer_factory.hpp:77] Creating layer conv_2/expand/relu
I0618 16:07:31.772049 43606 net.cpp:100] Creating Layer conv_2/expand/relu
I0618 16:07:31.772055 43606 net.cpp:434] conv_2/expand/relu <- conv_2/expand
I0618 16:07:31.772064 43606 net.cpp:395] conv_2/expand/relu -> conv_2/expand (in-place)
I0618 16:07:31.772256 43606 net.cpp:150] Setting up conv_2/expand/relu
I0618 16:07:31.772262 43606 net.cpp:157] Top shape: 1 144 75 75 (810000)
I0618 16:07:31.772267 43606 net.cpp:165] Memory required for data: 51300000
I0618 16:07:31.772272 43606 layer_factory.hpp:77] Creating layer conv_2/depthwise
I0618 16:07:31.772285 43606 net.cpp:100] Creating Layer conv_2/depthwise
I0618 16:07:31.772291 43606 net.cpp:434] conv_2/depthwise <- conv_2/expand
I0618 16:07:31.772300 43606 net.cpp:408] conv_2/depthwise -> conv_2/depthwise
I0618 16:07:31.868156 43606 net.cpp:150] Setting up conv_2/depthwise
I0618 16:07:31.868180 43606 net.cpp:157] Top shape: 1 144 75 75 (810000)
I0618 16:07:31.868185 43606 net.cpp:165] Memory required for data: 54540000
I0618 16:07:31.868199 43606 layer_factory.hpp:77] Creating layer conv_2/depthwise/relu
I0618 16:07:31.868218 43606 net.cpp:100] Creating Layer conv_2/depthwise/relu
I0618 16:07:31.868225 43606 net.cpp:434] conv_2/depthwise/relu <- conv_2/depthwise
I0618 16:07:31.868235 43606 net.cpp:395] conv_2/depthwise/relu -> conv_2/depthwise (in-place)
I0618 16:07:31.868607 43606 net.cpp:150] Setting up conv_2/depthwise/relu
I0618 16:07:31.868618 43606 net.cpp:157] Top shape: 1 144 75 75 (810000)
I0618 16:07:31.868623 43606 net.cpp:165] Memory required for data: 57780000
I0618 16:07:31.868629 43606 layer_factory.hpp:77] Creating layer conv_2/project
I0618 16:07:31.868647 43606 net.cpp:100] Creating Layer conv_2/project
I0618 16:07:31.868654 43606 net.cpp:434] conv_2/project <- conv_2/depthwise
I0618 16:07:31.868662 43606 net.cpp:408] conv_2/project -> conv_2/project
I0618 16:07:31.869462 43606 net.cpp:150] Setting up conv_2/project
I0618 16:07:31.869473 43606 net.cpp:157] Top shape: 1 24 75 75 (135000)
I0618 16:07:31.869478 43606 net.cpp:165] Memory required for data: 58320000
I0618 16:07:31.869493 43606 layer_factory.hpp:77] Creating layer conv_2/sum
I0618 16:07:31.869503 43606 net.cpp:100] Creating Layer conv_2/sum
I0618 16:07:31.869508 43606 net.cpp:434] conv_2/sum <- conv_1/project_conv_1/project_0_split_1
I0618 16:07:31.869515 43606 net.cpp:434] conv_2/sum <- conv_2/project
I0618 16:07:31.869523 43606 net.cpp:408] conv_2/sum -> conv_2
I0618 16:07:31.869544 43606 net.cpp:150] Setting up conv_2/sum
I0618 16:07:31.869549 43606 net.cpp:157] Top shape: 1 24 75 75 (135000)
I0618 16:07:31.869552 43606 net.cpp:165] Memory required for data: 58860000
I0618 16:07:31.869557 43606 layer_factory.hpp:77] Creating layer conv_3/expand
I0618 16:07:31.869570 43606 net.cpp:100] Creating Layer conv_3/expand
I0618 16:07:31.869575 43606 net.cpp:434] conv_3/expand <- conv_2
I0618 16:07:31.869581 43606 net.cpp:408] conv_3/expand -> conv_3/expand
I0618 16:07:31.870421 43606 net.cpp:150] Setting up conv_3/expand
I0618 16:07:31.870434 43606 net.cpp:157] Top shape: 1 144 75 75 (810000)
I0618 16:07:31.870437 43606 net.cpp:165] Memory required for data: 62100000
I0618 16:07:31.870446 43606 layer_factory.hpp:77] Creating layer conv_3/expand/relu
I0618 16:07:31.870457 43606 net.cpp:100] Creating Layer conv_3/expand/relu
I0618 16:07:31.870463 43606 net.cpp:434] conv_3/expand/relu <- conv_3/expand
I0618 16:07:31.870471 43606 net.cpp:395] conv_3/expand/relu -> conv_3/expand (in-place)
I0618 16:07:31.870661 43606 net.cpp:150] Setting up conv_3/expand/relu
I0618 16:07:31.870668 43606 net.cpp:157] Top shape: 1 144 75 75 (810000)
I0618 16:07:31.870672 43606 net.cpp:165] Memory required for data: 65340000
I0618 16:07:31.870678 43606 layer_factory.hpp:77] Creating layer conv_3/depthwise
I0618 16:07:31.870692 43606 net.cpp:100] Creating Layer conv_3/depthwise
I0618 16:07:31.870697 43606 net.cpp:434] conv_3/depthwise <- conv_3/expand
I0618 16:07:31.870707 43606 net.cpp:408] conv_3/depthwise -> conv_3/depthwise
I0618 16:07:31.974457 43606 net.cpp:150] Setting up conv_3/depthwise
I0618 16:07:31.974479 43606 net.cpp:157] Top shape: 1 144 38 38 (207936)
I0618 16:07:31.974483 43606 net.cpp:165] Memory required for data: 66171744
I0618 16:07:31.974491 43606 layer_factory.hpp:77] Creating layer conv_3/depthwise/relu
I0618 16:07:31.974501 43606 net.cpp:100] Creating Layer conv_3/depthwise/relu
I0618 16:07:31.974505 43606 net.cpp:434] conv_3/depthwise/relu <- conv_3/depthwise
I0618 16:07:31.974511 43606 net.cpp:395] conv_3/depthwise/relu -> conv_3/depthwise (in-place)
I0618 16:07:31.974733 43606 net.cpp:150] Setting up conv_3/depthwise/relu
I0618 16:07:31.974742 43606 net.cpp:157] Top shape: 1 144 38 38 (207936)
I0618 16:07:31.974747 43606 net.cpp:165] Memory required for data: 67003488
I0618 16:07:31.974752 43606 layer_factory.hpp:77] Creating layer conv_3/project
I0618 16:07:31.974771 43606 net.cpp:100] Creating Layer conv_3/project
I0618 16:07:31.974776 43606 net.cpp:434] conv_3/project <- conv_3/depthwise
I0618 16:07:31.974784 43606 net.cpp:408] conv_3/project -> conv_3/project
I0618 16:07:31.975775 43606 net.cpp:150] Setting up conv_3/project
I0618 16:07:31.975788 43606 net.cpp:157] Top shape: 1 32 38 38 (46208)
I0618 16:07:31.975793 43606 net.cpp:165] Memory required for data: 67188320
I0618 16:07:31.975802 43606 layer_factory.hpp:77] Creating layer conv_3/project_conv_3/project_0_split
I0618 16:07:31.975813 43606 net.cpp:100] Creating Layer conv_3/project_conv_3/project_0_split
I0618 16:07:31.975819 43606 net.cpp:434] conv_3/project_conv_3/project_0_split <- conv_3/project
I0618 16:07:31.975828 43606 net.cpp:408] conv_3/project_conv_3/project_0_split -> conv_3/project_conv_3/project_0_split_0
I0618 16:07:31.975838 43606 net.cpp:408] conv_3/project_conv_3/project_0_split -> conv_3/project_conv_3/project_0_split_1
I0618 16:07:31.975849 43606 net.cpp:150] Setting up conv_3/project_conv_3/project_0_split
I0618 16:07:31.975855 43606 net.cpp:157] Top shape: 1 32 38 38 (46208)
I0618 16:07:31.975862 43606 net.cpp:157] Top shape: 1 32 38 38 (46208)
I0618 16:07:31.975867 43606 net.cpp:165] Memory required for data: 67557984
I0618 16:07:31.975870 43606 layer_factory.hpp:77] Creating layer conv_4/expand
I0618 16:07:31.975883 43606 net.cpp:100] Creating Layer conv_4/expand
I0618 16:07:31.975888 43606 net.cpp:434] conv_4/expand <- conv_3/project_conv_3/project_0_split_0
I0618 16:07:31.975896 43606 net.cpp:408] conv_4/expand -> conv_4/expand
I0618 16:07:31.976830 43606 net.cpp:150] Setting up conv_4/expand
I0618 16:07:31.976842 43606 net.cpp:157] Top shape: 1 192 38 38 (277248)
I0618 16:07:31.976847 43606 net.cpp:165] Memory required for data: 68666976
I0618 16:07:31.976857 43606 layer_factory.hpp:77] Creating layer conv_4/expand/relu
I0618 16:07:31.976866 43606 net.cpp:100] Creating Layer conv_4/expand/relu
I0618 16:07:31.976872 43606 net.cpp:434] conv_4/expand/relu <- conv_4/expand
I0618 16:07:31.976881 43606 net.cpp:395] conv_4/expand/relu -> conv_4/expand (in-place)
I0618 16:07:31.977084 43606 net.cpp:150] Setting up conv_4/expand/relu
I0618 16:07:31.977090 43606 net.cpp:157] Top shape: 1 192 38 38 (277248)
I0618 16:07:31.977095 43606 net.cpp:165] Memory required for data: 69775968
I0618 16:07:31.977102 43606 layer_factory.hpp:77] Creating layer conv_4/depthwise
I0618 16:07:31.977113 43606 net.cpp:100] Creating Layer conv_4/depthwise
I0618 16:07:31.977118 43606 net.cpp:434] conv_4/depthwise <- conv_4/expand
I0618 16:07:31.977128 43606 net.cpp:408] conv_4/depthwise -> conv_4/depthwise
I0618 16:07:32.114236 43606 net.cpp:150] Setting up conv_4/depthwise
I0618 16:07:32.114261 43606 net.cpp:157] Top shape: 1 192 38 38 (277248)
I0618 16:07:32.114264 43606 net.cpp:165] Memory required for data: 70884960
I0618 16:07:32.114274 43606 layer_factory.hpp:77] Creating layer conv_4/depthwise/relu
I0618 16:07:32.114284 43606 net.cpp:100] Creating Layer conv_4/depthwise/relu
I0618 16:07:32.114287 43606 net.cpp:434] conv_4/depthwise/relu <- conv_4/depthwise
I0618 16:07:32.114293 43606 net.cpp:395] conv_4/depthwise/relu -> conv_4/depthwise (in-place)
I0618 16:07:32.114706 43606 net.cpp:150] Setting up conv_4/depthwise/relu
I0618 16:07:32.114718 43606 net.cpp:157] Top shape: 1 192 38 38 (277248)
I0618 16:07:32.114722 43606 net.cpp:165] Memory required for data: 71993952
I0618 16:07:32.114729 43606 layer_factory.hpp:77] Creating layer conv_4/project
I0618 16:07:32.114747 43606 net.cpp:100] Creating Layer conv_4/project
I0618 16:07:32.114753 43606 net.cpp:434] conv_4/project <- conv_4/depthwise
I0618 16:07:32.114763 43606 net.cpp:408] conv_4/project -> conv_4/project
I0618 16:07:32.115628 43606 net.cpp:150] Setting up conv_4/project
I0618 16:07:32.115640 43606 net.cpp:157] Top shape: 1 32 38 38 (46208)
I0618 16:07:32.115645 43606 net.cpp:165] Memory required for data: 72178784
I0618 16:07:32.115654 43606 layer_factory.hpp:77] Creating layer conv_4/sum
I0618 16:07:32.115665 43606 net.cpp:100] Creating Layer conv_4/sum
I0618 16:07:32.115671 43606 net.cpp:434] conv_4/sum <- conv_3/project_conv_3/project_0_split_1
I0618 16:07:32.115679 43606 net.cpp:434] conv_4/sum <- conv_4/project
I0618 16:07:32.115685 43606 net.cpp:408] conv_4/sum -> conv_4
I0618 16:07:32.115697 43606 net.cpp:150] Setting up conv_4/sum
I0618 16:07:32.115703 43606 net.cpp:157] Top shape: 1 32 38 38 (46208)
I0618 16:07:32.115708 43606 net.cpp:165] Memory required for data: 72363616
I0618 16:07:32.115713 43606 layer_factory.hpp:77] Creating layer conv_4_conv_4/sum_0_split
I0618 16:07:32.115720 43606 net.cpp:100] Creating Layer conv_4_conv_4/sum_0_split
I0618 16:07:32.115725 43606 net.cpp:434] conv_4_conv_4/sum_0_split <- conv_4
I0618 16:07:32.115734 43606 net.cpp:408] conv_4_conv_4/sum_0_split -> conv_4_conv_4/sum_0_split_0
I0618 16:07:32.115743 43606 net.cpp:408] conv_4_conv_4/sum_0_split -> conv_4_conv_4/sum_0_split_1
I0618 16:07:32.115753 43606 net.cpp:150] Setting up conv_4_conv_4/sum_0_split
I0618 16:07:32.115759 43606 net.cpp:157] Top shape: 1 32 38 38 (46208)
I0618 16:07:32.115764 43606 net.cpp:157] Top shape: 1 32 38 38 (46208)
I0618 16:07:32.115769 43606 net.cpp:165] Memory required for data: 72733280
I0618 16:07:32.115775 43606 layer_factory.hpp:77] Creating layer conv_5/expand
I0618 16:07:32.115794 43606 net.cpp:100] Creating Layer conv_5/expand
I0618 16:07:32.115799 43606 net.cpp:434] conv_5/expand <- conv_4_conv_4/sum_0_split_0
I0618 16:07:32.115808 43606 net.cpp:408] conv_5/expand -> conv_5/expand
I0618 16:07:32.116652 43606 net.cpp:150] Setting up conv_5/expand
I0618 16:07:32.116662 43606 net.cpp:157] Top shape: 1 192 38 38 (277248)
I0618 16:07:32.116667 43606 net.cpp:165] Memory required for data: 73842272
I0618 16:07:32.116677 43606 layer_factory.hpp:77] Creating layer conv_5/expand/relu
I0618 16:07:32.116685 43606 net.cpp:100] Creating Layer conv_5/expand/relu
I0618 16:07:32.116693 43606 net.cpp:434] conv_5/expand/relu <- conv_5/expand
I0618 16:07:32.116698 43606 net.cpp:395] conv_5/expand/relu -> conv_5/expand (in-place)
I0618 16:07:32.116886 43606 net.cpp:150] Setting up conv_5/expand/relu
I0618 16:07:32.116892 43606 net.cpp:157] Top shape: 1 192 38 38 (277248)
I0618 16:07:32.116896 43606 net.cpp:165] Memory required for data: 74951264
I0618 16:07:32.116902 43606 layer_factory.hpp:77] Creating layer conv_5/depthwise
I0618 16:07:32.116915 43606 net.cpp:100] Creating Layer conv_5/depthwise
I0618 16:07:32.116920 43606 net.cpp:434] conv_5/depthwise <- conv_5/expand
I0618 16:07:32.116930 43606 net.cpp:408] conv_5/depthwise -> conv_5/depthwise
I0618 16:07:32.262675 43606 net.cpp:150] Setting up conv_5/depthwise
I0618 16:07:32.262698 43606 net.cpp:157] Top shape: 1 192 38 38 (277248)
I0618 16:07:32.262701 43606 net.cpp:165] Memory required for data: 76060256
I0618 16:07:32.262717 43606 layer_factory.hpp:77] Creating layer conv_5/depthwise/relu
I0618 16:07:32.262727 43606 net.cpp:100] Creating Layer conv_5/depthwise/relu
I0618 16:07:32.262732 43606 net.cpp:434] conv_5/depthwise/relu <- conv_5/depthwise
I0618 16:07:32.262739 43606 net.cpp:395] conv_5/depthwise/relu -> conv_5/depthwise (in-place)
I0618 16:07:32.262974 43606 net.cpp:150] Setting up conv_5/depthwise/relu
I0618 16:07:32.262981 43606 net.cpp:157] Top shape: 1 192 38 38 (277248)
I0618 16:07:32.262984 43606 net.cpp:165] Memory required for data: 77169248
I0618 16:07:32.262989 43606 layer_factory.hpp:77] Creating layer conv_5/project
I0618 16:07:32.263005 43606 net.cpp:100] Creating Layer conv_5/project
I0618 16:07:32.263010 43606 net.cpp:434] conv_5/project <- conv_5/depthwise
I0618 16:07:32.263020 43606 net.cpp:408] conv_5/project -> conv_5/project
I0618 16:07:32.263936 43606 net.cpp:150] Setting up conv_5/project
I0618 16:07:32.263948 43606 net.cpp:157] Top shape: 1 32 38 38 (46208)
I0618 16:07:32.263953 43606 net.cpp:165] Memory required for data: 77354080
I0618 16:07:32.263962 43606 layer_factory.hpp:77] Creating layer conv_5/sum
I0618 16:07:32.263973 43606 net.cpp:100] Creating Layer conv_5/sum
I0618 16:07:32.263979 43606 net.cpp:434] conv_5/sum <- conv_4_conv_4/sum_0_split_1
I0618 16:07:32.263986 43606 net.cpp:434] conv_5/sum <- conv_5/project
I0618 16:07:32.263993 43606 net.cpp:408] conv_5/sum -> conv_5
I0618 16:07:32.264005 43606 net.cpp:150] Setting up conv_5/sum
I0618 16:07:32.264012 43606 net.cpp:157] Top shape: 1 32 38 38 (46208)
I0618 16:07:32.264017 43606 net.cpp:165] Memory required for data: 77538912
I0618 16:07:32.264022 43606 layer_factory.hpp:77] Creating layer conv_6/expand
I0618 16:07:32.264036 43606 net.cpp:100] Creating Layer conv_6/expand
I0618 16:07:32.264042 43606 net.cpp:434] conv_6/expand <- conv_5
I0618 16:07:32.264050 43606 net.cpp:408] conv_6/expand -> conv_6/expand
I0618 16:07:32.264948 43606 net.cpp:150] Setting up conv_6/expand
I0618 16:07:32.264961 43606 net.cpp:157] Top shape: 1 192 38 38 (277248)
I0618 16:07:32.264964 43606 net.cpp:165] Memory required for data: 78647904
I0618 16:07:32.264973 43606 layer_factory.hpp:77] Creating layer conv_6/expand/relu
I0618 16:07:32.264983 43606 net.cpp:100] Creating Layer conv_6/expand/relu
I0618 16:07:32.264989 43606 net.cpp:434] conv_6/expand/relu <- conv_6/expand
I0618 16:07:32.264995 43606 net.cpp:395] conv_6/expand/relu -> conv_6/expand (in-place)
I0618 16:07:32.265442 43606 net.cpp:150] Setting up conv_6/expand/relu
I0618 16:07:32.265455 43606 net.cpp:157] Top shape: 1 192 38 38 (277248)
I0618 16:07:32.265460 43606 net.cpp:165] Memory required for data: 79756896
I0618 16:07:32.265465 43606 layer_factory.hpp:77] Creating layer conv_6/depthwise
I0618 16:07:32.265480 43606 net.cpp:100] Creating Layer conv_6/depthwise
I0618 16:07:32.265486 43606 net.cpp:434] conv_6/depthwise <- conv_6/expand
I0618 16:07:32.265496 43606 net.cpp:408] conv_6/depthwise -> conv_6/depthwise
I0618 16:07:32.416790 43606 net.cpp:150] Setting up conv_6/depthwise
I0618 16:07:32.416812 43606 net.cpp:157] Top shape: 1 192 19 19 (69312)
I0618 16:07:32.416815 43606 net.cpp:165] Memory required for data: 80034144
I0618 16:07:32.416826 43606 layer_factory.hpp:77] Creating layer conv_6/depthwise/relu
I0618 16:07:32.416834 43606 net.cpp:100] Creating Layer conv_6/depthwise/relu
I0618 16:07:32.416838 43606 net.cpp:434] conv_6/depthwise/relu <- conv_6/depthwise
I0618 16:07:32.416846 43606 net.cpp:395] conv_6/depthwise/relu -> conv_6/depthwise (in-place)
I0618 16:07:32.417052 43606 net.cpp:150] Setting up conv_6/depthwise/relu
I0618 16:07:32.417058 43606 net.cpp:157] Top shape: 1 192 19 19 (69312)
I0618 16:07:32.417062 43606 net.cpp:165] Memory required for data: 80311392
I0618 16:07:32.417064 43606 layer_factory.hpp:77] Creating layer conv_6/project
I0618 16:07:32.417080 43606 net.cpp:100] Creating Layer conv_6/project
I0618 16:07:32.417085 43606 net.cpp:434] conv_6/project <- conv_6/depthwise
I0618 16:07:32.417096 43606 net.cpp:408] conv_6/project -> conv_6/project
I0618 16:07:32.418278 43606 net.cpp:150] Setting up conv_6/project
I0618 16:07:32.418292 43606 net.cpp:157] Top shape: 1 64 19 19 (23104)
I0618 16:07:32.418294 43606 net.cpp:165] Memory required for data: 80403808
I0618 16:07:32.418300 43606 layer_factory.hpp:77] Creating layer conv_6/project_conv_6/project_0_split
I0618 16:07:32.418308 43606 net.cpp:100] Creating Layer conv_6/project_conv_6/project_0_split
I0618 16:07:32.418311 43606 net.cpp:434] conv_6/project_conv_6/project_0_split <- conv_6/project
I0618 16:07:32.418316 43606 net.cpp:408] conv_6/project_conv_6/project_0_split -> conv_6/project_conv_6/project_0_split_0
I0618 16:07:32.418323 43606 net.cpp:408] conv_6/project_conv_6/project_0_split -> conv_6/project_conv_6/project_0_split_1
I0618 16:07:32.418329 43606 net.cpp:150] Setting up conv_6/project_conv_6/project_0_split
I0618 16:07:32.418334 43606 net.cpp:157] Top shape: 1 64 19 19 (23104)
I0618 16:07:32.418340 43606 net.cpp:157] Top shape: 1 64 19 19 (23104)
I0618 16:07:32.418342 43606 net.cpp:165] Memory required for data: 80588640
I0618 16:07:32.418347 43606 layer_factory.hpp:77] Creating layer conv_7/expand
I0618 16:07:32.418360 43606 net.cpp:100] Creating Layer conv_7/expand
I0618 16:07:32.418365 43606 net.cpp:434] conv_7/expand <- conv_6/project_conv_6/project_0_split_0
I0618 16:07:32.418373 43606 net.cpp:408] conv_7/expand -> conv_7/expand
I0618 16:07:32.419476 43606 net.cpp:150] Setting up conv_7/expand
I0618 16:07:32.419487 43606 net.cpp:157] Top shape: 1 384 19 19 (138624)
I0618 16:07:32.419489 43606 net.cpp:165] Memory required for data: 81143136
I0618 16:07:32.419495 43606 layer_factory.hpp:77] Creating layer conv_7/expand/relu
I0618 16:07:32.419500 43606 net.cpp:100] Creating Layer conv_7/expand/relu
I0618 16:07:32.419504 43606 net.cpp:434] conv_7/expand/relu <- conv_7/expand
I0618 16:07:32.419509 43606 net.cpp:395] conv_7/expand/relu -> conv_7/expand (in-place)
I0618 16:07:32.419693 43606 net.cpp:150] Setting up conv_7/expand/relu
I0618 16:07:32.419699 43606 net.cpp:157] Top shape: 1 384 19 19 (138624)
I0618 16:07:32.419703 43606 net.cpp:165] Memory required for data: 81697632
I0618 16:07:32.419708 43606 layer_factory.hpp:77] Creating layer conv_7/depthwise
I0618 16:07:32.419720 43606 net.cpp:100] Creating Layer conv_7/depthwise
I0618 16:07:32.419725 43606 net.cpp:434] conv_7/depthwise <- conv_7/expand
I0618 16:07:32.419734 43606 net.cpp:408] conv_7/depthwise -> conv_7/depthwise
I0618 16:07:32.744880 43606 net.cpp:150] Setting up conv_7/depthwise
I0618 16:07:32.744905 43606 net.cpp:157] Top shape: 1 384 19 19 (138624)
I0618 16:07:32.744909 43606 net.cpp:165] Memory required for data: 82252128
I0618 16:07:32.744918 43606 layer_factory.hpp:77] Creating layer conv_7/depthwise/relu
I0618 16:07:32.744928 43606 net.cpp:100] Creating Layer conv_7/depthwise/relu
I0618 16:07:32.744933 43606 net.cpp:434] conv_7/depthwise/relu <- conv_7/depthwise
I0618 16:07:32.744938 43606 net.cpp:395] conv_7/depthwise/relu -> conv_7/depthwise (in-place)
I0618 16:07:32.745450 43606 net.cpp:150] Setting up conv_7/depthwise/relu
I0618 16:07:32.745462 43606 net.cpp:157] Top shape: 1 384 19 19 (138624)
I0618 16:07:32.745468 43606 net.cpp:165] Memory required for data: 82806624
I0618 16:07:32.745474 43606 layer_factory.hpp:77] Creating layer conv_7/project
I0618 16:07:32.745491 43606 net.cpp:100] Creating Layer conv_7/project
I0618 16:07:32.745498 43606 net.cpp:434] conv_7/project <- conv_7/depthwise
I0618 16:07:32.745507 43606 net.cpp:408] conv_7/project -> conv_7/project
I0618 16:07:32.746654 43606 net.cpp:150] Setting up conv_7/project
I0618 16:07:32.746666 43606 net.cpp:157] Top shape: 1 64 19 19 (23104)
I0618 16:07:32.746671 43606 net.cpp:165] Memory required for data: 82899040
I0618 16:07:32.746681 43606 layer_factory.hpp:77] Creating layer conv_7/sum
I0618 16:07:32.746690 43606 net.cpp:100] Creating Layer conv_7/sum
I0618 16:07:32.746697 43606 net.cpp:434] conv_7/sum <- conv_6/project_conv_6/project_0_split_1
I0618 16:07:32.746704 43606 net.cpp:434] conv_7/sum <- conv_7/project
I0618 16:07:32.746712 43606 net.cpp:408] conv_7/sum -> conv_7
I0618 16:07:32.746723 43606 net.cpp:150] Setting up conv_7/sum
I0618 16:07:32.746731 43606 net.cpp:157] Top shape: 1 64 19 19 (23104)
I0618 16:07:32.746736 43606 net.cpp:165] Memory required for data: 82991456
I0618 16:07:32.746739 43606 layer_factory.hpp:77] Creating layer conv_7_conv_7/sum_0_split
I0618 16:07:32.746747 43606 net.cpp:100] Creating Layer conv_7_conv_7/sum_0_split
I0618 16:07:32.746753 43606 net.cpp:434] conv_7_conv_7/sum_0_split <- conv_7
I0618 16:07:32.746760 43606 net.cpp:408] conv_7_conv_7/sum_0_split -> conv_7_conv_7/sum_0_split_0
I0618 16:07:32.746768 43606 net.cpp:408] conv_7_conv_7/sum_0_split -> conv_7_conv_7/sum_0_split_1
I0618 16:07:32.746778 43606 net.cpp:150] Setting up conv_7_conv_7/sum_0_split
I0618 16:07:32.746783 43606 net.cpp:157] Top shape: 1 64 19 19 (23104)
I0618 16:07:32.746790 43606 net.cpp:157] Top shape: 1 64 19 19 (23104)
I0618 16:07:32.746795 43606 net.cpp:165] Memory required for data: 83176288
I0618 16:07:32.746800 43606 layer_factory.hpp:77] Creating layer conv_8/expand
I0618 16:07:32.746810 43606 net.cpp:100] Creating Layer conv_8/expand
I0618 16:07:32.746817 43606 net.cpp:434] conv_8/expand <- conv_7_conv_7/sum_0_split_0
I0618 16:07:32.746826 43606 net.cpp:408] conv_8/expand -> conv_8/expand
I0618 16:07:32.747963 43606 net.cpp:150] Setting up conv_8/expand
I0618 16:07:32.747975 43606 net.cpp:157] Top shape: 1 384 19 19 (138624)
I0618 16:07:32.747977 43606 net.cpp:165] Memory required for data: 83730784
I0618 16:07:32.747982 43606 layer_factory.hpp:77] Creating layer conv_8/expand/relu
I0618 16:07:32.747989 43606 net.cpp:100] Creating Layer conv_8/expand/relu
I0618 16:07:32.747992 43606 net.cpp:434] conv_8/expand/relu <- conv_8/expand
I0618 16:07:32.747997 43606 net.cpp:395] conv_8/expand/relu -> conv_8/expand (in-place)
I0618 16:07:32.748183 43606 net.cpp:150] Setting up conv_8/expand/relu
I0618 16:07:32.748189 43606 net.cpp:157] Top shape: 1 384 19 19 (138624)
I0618 16:07:32.748193 43606 net.cpp:165] Memory required for data: 84285280
I0618 16:07:32.748198 43606 layer_factory.hpp:77] Creating layer conv_8/depthwise
I0618 16:07:32.748211 43606 net.cpp:100] Creating Layer conv_8/depthwise
I0618 16:07:32.748216 43606 net.cpp:434] conv_8/depthwise <- conv_8/expand
I0618 16:07:32.748226 43606 net.cpp:408] conv_8/depthwise -> conv_8/depthwise
F0618 16:07:32.816975 43606 cudnn_conv_layer.cpp:52] Check failed: error == cudaSuccess (2 vs. 0) out of memory
*** Check failure stack trace: ***
Aborted (core dumped)

The loss value doesn't drop once it comes down to about 7.x

Hi, @chuanqi305, thanks for your great work. I have some problems here. I used the scripts to convert the TF model to a Caffe model and got "deploy.caffemodel". After that, I used those weights to fine-tune on my 2-class dataset. The network configurations are all as you provided in "ssdlite/voc"; I changed the layer names and the output channels of the "conf" layers. While training, the learning rate is 0.0001 at the beginning, and the loss drops until it is about 7; the weights don't converge any further. I wonder what's wrong with it? Eager for your reply.

Question: reshaping the Caffe model in dump_tensorflow_weights.py

Is anybody able to explain why, in dump_tensorflow_weights.py:
(1) line 67:
caffe_weights = data.transpose(3, 2, 0, 1)
(2) lines 85-86:
new_weights[:, 0] = tmp[:, 1] * 0.5
new_weights[:, 1] = tmp[:, 0] * 0.5

For (1), I know TensorFlow uses the NHWC format and Caffe uses NCHW, but I cannot see why it transposes with (3, 2, 0, 1).

For (2), I sincerely request help to understand why the new Caffe weights have to be half of the old Caffe weights for BoxPredictor_0/BoxEncodingPredictor/weights.
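
On (1), at least: TensorFlow stores convolution kernels as (H, W, in, out) while Caffe stores them as (out, in, H, W), so transpose(3, 2, 0, 1) reorders the kernel axes; this concerns the weight layout, which is separate from the NHWC/NCHW activation layout. A quick illustration:

import numpy as np

tf_kernel = np.zeros((3, 3, 32, 64))            # (H, W, in, out) in TensorFlow
caffe_kernel = tf_kernel.transpose(3, 2, 0, 1)
print(caffe_kernel.shape)                        # (64, 32, 3, 3) = (out, in, H, W)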

Check failed: fd != -1 (-1 vs. -1) File not found

I1229 15:43:24.557075 11704 solver.cpp:75] Solver scaffolding done.
I1229 15:43:24.561830 11704 caffe.cpp:155] Finetuning from
F1229 15:43:24.561849 11704 io.cpp:63] Check failed: fd != -1 (-1 vs. -1) File not found:
*** Check failure stack trace: ***
@ 0x7fb1c18795cd google::LogMessage::Fail()
@ 0x7fb1c187b433 google::LogMessage::SendToLog()
@ 0x7fb1c187915b google::LogMessage::Flush()
@ 0x7fb1c187be1e google::LogMessageFatal::~LogMessageFatal()
@ 0x7fb1c20c7e4c caffe::ReadProtoFromBinaryFile()
@ 0x7fb1c20e2f26 caffe::ReadNetParamsFromBinaryFileOrDie()
@ 0x7fb1c213d03a caffe::Net<>::CopyTrainedLayersFromBinaryProto()
@ 0x7fb1c213d0de caffe::Net<>::CopyTrainedLayersFrom()
@ 0x40a869 CopyLayers()
@ 0x40bcc4 train()
@ 0x4077e8 main
@ 0x7fb1c000f830 __libc_start_main
@ 0x4080b9 _start
@ (nil) (unknown)
Aborted (core dumped)
I get this error when I run train.sh.
Does anyone know how to solve it?
Please help.

Question about the prototxt

Hi, I am quite interested in your Caffe implementation, but I have one question about your prototxt:

The MobileNetV2 paper mentions: "we introduce a mobile friendly variant of regular SSD. We replace all the regular convolutions with separable convolutions (depthwise followed by 1 × 1 projection) in SSD prediction layers."

However, I don't see any separable convolutions in the prediction layers (in my understanding, the prediction layers are the layers used for location and confidence prediction?).
And even after the last layer (1280 outputs) of MobileNetV2, you still use bottleneck residual blocks to reduce the dimension for SSDLite.

May I know why you built the SSDLite like this?

Many Thanks.

Model inference error when running gen_model with the --no-batchnorm flag

Thanks for this wonderful work first!
I've successfully converted the TensorFlow MobileNetV2-SSD model with the converter, but when I add the --no-batchnorm flag when running gen_model.py, I get errors about mismatches between the .dat files dumped from the TensorFlow graph (mobilenetv2_ssd_coco) and deploy.prototxt. I resolved that by adding some logic when generating the bias filler of deploy.prototxt, but when I run the demo the outputs are all wrong.

The gen_model.py I used is here:

gen_model.py.txt

I wonder if there is any trick when generating deploy.prototxt.
Waiting for a response, thanks!

Fails to compile with Movidius NCS

I followed the instructions to convert the weights from ssd_mobilenet_v2_coco_2018_03_29 to a Caffe model.

When I try to convert using:
mvNCCompile -w deploy.caffemodel -o graph -s 12 deploy.prototxt

I get the error:
[Error 17] Toolkit Error: Internal Error: Could not build graph. Missing link: conv_2

caffemodel and prototxt here:
https://ufile.io/sysdx

If MobileNetV2 is not supported, then where can I get a full 90-class COCO model of the original MobileNet v1?
I tried running the conversion tools against ssd_mobilenet_v1_coco_2017_11_17, but the layer names must not match, because it doesn't dump any data.

Check failed: num_priors_ * num_classes_ == bottom[1]->channels()

I used the command './train.sh' to train on my dataset. The issue is:

F0117 21:34:50.957195 1386 multibox_loss_layer.cpp:141] Check failed: num_priors_ * num_classes_ == bottom[1]->channels() (44091 vs. 41925) Number of priors must match number of confidence predictions.
*** Check failure stack trace: ***
@ 0x7f11efa7d5cd google::LogMessage::Fail()
@ 0x7f11efa7f433 google::LogMessage::SendToLog()
@ 0x7f11efa7d15b google::LogMessage::Flush()
@ 0x7f11efa7fe1e google::LogMessageFatal::~LogMessageFatal()
@ 0x7f11f0323a02 caffe::MultiBoxLossLayer<>::Reshape()
@ 0x7f11f0388933 caffe::Net<>::Init()
@ 0x7f11f038a161 caffe::Net<>::Net()
@ 0x7f11f01be78a caffe::Solver<>::InitTrainNet()
@ 0x7f11f01bfa87 caffe::Solver<>::Init()
@ 0x7f11f01bfe2a caffe::Solver<>::Solver()
@ 0x7f11f03a88e9 caffe::Creator_RMSPropSolver<>()
@ 0x40afd9 train()
@ 0x4077e8 main
@ 0x7f11ee213830 __libc_start_main
@ 0x4080b9 _start
@ (nil) (unknown)
Aborted (core dumped)

num_classes = 23; should I change the last layer, mbox_loss? More details would be appreciated.
Ubuntu 16.04 + CUDA 9.0
Thanks!
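
For what it's worth, the left-hand number is consistent with a class-count mismatch, assuming the usual 300x300 prior-box setup of this repo (19x19, 10x10, 5x5, 3x3, 2x2, 1x1 cells with 3, 6, 6, 6, 6, 6 priors):

priors = 19*19*3 + 10*10*6 + 5*5*6 + 3*3*6 + 2*2*6 + 1*1*6
print(priors)        # 1917
print(priors * 23)   # 44091 = num_priors_ * num_classes_
print(41925 % 1917)  # != 0: the concatenated conf channels do not match
                     # 23 classes, so some conf layers were likely not
                     # regenerated with -c 23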

The mAP after fine-tuning, and training tricks

Hi,
Thanks for your work, firstly. I have some questions about this network.

First, the slice layers used for the [0, 0, 1, 1] padding increase the forward-backward time from 60 ms to 90 ms on my machine. According to the caffe time tool, the increased time is not mainly from the slice layers themselves, but the slice layers make other layers slower. Do you have any experience showing whether this padding affects the mAP significantly? If not, it is not necessary to add the additional layers.

To verify ReLU6, I replaced ReLU with ReLU6 in MobileNetV1-SSD; ReLU6 didn't improve the mAP at all, but reduced it by about 1%. How about the effect of ReLU6 in V2?

The mAP of MobileNetV2-SSDLite on the VOC data can only reach about 20% without a pre-trained model. What mAP do you get after fine-tuning with the transferred pre-trained model, and without a pre-trained model, respectively? It seems very hard to train this network. Could you please share some training tips?

Please add 'rb' in graph_create

Hi chuanqi,
I use Python 3.5 TensorFlow. When I run dump_tensorflow_weights.py, TensorFlow throws an error about reading the frozen graph. The code:
with tf.gfile.FastGFile(graphpath, 'r') as graphfile:
should be:
with tf.gfile.FastGFile(graphpath, 'rb') as graphfile:

dump_tensorflow_weights.py with error

I ran the following command: python3 dump_tensorflow_weights.py
and got this error:
Traceback (most recent call last):
File "dump_tensorflow_weights.py", line 95, in
tmp = caffe_weights.reshape(boxes, -1).copy()
TypeError: 'float' object cannot be interpreted as an integer

I fixed it by changing the code at line 71

if output_name.find('BoxEncodingPredictor') != -1:
boxes = caffe_weights.shape[0] / 4
elif output_name.find('ClassPredictor') != -1:
boxes = caffe_weights.shape[0] / 91

to

if output_name.find('BoxEncodingPredictor') != -1:
boxes = caffe_weights.shape[0] // 4
elif output_name.find('ClassPredictor') != -1:
boxes = caffe_weights.shape[0] // 91

and I wonder whether this is OK?
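
For reference, the two operators really do differ on Python 3, which is why the change is needed (reshape requires an int):

print(276480 / 4)    # 69120.0 -> float on Python 3, breaks reshape
print(276480 // 4)   # 69120   -> floor division keeps it an int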

demo_caffe in c++ code instead of python

Hi all,

I am trying to run the converted Caffe model from C++ code, but I cannot get the same results as demo_caffe.py.

Has anyone tried this? Thanks.

The following is my code, modified from ssd_detect.cpp:

// This is a demo code for using a SSD model to do detection.
// The code is modified from examples/cpp_classification/classification.cpp.
// Usage:
// ssd_detect [FLAGS] model_file weights_file list_file
//
// where model_file is the .prototxt file defining the network architecture, and
// weights_file is the .caffemodel file containing the network parameters, and
// list_file contains a list of image files with the format as follows:
// folder/img1.JPEG
// folder/img2.JPEG
// list_file can also contain a list of video files with the format as follows:
// folder/video1.mp4
// folder/video2.mp4
//
#include <caffe/caffe.hpp>
#ifdef USE_OPENCV
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
// using namespace cv;
#endif // USE_OPENCV
#include <algorithm>
#include <iomanip>
#include <iosfwd>
#include <memory>
#include <string>
#include <utility>
#include <vector>
// using namespace std;

using namespace caffe; // NOLINT(build/namespaces)

class Detector {
public:
Detector(const string& model_file,
const string& weights_file);

std::vector<vector<float> > Detect(const cv::Mat& img);

private:
void SetMean(const string& mean_file, const string& mean_value);

void WrapInputLayer(std::vector<cv::Mat>* input_channels);

void Preprocess(const cv::Mat& img,
std::vector<cv::Mat>* input_channels);

private:
shared_ptr<Net<float> > net_;
cv::Size input_geometry_;
int num_channels_;
// cv::Mat mean_;
};

Detector::Detector(const string& model_file,
const string& weights_file) {
#ifdef CPU_ONLY
Caffe::set_mode(Caffe::CPU);
#else
Caffe::set_mode(Caffe::GPU);
#endif

/* Load the network. */
net_.reset(new Net<float>(model_file, TEST));
net_->CopyTrainedLayersFrom(weights_file);

CHECK_EQ(net_->num_inputs(), 1) << "Network should have exactly one input.";
CHECK_EQ(net_->num_outputs(), 1) << "Network should have exactly one output.";

Blob<float>* input_layer = net_->input_blobs()[0];
num_channels_ = input_layer->channels();
CHECK(num_channels_ == 3 || num_channels_ == 1)
<< "Input layer should have 1 or 3 channels.";
input_geometry_ = cv::Size(input_layer->width(), input_layer->height());

}

std::vector<vector<float> > Detector::Detect(const cv::Mat& img) {
Blob<float>* input_layer = net_->input_blobs()[0];
input_layer->Reshape(1, num_channels_,
input_geometry_.height, input_geometry_.width);
/* Forward dimension change to all layers. */
net_->Reshape();

std::vector<cv::Mat> input_channels;
WrapInputLayer(&input_channels);

Preprocess(img, &input_channels);

net_->Forward();

/* Copy the output layer to a std::vector */
Blob<float>* result_blob = net_->output_blobs()[0];
const float* result = result_blob->cpu_data();
const int num_det = result_blob->height();
vector<vector<float> > detections;
for (int k = 0; k < num_det; ++k) {
if (result[0] == -1) {
// Skip invalid detection.
result += 7;
continue;
}
vector<float> detection(result, result + 7);
detections.push_back(detection);
result += 7;
}
return detections;
}

/* Wrap the input layer of the network in separate cv::Mat objects
 * (one per channel). This way we save one memcpy operation and we
 * don't need to rely on cudaMemcpy2D. The last preprocessing
 * operation will write the separate channels directly to the input
 * layer. */
void Detector::WrapInputLayer(std::vector<cv::Mat>* input_channels) {
Blob<float>* input_layer = net_->input_blobs()[0];

int width = input_layer->width();
int height = input_layer->height();
float* input_data = input_layer->mutable_cpu_data();
for (int i = 0; i < input_layer->channels(); ++i) {
cv::Mat channel(height, width, CV_32FC1, input_data);
input_channels->push_back(channel);
input_data += width * height;
}
}

void Detector::Preprocess(const cv::Mat& img,
std::vector<cv::Mat>* input_channels) {
/* Convert the input image to the input image format of the network. */
cv::Mat sample;
if (img.channels() == 3 && num_channels_ == 1)
cv::cvtColor(img, sample, cv::COLOR_BGR2GRAY);
else if (img.channels() == 4 && num_channels_ == 1)
cv::cvtColor(img, sample, cv::COLOR_BGRA2GRAY);
else if (img.channels() == 4 && num_channels_ == 3)
cv::cvtColor(img, sample, cv::COLOR_BGRA2BGR);
else if (img.channels() == 1 && num_channels_ == 3)
cv::cvtColor(img, sample, cv::COLOR_GRAY2BGR);
else
sample = img;

cv::Mat sample_resized;
if (sample.size() != input_geometry_)
cv::resize(sample, sample_resized, input_geometry_);
else
sample_resized = sample;

// cv::cvtColor(sample_resized, sample_resized, CV_BGR2RGB);

cv::Mat sample_float;
if (num_channels_ == 3)
sample_resized.convertTo(sample_float, CV_32FC3);
else
sample_resized.convertTo(sample_float, CV_32FC1);

// cv::Mat sample_normalized;
// cv::subtract(sample_float, mean_, sample_normalized);
//
// normalize image

float *im_data = (float*)sample_float.data;
for(int i = 0; i < num_channels_  * sample_float.rows * sample_float.cols; i ++){
    im_data[i] -= 127.5;
    im_data[i] /= 127.5;
}

/* This operation will write the separate BGR planes directly to the
 * input layer of the network because it is wrapped by the cv::Mat
 * objects in input_channels. */
cv::split(sample_float, *input_channels);

CHECK(reinterpret_cast<float*>(input_channels->at(0).data)
== net_->input_blobs()[0]->cpu_data())
<< "Input channels are not wrapping the input layer of the network.";
}

DEFINE_string(mean_file, "",
"The mean file used to subtract from the input image.");
DEFINE_string(mean_value, "127.5", // 104,117,123
"If specified, can be one value or can be same as image channels"
" - would subtract from the corresponding channel). Separated by ','."
"Either mean_file or mean_value should be provided, not both.");
DEFINE_string(file_type, "image",
"The file type in the list_file. Currently support image and video.");
DEFINE_string(out_file, "",
"If provided, store the detection results in the out_file.");
DEFINE_double(confidence_threshold, 0.01,
"Only store detections with score higher than the threshold.");

static std::string SplitFilename (const std::string& str)
{
size_t found;
// std::cout << "Splitting: " << str << std::endl;
found=str.find_last_of ( "/\\" );
// string rawname = fullname.substr(0, lastindex);
// std::cout << " folder: " << str.substr(0,found) << std::endl;
// std::cout << " file: " << str.substr(found+1) << std::endl;
std::string fname = str.substr ( found+1);
size_t lastindex = fname.find_last_of ( "." );
return fname.substr(0, lastindex);
}

static std::string gen_name(std::string dir, string fname){
return dir + "/" + fname + "_mnet2_cf.jpg";
}

int main(int argc, char** argv)
{
const string& model_file = argv[1];
const string& weights_file = argv[2];
const string& in_file = argv[3];

string fname = SplitFilename(in_file);    
const string& out_dir = argv[4];
string out_file = gen_name(out_dir, fname); 
const float confidence_threshold = FLAGS_confidence_threshold;

// Initialize the network.
Detector detector ( model_file, weights_file );

cv::Mat img = cv::imread ( in_file, -1 );
CHECK ( !img.empty() ) << "Unable to decode image " << in_file;
std::vector<vector<float> > detections = detector.Detect ( img );

/* Print the detection results. */
for ( int i = 0; i < detections.size(); ++i ) {
    const vector<float>& d = detections[i];
    // Detection format: [image_id, label, score, xmin, ymin, xmax, ymax].
    CHECK_EQ ( d.size(), 7 );
    const float score = d[2];
    if ( score >= confidence_threshold ) {
        std::cout << in_file << " ";
        std::cout << static_cast<int> ( d[1] ) << " ";
        std::cout << score << " ";
        std::cout << static_cast<int> ( d[3] * img.cols ) << " ";
        std::cout << static_cast<int> ( d[4] * img.rows ) << " ";
        std::cout << static_cast<int> ( d[5] * img.cols ) << " ";
        std::cout << static_cast<int> ( d[6] * img.rows ) << std::endl;

        int x = int ( d[3] * img.cols );
        int y = int ( d[4] * img.rows );
        int w = int ( d[5] * img.cols ) - x + 1;
        int h = int ( d[6] * img.rows ) - y + 1;  // d[6] is ymax, so scale by rows, not cols

        cv::Rect rt ( x, y, w, h );
        cv::Scalar clr = cv::Scalar ( 0, 255, 0 );
        cv::rectangle ( img, rt, clr );
        cv::putText ( img, std::to_string ( d[1] ), cv::Point ( x-10,y-10 ), cv::FONT_HERSHEY_SIMPLEX, 1, clr, 2 );
    }
}
imwrite (out_file,img);
return 0;

}
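
For comparison, the preprocessing that Detector::Preprocess above implements, written as the equivalent Python; diffing this tensor against what demo_caffe.py actually feeds the net is a good first check, and note the commented-out BGR2RGB conversion in the C++, since channel order is a common source of mismatches:

import cv2
import numpy as np

img = cv2.imread('test.jpg')                      # BGR, uint8
resized = cv2.resize(img, (300, 300)).astype(np.float32)
normalized = (resized - 127.5) / 127.5            # same as the C++ loop
blob = normalized.transpose(2, 0, 1)[np.newaxis]  # HWC -> NCHW
print(blob.shape)                                 # (1, 3, 300, 300)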


Is test.prototxt missing?

Hello! I'm quite interested in your project, and I plan to train the model on the VOC datasets.

But I wonder where test.prototxt is. Is it missing? Or should I write the prototxt the same way as train.prototxt? For now I wrote a test.prototxt and started to train the model with it, but the output shows the loss stays high and hardly falls.

Also, I tested the deploy.caffemodel with demo_caffe_voc.py, but I got a strange result (the boxes are shifted). What's wrong with it?

Many Thanks!
