
yeephycho / tensorflow-face-detection

760 stars · 30 watchers · 265 forks · 20.31 MB

A MobileNet-SSD based face detector, powered by the TensorFlow Object Detection API and trained on the WIDER FACE dataset.

License: Apache License 2.0

Python 100.00%
face detection tensorflow ssd mobilenet object-detection widerface

tensorflow-face-detection's People

Contributors

0xekez, jeromebruzaud, katsunoriwa, xiangyann, yeephycho


tensorflow-face-detection's Issues

Config needed

Hi,
I am training the same model on WIDER FACE with the default COCO config, but running eval.py gives the warning:

no ground truth in the following classes [1,2]

I am using the label map provided by your repo. I have gone through several issues about this problem but still get the same error. Could you post the config you used so I can debug this?

Why frame_num = 1490 ?

Hi, I was going through your source code but couldn't understand why you set frame_num = 1490 and run through only that many frames in inference_video_face.py. Can you please clarify?

tensorflow model

Hi, can you please share your TensorFlow model? Your model is quite unusual in that it can accept images with a non-1:1 aspect ratio, whereas MobileNet normally only takes 1:1 inputs. I'm wondering how you designed and trained it. I'd really appreciate it if you could share!

How to retrain the model

Hello, can the model provided in this project be retrained? Or is there training code available so I can train it myself?

2 classes defined instead of just one

I would like to ask about the two classes that you have defined. Is there a specific reason for that?

I have tried to train a one-class detector for persons, and it seemed to me that using multiple classes sometimes gives a precision boost for some reason. Is that the case here? As far as I can see, you don't actually use the background class at all. Are there any background-class members that the model could detect, for example?

Question about label map

Hello,

In the label map file, there is a background class included. I'm wondering if you found any difference with or without this? I.e. will the training work ok if there's only one face class defined?

Thanks,
Kevin
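
For what it's worth, in the TF Object Detection API id 0 is implicitly reserved for background, so a label map can contain just the face class. A minimal sketch of such a file (my own, not necessarily the exact label map shipped in this repo):

    item {
      id: 1
      name: 'face'
    }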

Error when using python-opencv from apt on Ubuntu 16.04.4

This post is for informational purposes, as I encountered the problem earlier on (and solved it).
The cv2 (OpenCV) module is available on Ubuntu 16.04.4 in two ways (pip3 not tested):
python-opencv from apt
opencv-python from pip

The pip version should be used instead of the apt version, as the latter just generates a ~5.6 KB AVI file that contains nothing, probably related to Issue 5.

Problem with GPU memory usage

Hi!
I have a problem with GPU memory usage when running python inference_video_face.py without any changes.

I'm using tensorflow 1.10.0

Currently, it shows 6607 MB of GPU memory usage for a single thread.

If I limit memory usage with
config.gpu_options.per_process_gpu_memory_fraction = 0.02
it starts to use only 559 MB of GPU memory.

But when running, it first shows a lot of warnings:

2018-11-12 17:21:11.179219: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.02GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2018-11-12 17:21:11.180674: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.05GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
...

But it runs okay and with the same performance.

Do you know what the reason for this memory consumption might be, and how to limit it when running alongside multiple other models in a more production-ready way than just setting per_process_gpu_memory_fraction?
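
For anyone else hitting this: TF 1.x reserves nearly all GPU memory by default. A minimal sketch of the two usual ways to limit it, assuming the frozen graph has been loaded into detection_graph as in this repo's scripts (a sketch, not a verified patch to inference_video_face.py):

    import tensorflow as tf

    config = tf.ConfigProto()
    # Grow allocations on demand instead of grabbing the whole GPU up front.
    config.gpu_options.allow_growth = True
    # Or hard-cap the fraction of GPU memory this process may use:
    # config.gpu_options.per_process_gpu_memory_fraction = 0.1

    with tf.Session(graph=detection_graph, config=config) as sess:
        pass  # run inference as usual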

About the results?

Could you report the final results on FDDB or another face detection benchmark? What was the final loss? Thank you very much!

Pipeline config file

Hi! Could you please provide the pipeline config file that was used for training? It is possible to run this model directly in OpenCV, but the anchors and other parameters need to be known to convert the model into OpenCV's format.

Convert to tflite

I am trying to convert the .pb file to a .tflite file so that I can deploy the model to Android.

But when I try to use tflite_convert to do the conversion, it fails. I then followed the instructions here, which indicate that a ckpt file is needed to generate the .pb and then the .tflite (but this repo only offers the .pb file).

I also tried to use the code here to do the 'pb to ckpt' transform, but when I try to use the generated ckpt file to generate a .pb file, I fail again.

So could you offer the ckpt file, or indicate how to clean the WIDER FACE dataset to train the model and get the ckpt file?

Thanks.

./inference_usbCam_face.py 0 issue

Hi, when I run ./inference_usbCam_face.py 0, there is an issue as below:

File "./inference_usbCam_face.py", line 108, in <module>
  (boxes, scores, classes, num_detections) = tDetector.run(image)
File "./inference_usbCam_face.py", line 73, in run
  feed_dict={image_tensor: image_np_expanded})
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 900, in run
  run_metadata_ptr)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1058, in _run
  raise RuntimeError('Attempted to use a closed Session.')
RuntimeError: Attempted to use a closed Session.
Could you help me to check the issue?

No Bounding Boxes

@yeephycho
I trained the entire model again using legacy/train.py in the TF Object Detection API. When I exported the frozen model to .pb and tested it with your inference_video.py / USB script, I can't find any bounding boxes.
-> the images are all of different sizes
-> the dataset is our own
-> converted to XML and then to tfrecords
No bounding boxes at all. Please help me out!
I have trained it on MobileNetV2-SSD.
Here is my pipeline below:

pipeline.txt

Multiple cameras

Hi! Can you show me how I can run this with multiple video sources, using multiple cv2.VideoCapture() calls?
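
A rough sketch of one way to do this: open several captures and feed them to a single detector in a round-robin loop. Here tDetector is assumed to be an instance of the detector class used in inference_usbCam_face.py, and the camera indices are placeholders:

    import cv2

    caps = [cv2.VideoCapture(0), cv2.VideoCapture(1)]  # one per video source

    while True:
        for i, cap in enumerate(caps):
            ok, frame = cap.read()
            if not ok:
                continue
            # A single detector instance can serve all streams sequentially.
            boxes, scores, classes, num_detections = tDetector.run(frame)
            cv2.imshow("camera %d" % i, frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    for cap in caps:
        cap.release()
    cv2.destroyAllWindows()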

Bad performance using mobilenet

Hi,
Recently I also trained a MobileNet-based object detection model, but its performance is much worse than yours. Could you share the training details you used for your model? Thanks.

Batch mode

Is it possible to feed a batch of images to the model to improve time performance?
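
In standard Object Detection API exports the image_tensor placeholder has shape [None, None, None, 3], so stacking same-sized frames into one feed generally works. A hedged sketch, assuming sess and detection_graph are already set up as in this repo's scripts (tensor names are the usual OD-API defaults, not verified against this exact .pb):

    import numpy as np

    image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
    boxes_t = detection_graph.get_tensor_by_name('detection_boxes:0')
    scores_t = detection_graph.get_tensor_by_name('detection_scores:0')
    classes_t = detection_graph.get_tensor_by_name('detection_classes:0')
    num_t = detection_graph.get_tensor_by_name('num_detections:0')

    # frames: a list of same-sized HxWx3 uint8 images.
    batch = np.stack(frames, axis=0)
    boxes, scores, classes, num = sess.run(
        [boxes_t, scores_t, classes_t, num_t],
        feed_dict={image_tensor: batch})
    # Each output's leading dimension is the batch index.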

Error: tensorflow.python.framework.errors_impl.NotFoundError: No attr named 'index_type' in NodeDef:

While converting frozen_inference_graph_face.pb into a TensorRT-optimized graph, I am running into this issue:

Error

2018-05-04 10:20:52.305790: I tensorflow/core/grappler/devices.cc:51] Number of eligible GPUs (core count >= 8): 1
Traceback (most recent call last):
File "test_TF_TensorRT_Prediction_GPU.py", line 189, in
precision_mode="FP32") # Get optimized graph
File "/local/Anaconda3/lib/python3.6/site-packages/tensorflow/contrib/tensorrt/python/trt_convert.py", line 115, in create_inference_graph
int(msg[0]))
tensorflow.python.framework.errors_impl.NotFoundError: No attr named 'index_type' in NodeDef:
[[Node: MultipleGridAnchorGenerator/Meshgrid_17/ExpandedShape_1/ones = Fill[T=DT_INT32](MultipleGridAnchorGenerator/Meshgrid_17/ExpandedShape_1/Reshape, MultipleGridAnchorGenerator/Meshgrid_17/ExpandedShape_1/ones/Const)]]
[[Node: MultipleGridAnchorGenerator/Meshgrid_17/ExpandedShape_1/ones = Fill[T=DT_INT32](MultipleGridAnchorGenerator/Meshgrid_17/ExpandedShape_1/Reshape, MultipleGridAnchorGenerator/Meshgrid_17/ExpandedShape_1/ones/Const)]]
(tensorflow_gpu_r1.7) [rbakkann@rbakkann-deep-deb8-64:/local/del/tftrt]

tensorflow-object-detection-api face co-ordinates

boxes = self.detection_graph.get_tensor_by_name('detection_boxes:0') gives me normalized coordinates (e.g. 0.1734, 0.8956, 0.34567, 0.12564) of the detected face, but I want the un-normalized (original pixel) coordinates, e.g. (245, 475, 300, 600).
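
The Object Detection API returns boxes as [ymin, xmin, ymax, xmax] normalized to [0, 1], so multiplying by the image height and width recovers pixel coordinates. A small sketch (helper name is my own):

    import numpy as np

    def to_pixel_coords(boxes, image_height, image_width):
        """Convert normalized [ymin, xmin, ymax, xmax] boxes to pixel values."""
        scale = np.array([image_height, image_width, image_height, image_width])
        return (boxes * scale).astype(np.int32)

    # e.g. for a 720x1280 frame:
    # pixel_boxes = to_pixel_coords(boxes[0], 720, 1280)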

How did you modify the anchor settings?

The default SSD anchor settings are:

      ssd_anchor_generator {
        num_layers: 6
        min_scale: 0.2
        max_scale: 0.95
        aspect_ratios: 1.0
        aspect_ratios: 2.0
        aspect_ratios: 0.5
        aspect_ratios: 3.0
        aspect_ratios: 0.3333
      }

Could you tell me how you modified them to fit face detection?

Learning rate details

Can you please share the following three parameter values used for training:

initial_learning_rate
decay_steps
decay_factor
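
For reference, in an Object Detection API pipeline these live under train_config in an exponential-decay block. The numbers below are the stock ssd_mobilenet_v1_coco values (and the block is abridged to the learning-rate part), shown only as a format example rather than the values actually used for this model:

      optimizer {
        rms_prop_optimizer {
          learning_rate {
            exponential_decay_learning_rate {
              initial_learning_rate: 0.004
              decay_steps: 800720
              decay_factor: 0.95
            }
          }
        }
      }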

Can't (?) show frames

The output video for me is blank.

If I do cv2.imshow("face", image) it won't show any windows.
If I do cv2.imwrite("frames/%s.jpg" % time.time(), image) I can see that frames are being processed; the inference time cost is ~0.42 s.

Tested on an Ubuntu 16 machine with no GPU.
Is a GPU a requirement?

fine tuning for hyperparameters

Could you please provide the hyperparameters you set for training (batch size, epochs, learning rate, momentum, etc.)? And the data augmentation you used?

Thanks in advance.

Disable console output

Hi there, is it possible to disable the console output when running the detection?

...
Optimizing fused batch norm node name: "FeatureExtractor/MobilenetV1/Conv2d_13_pointwise_2_Conv2d_5_3x3_s2_128/BatchNorm/FusedBatchNorm"
op: "FusedBatchNorm"
input: "FeatureExtractor/MobilenetV1/Conv2d_13_pointwise_2_Conv2d_5_3x3_s2_128/convolution"
input: "FeatureExtractor/MobilenetV1/Conv2d_13_pointwise_2_Conv2d_5_3x3_s2_128/BatchNorm/gamma"
input: "FeatureExtractor/MobilenetV1/Conv2d_13_pointwise_2_Conv2d_5_3x3_s2_128/BatchNorm/beta"
input: "FeatureExtractor/MobilenetV1/Conv2d_13_pointwise_2_Conv2d_5_3x3_s2_128/BatchNorm/moving_mean"
input: "FeatureExtractor/MobilenetV1/Conv2d_13_pointwise_2_Conv2d_5_3x3_s2_128/BatchNorm/moving_variance"
device: "/job:localhost/replica:0/task:0/device:CPU:0"
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}
attr {
  key: "data_format"
  value {
    s: "NHWC"
  }
}
attr {
  key: "epsilon"
  value {
    f: 0.001
  }
}
attr {
  key: "is_training"
  value {
    b: false
  }
}
...

I was able to disable TensorFlow's logging messages; however, I'm unable to disable the above output.
It doesn't work even if I disable stdout and stderr entirely.

Any suggestions?

Checkpoint file

Can you provide the checkpoint folder (including the meta file)? It is common now in TensorFlow to import meta graphs.

Error occurs while converting GraphDef to .tflite. ConverterError: TOCO failed. See console for info.

I am trying to convert the GraphDef model to tflite using tf.lite.TFLiteConverter.from_frozen_graph().
I checked every step as mentioned in the documentation.

ConverterError Traceback (most recent call last)
in
9 converter = tf.lite.TFLiteConverter.from_frozen_graph(graph_def_file, input_arrays, output_arrays, input_shape)
10 #converter.target_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,tf.lite.OpsSet.SELECT_TF_OPS]
---> 11 tflite_model = converter.convert()
12 open("E:/training_models/tensorflow_face_detection/model/detect_face.tflite", "wb").write(tflite_model)

~\AppData\Local\Continuum\anaconda3\envs\tensorflow\lib\site-packages\tensorflow\lite\python\lite.py in convert(self)
453 input_tensors=self._input_tensors,
454 output_tensors=self._output_tensors,
--> 455 **converter_kwargs)
456 else:
457 result = _toco_convert_graph_def(

~\AppData\Local\Continuum\anaconda3\envs\tensorflow\lib\site-packages\tensorflow\lite\python\convert.py in toco_convert_impl(input_data, input_tensors, output_tensors, *args, **kwargs)
440 data = toco_convert_protos(model_flags.SerializeToString(),
441 toco_flags.SerializeToString(),
--> 442 input_data.SerializeToString())
443 return data
444

~\AppData\Local\Continuum\anaconda3\envs\tensorflow\lib\site-packages\tensorflow\lite\python\convert.py in toco_convert_protos(model_flags_str, toco_flags_str, input_data_str)
203 stderr = _try_convert_to_unicode(stderr)
204 raise ConverterError(
--> 205 "TOCO failed. See console for info.\n%s\n%s\n" % (stdout, stderr))
206 finally:
207 # Must manually cleanup files.

Below is the relevant part of the detailed ConverterError log (the long run of repeated "Converting unsupported operation" / "Unable to determine output type" messages is abridged):

ConverterError: TOCO failed. See console for info.
2019-03-29 15:04:07.619011: I tensorflow/lite/toco/import_tensorflow.cc:1324] Converting unsupported operation: TensorArrayV3
2019-03-29 15:04:07.619546: E tensorflow/core/framework/op_kernel.cc:1325] OpKernel ('op: "WrapDatasetVariant" device_type: "CPU"') for unknown op: WrapDatasetVariant
2019-03-29 15:04:07.620164: E tensorflow/core/framework/op_kernel.cc:1325] OpKernel ('op: "UnwrapDatasetVariant" device_type: "CPU"') for unknown op: UnwrapDatasetVariant
2019-03-29 15:04:07.620818: I tensorflow/lite/toco/import_tensorflow.cc:1373] Unable to determine output type for op: TensorArrayV3
... (many similar messages for TensorArrayScatterV3, TensorArrayReadV3, TensorArrayWriteV3, TensorArrayGatherV3, TensorArraySizeV3, Enter, Exit, LoopCond, Where, NonMaxSuppression, Size) ...
2019-03-29 15:04:07.757845: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before Removing unused ops: 1461 operators, 2604 arrays (0 quantized)
2019-03-29 15:04:07.881035: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After Removing unused ops pass 1: 1402 operators, 2490 arrays (0 quantized)
2019-03-29 15:04:08.033914: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before general graph transformations: 1402 operators, 2490 arrays (0 quantized)
2019-03-29 15:04:08.127816: F tensorflow/lite/toco/graph_transformations/resolve_constant_slice.cc:59] Check failed: dim_size >= 1 (0 vs. 1)

These operations are not supported by TFLite. Can anyone help me solve this issue?
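
For reference, the route that generally works for SSD models is to first re-export a TFLite-friendly graph with the Object Detection API's export_tflite_ssd_graph.py (which needs the training checkpoint, hence the requests for it above) and then convert with custom ops allowed. A hedged sketch, where the file names, input shape, and tensor names are the usual OD-API defaults rather than anything verified against this repo:

    import tensorflow as tf

    # tflite_graph.pb is assumed to come from export_tflite_ssd_graph.py,
    # which requires the checkpoint, not just frozen_inference_graph_face.pb.
    converter = tf.lite.TFLiteConverter.from_frozen_graph(
        'tflite_graph.pb',
        input_arrays=['normalized_input_image_tensor'],
        output_arrays=['TFLite_Detection_PostProcess',
                       'TFLite_Detection_PostProcess:1',
                       'TFLite_Detection_PostProcess:2',
                       'TFLite_Detection_PostProcess:3'],
        input_shapes={'normalized_input_image_tensor': [1, 300, 300, 3]})
    converter.allow_custom_ops = True  # the detection post-processing op is not a TFLite builtin
    tflite_model = converter.convert()
    open('detect_face.tflite', 'wb').write(tflite_model)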

Can this model be used for transfer learning

Can I use this model for transfer learning, to add person as a new class? The model works really well. I need a TensorFlow model that can detect "person" as well as "face".

Need for resizing before evaluating

Hello,
Is the model performing some resizing in the TensorFlow back-end?
I get significantly better results when I crop the outside of an image to make it square. So could it be that the model resizes images to make them square and, by doing so, squishes faces?
Thanks
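
Possibly related: the stock SSD-MobileNet pipelines resize every input to a fixed square inside the graph, which would explain the squishing on wide images. Whether this repo's training config used the same block is an assumption; the stock version looks like this:

      image_resizer {
        fixed_shape_resizer {
          height: 300
          width: 300
        }
      }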

No output media

Hey, I followed all the instructions but no output media is being created.

Output:
python inference_video_face.py
2017-12-29 15:52:24.196885: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-12-29 15:52:24.196915: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-12-29 15:52:24.196923: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
inference time cost: 0.417937040329
inference time cost: 0.208209991455
inference time cost: 0.2039539814
... (hundreds of further "inference time cost" lines, each roughly 0.20-0.31 s) ...
inference time cost: 0.249053955078
Kindly suggest. Thanks!
