
License: MIT License


openvino2tensorflow

If you find it difficult to convert from ONNX to TensorFlow, I recommend using this tool. It is still a work in progress, so there are plenty of bugs, but it is much easier than going through OpenVINO.

"Self-Created Tools to convert ONNX files (NCHW) to TensorFlow format (NHWC). The purpose of this tool is to solve the massive Transpose extrapolation problem in onnx-tensorflow (onnx-tf)."

https://github.com/PINTO0309/onnx2tf
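
As a minimal sketch of that recommended route (assuming onnx2tf is installed from PyPI; model.onnx is a placeholder for your own file):

# Hypothetical basic usage of onnx2tf; see its README for the full option set
$ pip3 install -U onnx2tf
$ onnx2tf -i model.onnx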




This script converts ONNX/OpenVINO IR models to TensorFlow saved_model, tflite, h5, tfjs, tftrt (TensorRT), CoreML, EdgeTPU, ONNX, and pb formats. PyTorch (NCHW) -> ONNX (NCHW) -> OpenVINO (NCHW) -> openvino2tensorflow -> TensorFlow/Keras (NHWC/NCHW) -> TFLite (NHWC/NCHW). It also converts .pb to saved_model, saved_model to .pb, .pb to .tflite, saved_model to .tflite, and saved_model to ONNX. Building the environment with Docker is supported, and the container can directly access the host PC's GUI and camera to verify operation. NVIDIA GPU (dGPU) and Intel iHD GPU (iGPU) are supported.

Special custom TensorFlow binaries and special custom TensorFlow Lite binaries are used.

Work in progress now.


1. Environment

  • Python 3.8+
  • TensorFlow v2.10.0+
  • PyTorch v1.12.1+
  • TorchVision
  • TorchAudio
  • OpenVINO 2022.1.0
  • TensorRT 8.4.0+
  • trtexec
  • pycuda 2022.1
  • tensorflowjs
  • coremltools
  • paddle2onnx
  • onnx
  • onnxruntime-gpu (CUDA, TensorRT, OpenVINO)
  • onnxruntime-extensions
  • onnx_graphsurgeon
  • onnx-simplifier
  • onnxconverter-common
  • onnxmltools
  • onnx-tensorrt
  • tf2onnx
  • torch2trt
  • onnx-tf
  • tensorflow-datasets
  • tf_slim
  • edgetpu_compiler
  • tflite2tensorflow
  • openvino2tensorflow
  • simple-onnx-processing-tools
  • gdown
  • pandas
  • matplotlib
  • paddlepaddle
  • paddle2onnx
  • pycocotools
  • scipy
  • blobconverter
  • Intel-Media-SDK
  • Intel iHD GPU (iGPU) support
  • OpenCL
  • gluoncv
  • LLVM
  • NNPACK
  • WSL2 OpenCL

↥ Back to top

2. Use case

  • PyTorch (NCHW) -> ONNX (NCHW) -> OpenVINO (NCHW) ->

    • -> openvino2tensorflow -> Tensorflow/Keras (NHWC/NCHW) -> TFLite (NHWC/NCHW)
    • -> openvino2tensorflow -> Tensorflow/Keras (NHWC/NCHW) -> TFJS (NHWC/NCHW)
    • -> openvino2tensorflow -> Tensorflow/Keras (NHWC/NCHW) -> TF-TRT (NHWC/NCHW)
    • -> openvino2tensorflow -> Tensorflow/Keras (NHWC) -> EdgeTPU (NHWC)
    • -> openvino2tensorflow -> Tensorflow/Keras (NHWC/NCHW) -> CoreML (NHWC/NCHW)
    • -> openvino2tensorflow -> Tensorflow/Keras (NHWC/NCHW) -> ONNX (NHWC/NCHW)
    • -> openvino2tensorflow -> Myriad Inference Engine Blob (NCHW)
  • Caffe (NCHW) -> OpenVINO (NCHW) ->

    • -> openvino2tensorflow -> Tensorflow/Keras (NHWC/NCHW) -> TFLite (NHWC/NCHW)
    • -> openvino2tensorflow -> Tensorflow/Keras (NHWC/NCHW) -> TFJS (NHWC/NCHW)
    • -> openvino2tensorflow -> Tensorflow/Keras (NHWC/NCHW) -> TF-TRT (NHWC/NCHW)
    • -> openvino2tensorflow -> Tensorflow/Keras (NHWC) -> EdgeTPU (NHWC)
    • -> openvino2tensorflow -> Tensorflow/Keras (NHWC/NCHW) -> CoreML (NHWC/NCHW)
    • -> openvino2tensorflow -> Tensorflow/Keras (NHWC/NCHW) -> ONNX (NHWC/NCHW)
    • -> openvino2tensorflow -> Myriad Inference Engine Blob (NCHW)
  • MXNet (NCHW) -> OpenVINO (NCHW) ->

    • -> openvino2tensorflow -> Tensorflow/Keras (NHWC/NCHW) -> TFLite (NHWC/NCHW)
    • -> openvino2tensorflow -> Tensorflow/Keras (NHWC/NCHW) -> TFJS (NHWC/NCHW)
    • -> openvino2tensorflow -> Tensorflow/Keras (NHWC/NCHW) -> TF-TRT (NHWC/NCHW)
    • -> openvino2tensorflow -> Tensorflow/Keras (NHWC) -> EdgeTPU (NHWC)
    • -> openvino2tensorflow -> Tensorflow/Keras (NHWC/NCHW) -> CoreML (NHWC/NCHW)
    • -> openvino2tensorflow -> Tensorflow/Keras (NHWC/NCHW) -> ONNX (NHWC/NCHW)
    • -> openvino2tensorflow -> Myriad Inference Engine Blob (NCHW)
  • Keras (NHWC) -> OpenVINO (NCHW・Optimized) ->

    • -> openvino2tensorflow -> Tensorflow/Keras (NHWC/NCHW) -> TFLite (NHWC/NCHW)
    • -> openvino2tensorflow -> Tensorflow/Keras (NHWC/NCHW) -> TFJS (NHWC/NCHW)
    • -> openvino2tensorflow -> Tensorflow/Keras (NHWC/NCHW) -> TF-TRT (NHWC/NCHW)
    • -> openvino2tensorflow -> Tensorflow/Keras (NHWC) -> EdgeTPU (NHWC)
    • -> openvino2tensorflow -> Tensorflow/Keras (NHWC/NCHW) -> CoreML (NHWC/NCHW)
    • -> openvino2tensorflow -> Tensorflow/Keras (NHWC/NCHW) -> ONNX (NHWC/NCHW)
    • -> openvino2tensorflow -> Myriad Inference Engine Blob (NCHW)
  • saved_model -> saved_model_to_pb -> pb

  • saved_model ->

    • -> saved_model_to_tflite -> TFLite
    • -> saved_model_to_tflite -> TFJS
    • -> saved_model_to_tflite -> TF-TRT
    • -> saved_model_to_tflite -> EdgeTPU
    • -> saved_model_to_tflite -> CoreML
    • -> saved_model_to_tflite -> ONNX
  • pb -> pb_to_tflite -> TFLite

  • pb -> pb_to_saved_model -> saved_model
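
As a concrete sketch of the first use case above (model and paths are placeholders; mobilenet_v2 is used purely as an example, and the mo entry point assumes OpenVINO 2022.1):

# PyTorch (NCHW) -> ONNX (NCHW)
$ python3 -c "import torch, torchvision; m = torchvision.models.mobilenet_v2().eval(); torch.onnx.export(m, torch.randn(1, 3, 224, 224), 'model.onnx', opset_version=11)"

# ONNX (NCHW) -> OpenVINO IR (NCHW)
$ mo --input_model model.onnx --data_type FP32 --output_dir openvino/FP32

# OpenVINO IR -> TensorFlow/Keras (NHWC) -> TFLite (NHWC)
$ openvino2tensorflow \
  --model_path openvino/FP32/model.xml \
  --output_saved_model \
  --output_no_quant_float32_tflite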

↥ Back to top

3. Supported Layers

  • Currently, there are problems with the Reshape and Transpose operations on 2D, 3D, and 5D tensors. Since it is difficult to accurately predict the correct shape after such a shape change, support has been added for forcibly replacing transposition parameters using JSON files. See #6-7-replace-weights-or-constant-values-in-const-op-and-add-transpose-or-reshape-or-cast-or-squeeze-or-unsqueeze-or-add-or-multiply-just-beforeafter-the-operation-specified-by-layer_id

    Supported Layers

    No. OpenVINO Layer TF Layer Remarks
    1 Parameter Input Convert to NHWC (Default) or NCHW
    2 Const Constant, Bias
    3 Convolution Conv1D, Conv2D, Conv3D Conv3D has limited support
    4 Add Add
    5 ReLU ReLU
    6 PReLU PReLU Maximum(0.0,x)+Minimum(0.0,alpha*x)
    7 MaxPool MaxPool2D
    8 AvgPool AveragePooling1D, AveragePooling2D, AveragePooling3D
    9 GroupConvolution DepthwiseConv2D, Conv2D/Split/Concat
    10 ConvolutionBackpropData Conv2DTranspose, Conv3DTranspose Conv3DTranspose has limited support
    11 Concat Concat
    12 Multiply Multiply
    13 Tan Tan
    14 Tanh Tanh
    15 Elu Elu
    16 Sigmoid Sigmoid
    17 HardSigmoid hard_sigmoid
    18 SoftPlus SoftPlus
    19 Swish Swish You can replace swish and hard-swish with each other by using the "--replace_swish_and_hardswish" option
    20 Interpolate ResizeNearestNeighbor, ResizeBilinear 4D [N,H,W,C] or 5D [N,D,H,W,C]
    21 ShapeOf Shape
    22 Convert Cast
    23 StridedSlice Strided_Slice
    24 Pad Pad, MirrorPad
    25 Clamp ReLU6, Clip
    26 TopK ArgMax, top_k
    27 Transpose Transpose
    28 Squeeze Squeeze
    29 Unsqueeze Identity, expand_dims WIP
    30 ReduceMean reduce_mean
    31 ReduceMax reduce_max
    32 ReduceMin reduce_min
    33 ReduceSum reduce_sum
    34 ReduceProd reduce_prod
    35 Subtract Subtract
    36 MatMul MatMul
    37 Reshape Reshape
    38 Range Range WIP
    39 Exp Exp
    40 Abs Abs
    41 SoftMax SoftMax
    42 Negative Negative
    43 Maximum Maximum No broadcast
    44 Minimum Minimum No broadcast
    45 Acos Acos
    46 Acosh Acosh
    47 Asin Asin
    48 Asinh Asinh
    49 Atan Atan
    50 Atanh Atanh
    51 Ceiling Ceil
    52 Cos Cos
    53 Cosh Cosh
    54 Sin Sin
    55 Sinh Sinh
    56 Gather Gather
    57 Divide Divide, FloorDiv
    58 Erf Erf
    59 Floor Floor
    60 FloorMod FloorMod
    61 HSwish HardSwish x*ReLU6(x+3)*0.16666667, You can replace swish and hard-swish with each other by using the "--replace_swish_and_hardswish" option
    62 Log Log
    63 Power Pow No broadcast
    64 Mish Mish x*Tanh(softplus(x))
    65 Selu Selu
    66 Equal equal
    67 NotEqual not_equal
    68 Greater greater
    69 GreaterEqual greater_equal
    70 Less less
    71 LessEqual less_equal
    72 Select Select No broadcast
    73 LogicalAnd logical_and
    74 LogicalNot logical_not
    75 LogicalOr logical_or
    76 LogicalXor logical_xor
    77 Broadcast broadcast_to, ones, Multiply numpy / bidirectional mode, WIP
    78 Split Split
    79 VariadicSplit Split, Slice, SplitV
    80 MVN reduce_mean, sqrt, reduce_variance (x - reduce_mean(x)) / sqrt(reduce_variance(x) + eps)
    81 NonZero not_equal, boolean_mask
    82 ReduceL2 square, reduce_sum, sqrt
    83 SpaceToDepth SpaceToDepth
    84 DepthToSpace DepthToSpace
    85 Sqrt sqrt
    86 SquaredDifference squared_difference
    87 FakeQuantize subtract, multiply, round, greater, where, less_equal, add
    88 Tile tile
    89 GatherND gather_nd, reshape, cumprod, multiply, reduce_sum, gather, concat
    90 NonMaxSuppression non_max_suppression WIP. Only available for batch size 1.
    91 Gelu gelu
    92 NormalizeL2 tf.math.add, tf.math.l2_normalize x/sqrt(max(sum(x**2), eps)) or x/sqrt(add(sum(x**2), eps))
    93 ScatterElementsUpdate shape, rank, floormod, add, cast, range, expand_dims, meshgrid, concat, reshape, tensor_scatter_nd_update
    94 ROIAlign crop_and_resize, avg_pool, max_pool
    95 ScatterNDUpdate tensor_scatter_nd_update
    96 GatherElements rank, add, shape, cast, floormod, range, tensor_scatter_nd_update, constant, transpose, meshgrid, expand_dims, concat, gather_nd WIP
    97 ConvertLike Cast
    98 ReduceL1 Abs, ReduceSum
    99 ShuffleChannels reshape, transpose
    100 PriorBoxClustered Constant
    101 CumSum cumsum
    102 PriorBox Constant
    103 ReverseSequence reverse
    104 ExtractImagePatches extract_patches
    105 LogSoftmax reduce_max, log, reduce_sum, exp
    106 Einsum einsum
    107 ScatterUpdate scatter_update
    108 Result Identity Output

↥ Back to top

4. Setup

4-1. [Environment construction pattern 1] Execution by Docker (strongly recommended)

You do not need to install any packages other than Docker. Note that the image consumes 23.4 GB of storage.

$ docker pull ghcr.io/pinto0309/openvino2tensorflow:latest
or
# $ mv .dockerignore d
# $ docker build \
# -t ghcr.io/pinto0309/openvino2tensorflow:base.11.7.1-cudnn8-tf2.10.0-trt8.4.3-openvino2022.1.0 \
# -f Dockerfile.base .
# $ mv d .dockerignore
$ docker build --no-cache -t ghcr.io/pinto0309/openvino2tensorflow:latest .

# If you don't need to access the GUI of the HostPC and the USB camera.
$ docker run -it --rm \
  -v `pwd`:/home/user/workdir \
  ghcr.io/pinto0309/openvino2tensorflow:latest

# If conversion to TF-TRT is not required. And if you need to access the HostPC GUI and USB camera.
$ xhost +local: && \
  docker run -it --rm \
  -v `pwd`:/home/user/workdir \
  -v /tmp/.X11-unix/:/tmp/.X11-unix:rw \
  --device /dev/video0:/dev/video0:mwr \
  --net=host \
  -e XDG_RUNTIME_DIR=$XDG_RUNTIME_DIR \
  -e DISPLAY=$DISPLAY \
  --privileged \
  ghcr.io/pinto0309/openvino2tensorflow:latest

# If you need to convert to TF-TRT. And if you need to access the HostPC GUI and USB camera.
$ xhost +local: && \
  docker run --gpus all -it --rm \
  -v `pwd`:/home/user/workdir \
  -v /tmp/.X11-unix/:/tmp/.X11-unix:rw \
  --device /dev/video0:/dev/video0:mwr \
  --net=host \
  -e XDG_RUNTIME_DIR=$XDG_RUNTIME_DIR \
  -e DISPLAY=$DISPLAY \
  --privileged \
  ghcr.io/pinto0309/openvino2tensorflow:latest

# If you are using iGPU (OpenCL). And if you need to access the HostPC GUI and USB camera.
$ xhost +local: && \
  docker run -it --rm \
  -v `pwd`:/home/user/workdir \
  -v /tmp/.X11-unix/:/tmp/.X11-unix:rw \
  --device /dev/video0:/dev/video0:mwr \
  --net=host \
  -e LIBVA_DRIVER_NAME=iHD \
  -e XDG_RUNTIME_DIR=$XDG_RUNTIME_DIR \
  -e DISPLAY=$DISPLAY \
  --privileged \
  ghcr.io/pinto0309/openvino2tensorflow:latest
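
Whichever pattern you choose, the host directory you launched docker run from is mounted at /home/user/workdir inside the container, so a typical session looks like this sketch (the model name is a placeholder):

# Inside the container
$ cd /home/user/workdir
$ openvino2tensorflow \
  --model_path model.xml \
  --output_saved_model \
  --output_no_quant_float32_tflite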

↥ Back to top

4-2. [Environment construction pattern 2] Execution by Host machine

To install using the Python Package Index (PyPI), use the following command.

$ pip3 install --user --upgrade openvino2tensorflow

To install with the latest source code of the main branch, use the following command.

$ pip3 install --user --upgrade git+https://github.com/PINTO0309/openvino2tensorflow
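
To verify the installation, the command-line entry points should now be on your PATH:

$ openvino2tensorflow --help
$ saved_model_to_tflite --help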

↥ Back to top

5. Usage

5-1. openvino to tensorflow convert

usage: openvino2tensorflow
  [-h]
  --model_path MODEL_PATH
  [--model_output_path MODEL_OUTPUT_PATH]
  [--output_saved_model]
  [--output_h5]
  [--output_weight_and_json]
  [--output_pb]
  [--output_no_quant_float32_tflite]
  [--output_dynamic_range_quant_tflite]
  [--output_weight_quant_tflite]
  [--output_float16_quant_tflite]
  [--output_integer_quant_tflite]
  [--output_full_integer_quant_tflite]
  [--output_integer_quant_type OUTPUT_INTEGER_QUANT_TYPE]
  [--string_formulas_for_normalization STRING_FORMULAS_FOR_NORMALIZATION]
  [--calib_ds_type CALIB_DS_TYPE]
  [--ds_name_for_tfds_for_calibration DS_NAME_FOR_TFDS_FOR_CALIBRATION]
  [--split_name_for_tfds_for_calibration SPLIT_NAME_FOR_TFDS_FOR_CALIBRATION]
  [--download_dest_folder_path_for_the_calib_tfds DOWNLOAD_DEST_FOLDER_PATH_FOR_THE_CALIB_TFDS]
  [--tfds_download_flg]
  [--load_dest_file_path_for_the_calib_npy LOAD_DEST_FILE_PATH_FOR_THE_CALIB_NPY]
  [--output_tfjs]
  [--output_tftrt_float32]
  [--output_tftrt_float16]
  [--tftrt_maximum_cached_engines TFTRT_MAXIMUM_CACHED_ENGINES]
  [--output_coreml]
  [--output_edgetpu]
  [--edgetpu_compiler_timeout EDGETPU_COMPILER_TIMEOUT]
  [--edgetpu_num_segments EDGETPU_NUM_SEGMENTS]
  [--output_onnx]
  [--onnx_opset ONNX_OPSET]
  [--onnx_extra_opset ONNX_EXTRA_OPSET]
  [--disable_onnx_nchw_conversion]
  [--disable_onnx_optimization]
  [--output_myriad]
  [--vpu_number_of_shaves VPU_NUMBER_OF_SHAVES]
  [--vpu_number_of_cmx_slices VPU_NUMBER_OF_CMX_SLICES]
  [--replace_swish_and_hardswish]
  [--optimizing_hardswish_for_edgetpu]
  [--replace_prelu_and_minmax]
  [--replace_argmax]
  [--replace_argmax_indices_to_float32]
  [--restricted_resize_image_mode]
  [--weight_replacement_config WEIGHT_REPLACEMENT_CONFIG]
  [--disable_experimental_new_quantizer]
  [--disable_per_channel]
  [--optimizing_barracuda]
  [--layerids_of_the_terminating_output LAYERIDS_OF_THE_TERMINATING_OUTPUT]
  [--keep_input_tensor_in_nchw]
  [--input_as_ncdhw]
  [--non_verbose]

optional arguments:
  -h, --help
              show this help message and exit
  --model_path MODEL_PATH
              input IR model path (.xml)
  --model_output_path MODEL_OUTPUT_PATH
              The output folder path of the converted model file
  --output_saved_model
              saved_model output switch
  --output_h5
              .h5 output switch
  --output_weight_and_json
              weight of h5 and json output switch
  --output_pb
              .pb output switch
  --output_no_quant_float32_tflite
              float32 tflite output switch
  --output_dynamic_range_quant_tflite
              dynamic range quant tflite output switch
  --output_weight_quant_tflite
              weight quant tflite output switch
  --output_float16_quant_tflite
              float16 quant tflite output switch
  --output_integer_quant_tflite
              integer quant tflite output switch
  --output_full_integer_quant_tflite
              full integer quant tflite output switch
  --output_integer_quant_type OUTPUT_INTEGER_QUANT_TYPE
              Input and output types when doing Integer Quantization
              ('int8 (default)' or 'uint8')
  --string_formulas_for_normalization STRING_FORMULAS_FOR_NORMALIZATION
              String formulas for normalization. It is evaluated by
              Python's eval() function.
              Default: '(data - [127.5,127.5,127.5]) / [127.5,127.5,127.5]'
  --calib_ds_type CALIB_DS_TYPE
              Types of data sets for calibration. tfds or numpy
              Default: numpy
  --ds_name_for_tfds_for_calibration DS_NAME_FOR_TFDS_FOR_CALIBRATION
              Dataset name for TensorFlow Datasets for calibration.
              https://www.tensorflow.org/datasets/catalog/overview
  --split_name_for_tfds_for_calibration SPLIT_NAME_FOR_TFDS_FOR_CALIBRATION
              Split name for TensorFlow Datasets for calibration.
              https://www.tensorflow.org/datasets/catalog/overview
  --download_dest_folder_path_for_the_calib_tfds DOWNLOAD_DEST_FOLDER_PATH_FOR_THE_CALIB_TFDS
              Download destination folder path for the calibration
              dataset. Default: $HOME/TFDS
  --tfds_download_flg
              True to automatically download datasets from
              TensorFlow Datasets. True or False
  --load_dest_file_path_for_the_calib_npy LOAD_DEST_FILE_PATH_FOR_THE_CALIB_NPY
              The path from which to load the .npy file containing
              the numpy binary version of the calibration data.
              Default: sample_npy/calibration_data_img_sample.npy
  --output_tfjs
              tfjs model output switch
  --output_tftrt_float32
              tftrt float32 model output switch
  --output_tftrt_float16
              tftrt float16 model output switch
  --tftrt_maximum_cached_engines
              Specifies the quantity of tftrt_maximum_cached_engines for TFTRT.
              Default: 10000
  --output_coreml
              coreml model output switch
  --output_edgetpu
              edgetpu model output switch
  --edgetpu_compiler_timeout
              edgetpu_compiler timeout for one compilation process in seconds.
              Default: 3600
  --edgetpu_num_segments
              Partition the model into 'num_segments' segments.
              Default: 1 (no partition)
  --output_onnx
              onnx model output switch
  --onnx_opset ONNX_OPSET
              onnx opset version number
  --onnx_extra_opset ONNX_EXTRA_OPSET
              The name of the onnx 'extra_opset' to enable.
              Default: ''
              'com.microsoft:1' or 'ai.onnx.contrib:1' or 'ai.onnx.ml:1'
  --disable_onnx_nchw_conversion
              Disable NCHW conversion
  --disable_onnx_optimization
              Disable onnx optimization
  --output_myriad
              myriad inference engine blob output switch
  --vpu_number_of_shaves VPU_NUMBER_OF_SHAVES
              vpu number of shaves. Default: 4
  --vpu_number_of_cmx_slices VPU_NUMBER_OF_CMX_SLICES
              vpu number of cmx slices. Default: 4
  --replace_swish_and_hardswish
              Replace swish and hard-swish with each other
  --optimizing_hardswish_for_edgetpu
              Optimizing hardswish for edgetpu
  --replace_prelu_and_minmax
              Replace prelu and minimum/maximum with each other
  --replace_argmax
              Replace 'ArgMax (TopK)' with a primitive operation.
              Optimizes 'ArgMax' to EdgeTPU. If you have 'ArgMax' at the end of your model,
              use the '--replace_argmax_indices_to_float32' option together.
  --replace_argmax_indices_to_float32
              Enabling this option may allow full mapping to EdgeTPU when 'ArgMax (TopK)'
              is at the end of the model for tasks such as SemanticSegmentation.
              If you apply it to 'ArgMax (TopK)', which is located in the middle of the model,
              the model transformation is more likely to fail.
  --restricted_resize_image_mode
              Specify this if the upsampling contains OPs that are
              not scaled by integer multiples. Optimization for
              EdgeTPU will be disabled.
  --weight_replacement_config WEIGHT_REPLACEMENT_CONFIG
              Replaces the value of Const for each layer_id defined
              in json. Specify the path to the json file.
              'weight_replacement_config.json'
  --disable_experimental_new_quantizer
              Disable MLIR's new quantization feature during INT8 quantization
              in TensorFlow Lite.
  --disable_per_channel
              Disable per-channel quantization for tflite.
  --optimizing_barracuda
              Generates ONNX by replacing Barracuda unsupported layers
              with standard layers. For example, GatherND.
  --layerids_of_the_terminating_output LAYERIDS_OF_THE_TERMINATING_OUTPUT
              A comma-separated list of layerIDs to be used as output layers.
              e.g. --layerids_of_the_terminating_output 100,201,560
              Default: ''
  --keep_input_tensor_in_nchw
              Does not convert the input to NHWC, but keeps the NCHW format.
              Transpose is inserted right after the input layer, and
              the model internals are handled by NHWC. Only 4D input is supported.
  --input_as_ncdhw
              Specify when the shape of INPUT is the 5D tensor of NCDHW.
              When converting to TensorFlow, the input geometry is automatically
              converted to NDHWC format.
  --non_verbose
              Do not show all the weight information of each layer in the
              conversion log.
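
For example, a sketch of full-integer quantization combining --string_formulas_for_normalization with numpy calibration data (the formula must match the preprocessing used at training time; paths are placeholders):

$ openvino2tensorflow \
  --model_path openvino/FP32/model.xml \
  --output_integer_quant_tflite \
  --output_full_integer_quant_tflite \
  --string_formulas_for_normalization 'data / 255.0' \
  --calib_ds_type numpy \
  --load_dest_file_path_for_the_calib_npy sample_npy/calibration_data_img_sample.npy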

↥ Back to top

5-2. saved_model to tflite convert

usage: saved_model_to_tflite
  [-h]
  --saved_model_dir_path SAVED_MODEL_DIR_PATH
  [--signature_def SIGNATURE_DEF]
  [--input_shapes INPUT_SHAPES]
  [--model_output_dir_path MODEL_OUTPUT_DIR_PATH]
  [--output_no_quant_float32_tflite]
  [--output_dynamic_range_quant_tflite]
  [--output_weight_quant_tflite]
  [--output_float16_quant_tflite]
  [--output_integer_quant_tflite]
  [--output_full_integer_quant_tflite]
  [--output_integer_quant_type OUTPUT_INTEGER_QUANT_TYPE]
  [--string_formulas_for_normalization STRING_FORMULAS_FOR_NORMALIZATION]
  [--calib_ds_type CALIB_DS_TYPE]
  [--ds_name_for_tfds_for_calibration DS_NAME_FOR_TFDS_FOR_CALIBRATION]
  [--split_name_for_tfds_for_calibration SPLIT_NAME_FOR_TFDS_FOR_CALIBRATION]
  [--download_dest_folder_path_for_the_calib_tfds DOWNLOAD_DEST_FOLDER_PATH_FOR_THE_CALIB_TFDS]
  [--tfds_download_flg]
  [--load_dest_file_path_for_the_calib_npy LOAD_DEST_FILE_PATH_FOR_THE_CALIB_NPY]
  [--output_tfjs]
  [--output_tftrt_float32]
  [--output_tftrt_float16]
  [--tftrt_maximum_cached_engines TFTRT_MAXIMUM_CACHED_ENGINES]
  [--output_coreml]
  [--output_edgetpu]
  [--edgetpu_compiler_timeout EDGETPU_COMPILER_TIMEOUT]
  [--edgetpu_num_segments EDGETPU_NUM_SEGMENTS]
  [--output_onnx]
  [--onnx_opset ONNX_OPSET]
  [--onnx_extra_opset ONNX_EXTRA_OPSET]
  [--disable_onnx_nchw_conversion]
  [--disable_onnx_optimization]
  [--disable_experimental_new_quantizer]
  [--disable_per_channel]

optional arguments:
  -h, --help
              show this help message and exit
  --saved_model_dir_path SAVED_MODEL_DIR_PATH
              Input saved_model dir path
  --signature_def SIGNATURE_DEF
              Specifies the signature name to load from saved_model
  --input_shapes INPUT_SHAPES
              Overwrites an undefined input dimension (None or -1).
              Specify the input shape in [n,h,w,c] format.
              For non-4D tensors, specify [a,b,c,d,e], [a,b], etc.
              A comma-separated list if there are multiple inputs.
              (e.g.) --input_shapes [1,256,256,3],[1,64,64,3],[1,2,16,16,3]
  --model_output_dir_path MODEL_OUTPUT_DIR_PATH
              The output folder path of the converted model file
  --output_no_quant_float32_tflite
              float32 tflite output switch
  --output_dynamic_range_quant_tflite
              dynamic range quant tflite output switch
  --output_weight_quant_tflite
              weight quant tflite output switch
  --output_float16_quant_tflite
              float16 quant tflite output switch
  --output_integer_quant_tflite
              integer quant tflite output switch
  --output_full_integer_quant_tflite
              full integer quant tflite output switch
  --output_integer_quant_type OUTPUT_INTEGER_QUANT_TYPE
              Input and output types when doing Integer Quantization
              ('int8 (default)' or 'uint8')
  --string_formulas_for_normalization STRING_FORMULAS_FOR_NORMALIZATION
              String formulas for normalization. It is evaluated by
              Python's eval() function.
              Default: '(data - [127.5,127.5,127.5]) / [127.5,127.5,127.5]'
  --calib_ds_type CALIB_DS_TYPE
              Types of data sets for calibration. tfds or numpy
              Default: numpy
  --ds_name_for_tfds_for_calibration DS_NAME_FOR_TFDS_FOR_CALIBRATION
              Dataset name for TensorFlow Datasets for calibration.
              https://www.tensorflow.org/datasets/catalog/overview
  --split_name_for_tfds_for_calibration SPLIT_NAME_FOR_TFDS_FOR_CALIBRATION
              Split name for TensorFlow Datasets for calibration.
              https://www.tensorflow.org/datasets/catalog/overview
  --download_dest_folder_path_for_the_calib_tfds DOWNLOAD_DEST_FOLDER_PATH_FOR_THE_CALIB_TFDS
              Download destination folder path for the calibration
              dataset. Default: $HOME/TFDS
  --tfds_download_flg
              True to automatically download datasets from
              TensorFlow Datasets. True or False
  --load_dest_file_path_for_the_calib_npy LOAD_DEST_FILE_PATH_FOR_THE_CALIB_NPY
              The path from which to load the .npy file containing
              the numpy binary version of the calibration data.
              Default: sample_npy/calibration_data_img_sample.npy
  --output_tfjs
              tfjs model output switch
  --output_tftrt_float32
              tftrt float32 model output switch
  --output_tftrt_float16
              tftrt float16 model output switch
  --tftrt_maximum_cached_engines
              Specifies the quantity of tftrt_maximum_cached_engines for TFTRT.
              Default: 10000
  --output_coreml
              coreml model output switch
  --output_edgetpu
              edgetpu model output switch
  --edgetpu_compiler_timeout
              edgetpu_compiler timeout for one compilation process in seconds.
              Default: 3600
  --edgetpu_num_segments
              Partition the model into 'num_segments' segments.
              Default: 1 (no partition)
  --output_onnx
              onnx model output switch
  --onnx_opset ONNX_OPSET
              onnx opset version number
  --onnx_extra_opset ONNX_EXTRA_OPSET
              The name of the onnx 'extra_opset' to enable.
              Default: ''
              'com.microsoft:1' or 'ai.onnx.contrib:1' or 'ai.onnx.ml:1'
  --disable_onnx_nchw_conversion
              Disable NCHW conversion
  --disable_onnx_optimization
              Disable onnx optimization
  --disable_experimental_new_quantizer
              Disable MLIR's new quantization feature during INT8 quantization
              in TensorFlow Lite.
  --disable_per_channel
              Disable per-channel quantization for tflite.
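
A typical invocation looks like this sketch (saved_model is the directory produced by openvino2tensorflow; the input shape is a placeholder):

$ saved_model_to_tflite \
  --saved_model_dir_path saved_model \
  --input_shapes [1,256,256,3] \
  --output_no_quant_float32_tflite \
  --output_float16_quant_tflite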

↥ Back to top

5-3. pb to saved_model convert

usage: pb_to_saved_model
  [-h]
  --pb_file_path PB_FILE_PATH
  --inputs INPUTS
  --outputs OUTPUTS
  [--model_output_path MODEL_OUTPUT_PATH]

optional arguments:
  -h, --help
              show this help message and exit
  --pb_file_path PB_FILE_PATH
              Input .pb file path (.pb)
  --inputs INPUTS
              (e.g.1) input:0,input:1,input:2
              (e.g.2) images:0,input:0,param:0
  --outputs OUTPUTS
              (e.g.1) output:0,output:1,output:2
              (e.g.2) Identity:0,Identity:1,output:0
  --model_output_path MODEL_OUTPUT_PATH
              The output folder path of the converted model file

↥ Back to top

5-4. pb to tflite convert

usage: pb_to_tflite
  [-h]
  --pb_file_path PB_FILE_PATH
  --inputs INPUTS
  --outputs OUTPUTS
  [--model_output_path MODEL_OUTPUT_PATH]

optional arguments:
  -h, --help
              show this help message and exit
  --pb_file_path PB_FILE_PATH
              Input .pb file path (.pb)
  --inputs INPUTS
              (e.g.1) input,input_1,input_2
              (e.g.2) images,input,param
  --outputs OUTPUTS
              (e.g.1) output,output_1,output_2
              (e.g.2) Identity,Identity_1,output
  --model_output_path MODEL_OUTPUT_PATH
              The output folder path of the converted model file

↥ Back to top

5-5. saved_model to pb convert

usage: saved_model_to_pb
  [-h]
  --saved_model_dir_path SAVED_MODEL_DIR_PATH
  [--model_output_dir_path MODEL_OUTPUT_DIR_PATH]
  [--signature_name SIGNATURE_NAME]

optional arguments:
  -h, --help
              show this help message and exit
  --saved_model_dir_path SAVED_MODEL_DIR_PATH
              Input saved_model dir path
  --model_output_dir_path MODEL_OUTPUT_DIR_PATH
              The output folder path of the converted model file (.pb)
  --signature_name SIGNATURE_NAME
              Signature name to be extracted from saved_model

↥ Back to top

5-6. Extraction of IR weight

usage: ir_weight_extractor
  [-h]
  -m MODEL
  -o OUTPUT_PATH

optional arguments:
  -h, --help
              show this help message and exit
  -m MODEL, --model MODEL
              input IR model path
  -o OUTPUT_PATH, --output_path OUTPUT_PATH
              weights output folder path
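
For example (paths are placeholders), dumping the weights of the IR used in 6-1:

$ ir_weight_extractor \
  -m openvino/448x448/FP32/Resnet34_3inputs_448x448_20200609.xml \
  -o weights/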

↥ Back to top

6. Execution sample

6-1. Conversion of OpenVINO IR to TensorFlow models

OutOfMemory errors may occur when converting to saved_model or h5 if the original model file is large. If that happens, try converting to a .pb file alone.

$ openvino2tensorflow \
  --model_path openvino/448x448/FP32/Resnet34_3inputs_448x448_20200609.xml \
  --output_saved_model \
  --output_pb \
  --output_weight_quant_tflite \
  --output_float16_quant_tflite \
  --output_no_quant_float32_tflite

↥ Back to top

6-2. Convert Protocol Buffer (.pb) to saved_model

This conversion is useful if you want to check the internal structure of .pb, .tflite, .h5, CoreML, and IR (.xml) files, for example with Netron: https://lutzroeder.github.io/netron/

$ pb_to_saved_model \
  --pb_file_path model_float32.pb \
  --inputs inputs:0 \
  --outputs Identity:0

↥ Back to top

6-3. Convert Protocol Buffer (.pb) to tflite

$ pb_to_tflite \
  --pb_file_path model_float32.pb \
  --inputs inputs \
  --outputs Identity,Identity_1,Identity_2

↥ Back to top

6-4. Convert saved_model to Protocol Buffer (.pb)

$ saved_model_to_pb \
  --saved_model_dir_path saved_model \
  --model_output_dir_path pb_from_saved_model \
  --signature_name serving_default

↥ Back to top

6-5. Convert saved_model to OpenVINO IR

$ python3 ${INTEL_OPENVINO_DIR}/deployment_tools/model_optimizer/mo_tf.py \
  --saved_model_dir saved_model \
  --output_dir openvino/reverse

↥ Back to top

6-6. Checking the structure of saved_model

$ saved_model_cli show \
  --dir saved_model \
  --tag_set serve \
  --signature_def serving_default

↥ Back to top

6-7. Replace weights or constant values in Const OP, and add Transpose or Reshape or Cast or Squeeze or Unsqueeze or Add or Multiply just before/after the operation specified by layer_id

6-7-1. Overview

If the transformation behavior of Reshape, Transpose, etc. does not go as expected, you can force the contents of a Const to change by defining weights and constant values in a JSON file and having the tool read it in. Alternatively, a Transpose, Reshape, Cast, Squeeze, Unsqueeze, Add, or Multiply can be inserted just before or after the operation specified by layer_id. After changing the structure, you need to carefully check the consistency of Reshape, Transpose, and Interpolate before and after the change. Even if the model is transformed successfully, a dimension that should have changed may have been transformed incorrectly; in particular, Reshape and Interpolate can often still transform the model even when the number of elements per dimension is wrong.

$ openvino2tensorflow \
  --model_path xxx.xml \
  --output_saved_model \
  --output_pb \
  --output_weight_quant_tflite \
  --output_float16_quant_tflite \
  --output_no_quant_float32_tflite \
  --weight_replacement_config weight_replacement_config_sample.json

Structure of JSON sample

{
    "format_version": 2,
    "layers": [
        {
            "layer_id": "659",
            "type": "Const",
            "replace_mode": "direct",
            "values": [
                0,
                1,
                2
            ]
        },
        {
            "layer_id": "660",
            "type": "Reshape",
            "replace_mode": "insert_after",
            "values": [
                2100,
                85
            ]
        },
        {
            "layer_id": "680",
            "type": "Cast",
            "replace_mode": "insert_after",
            "values": "i64"
        },
        {
            "layer_id": "442",
            "type": "Concat",
            "replace_mode": "change_axis",
            "values": 4
        },
        {
            "layer_id": "450",
            "type": "SoftMax",
            "replace_mode": "change_axis",
            "values": 2
        },
        {
            "layer_id": "500",
            "type": "StridedSlice",
            "replace_mode": "change_attributes",
            "values": [
                0,
                0,
                0,
                0,
                0
            ]
        },
        {
            "layer_id": "550",
            "type": "StridedSlice",
            "replace_mode": "replace",
            "values": [
                [0,0,0,8],
                [2,7,11,16],
                [1,1,1,1],
                0,
                0,
                0,
                0,
                0
            ]
        },
        {
            "layer_id": "600",
            "type": "MaxPool",
            "replace_mode": "change_padding_mode",
            "values": "REFLECT"
        },
        {
            "layer_id": "720",
            "type": "PReLU",
            "replace_mode": "change_shared_axes",
            "values": [
                1,
                2
            ]
        },
        {
            "layer_id": "800",
            "type": "ReverseSequence",
            "replace_mode": "change_seq_axis",
            "values": 2
        },
        {
            "layer_id": "850",
            "type": "Squeeze",
            "replace_mode": "insert_after",
            "values": 1
        },
        {
            "layer_id": "900",
            "type": "Unsqueeze",
            "replace_mode": "insert_before",
            "values": 2
        },
        {
            "layer_id": "1000",
            "type": "Einsum",
            "replace_mode": "change_equation",
            "values": "vu,nctu->nctv"
        },
        {
            "layer_id": "1005",
            "type": "Add",
            "replace_mode": "insert_after",
            "values": [
                0,
                0,
                0,
                2
            ]
        },
        {
            "layer_id": "1010",
            "type": "Multiply",
            "replace_mode": "insert_after",
            "values": [
                1.0,
                1.0,
                -0.5,
                1.0
            ]
        }
    ]
}
No. Elements Description
1 format_version Format version of weight_replacement_config. Values less than or equal to 2.
2 layers A list of layers. Enclose it with "[ ]" to define multiple layers as child elements.
2-1 layer_id ID of the Const layer whose weight/constant parameter is to be swapped. Note that you cannot create multiple settings for a single layer_id; there must always be exactly one setting per layer_id. For example, specify "1123" for layer id="1123" with type="Const" in the .xml.
2-2 type Fixed-value replacement or the type of operation to be added. "Const" or "Transpose" or "Reshape" or "Cast" or "Concat" or "SoftMax" or "StridedSlice" or "MaxPool" or "PReLU" or "ReverseSequence" or "Squeeze" or "Unsqueeze" or "LogSoftmax" or "Einsum" or "Add" or "Multiply"
2-3 replace_mode "direct" or "npy" or "insert_before" or "insert_after" or "change_axis" or "change_attributes" or "replace" or "change_padding_mode" or "change_shared_axes" or "change_batch_axis" or "change_seq_axis" or "change_equation".
"direct": Specify the values of the Numpy matrix directly in the "values" attribute. Ignores the values recorded in the .bin file and replaces them with the values specified in "values".
"npy": Load a Numpy binary file containing a matrix saved with np.save('xyz', a). The "values" attribute specifies the path to the Numpy binary file.
"insert_before": Add a Transpose, Reshape, Cast, Squeeze, Unsqueeze, Add, or Multiply just before the operation specified by layer_id. Note that when Squeeze or Unsqueeze is specified, the value to set in "values" is the axis on which the dimension operation acts.
"insert_after": Add a Transpose, Reshape, Cast, Squeeze, Unsqueeze, Add, or Multiply just after the operation specified by layer_id. Note that when Squeeze or Unsqueeze is specified, the value to set in "values" is the axis on which the dimension operation acts.
"change_axis": Changes the axis attribute value of Concat, SoftMax, ShuffleChannels, or LogSoftmax.
"change_attributes": Changes the attributes of StridedSlice. Specify five values in numerical-list format in the order begin_mask, end_mask, ellipsis_mask, new_axis_mask, shrink_axis_mask.
"replace": Replaces the OP by specifying parameters directly following the TensorFlow strided_slice specification: begin, end, strides, begin_mask, end_mask, ellipsis_mask, new_axis_mask, shrink_axis_mask. https://www.tensorflow.org/api_docs/python/tf/strided_slice
"change_padding_mode": Changes the padding mode of MaxPool.
"change_shared_axes": Changes shared_axes in PReLU.
"change_batch_axis", "change_seq_axis": Changes the corresponding axis in ReverseSequence.
"change_equation": Changes the equation in Einsum.
2-4 values Specify the value, or the path to a Numpy binary file, that replaces the weight/constant value recorded in the .bin. How to specify it is described under 'replace_mode' above. For the "Cast" operation, the string specifies the target TensorFlow type; in most cases you will only need "i32", "i64", "f32", and "f16".
change_padding_mode: "ZERO" or "SYMMETRIC" or "REFLECT". https://www.tensorflow.org/api_docs/python/tf/pad
change_shared_axes: https://www.tensorflow.org/api_docs/python/tf/keras/layers/PReLU
change_batch_axis, change_seq_axis: https://docs.openvino.ai/2021.4/openvino_docs_ops_movement_ReverseSequence_1.html
change_equation: https://numpy.org/doc/stable/reference/generated/numpy.einsum.html

↥ Back to top

6-7-2. Example

  • YOLOX Nano 320x320 (NCHW format)
  • yolox_nano_320x320.xml
  • yolox_nano_320x320.bin
  1. Let's assume that you don't need the Transpose in the final layer of the model. Here you have [1, 85, 2100] as input, and the original OpenVINO model transposes with the order [0, 2, 1] to obtain the tensor [1, 2100, 85]. Visualizing yolox_nano_320x320.xml with Netron, the number shown in OUTPUTS - output - name: is the layer ID of the Transpose: the part before the colon, 660, is the layer ID, and the part after the colon, 2, is called the port number. However, what you are trying to change is the transposition parameter in the INPUTS - custom - name: part, whose name is 625. Note that 625 is not a layer ID, just a name.
  2. Check the model structure as recorded in the .xml. First, open yolox_nano_320x320.xml in your favorite IDE.
  3. Search for to-layer="660" (Transpose) in the IDE. Layer ID 658 and Layer ID 659 appear as the input values connected to Layer ID 660.

One of those inputs is 658 and the other is 659, but it is difficult to determine which is which from the visualization alone. Note again that 658:3 in the image is only a name, not a layer ID. It is worth noting here that the type of the value you want to replace is Const.


  4. Now search for layer ID "658" in the IDE. Its type is "Concat", so this is not the layer you want; what you are looking for is a "Const".
  5. Next, search for layer ID 659 in the IDE. Its type is "Const". Now you can finally identify that the layer ID of the layer you want to replace is 659.
  6. Create a JSON file to replace the constants [0, 2, 1] with [0, 1, 2]. You can use any name for the JSON file; suppose you save it as replace.json. The first JSON below replaces the constants directly. If you want to replace them with a numpy matrix instead, specify "npy" for "replace_mode": and the path to the .npy file for "values":, as in the second JSON.
{
  "format_version": 2,
  "layers": [
      {
          "layer_id": "659",
          "type": "Const",
          "replace_mode": "direct",
          "values": [
              0,
              1,
              2
          ]
      }
  ]
}
{
  "format_version": 2,
  "layers": [
      {
          "layer_id": "659",
          "type": "Const",
          "replace_mode": "npy",
          "values": "path/to/your/xxx.npy"
      }
  ]
}
  7. Specify the created JSON file as the argument of the --weight_replacement_config parameter of the conversion command and execute it. This concludes the explanation of how to replace weights and constants.
$ openvino2tensorflow \
--model_path yolox_nano_320x320.xml \
--output_saved_model \
--output_pb \
--output_no_quant_float32_tflite \
--weight_replacement_config replace.json

↥ Back to top

6-8. Check the contents of the .npy file, which is a binary version of the image file

$ view_npy --npy_file_path sample_npy/calibration_data_img_sample.npy

Press the Q button to display the next image. calibration_data_img_sample.npy contains 20 images extracted from the MS-COCO data set.
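
If you only want to check the array's shape and dtype without stepping through the viewer, a one-liner is enough (assuming numpy is installed; the file is expected to hold a 4D [N,H,W,C] batch of images):

$ python3 -c "import numpy as np; a = np.load('sample_npy/calibration_data_img_sample.npy'); print(a.shape, a.dtype)"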


↥ Back to top

6-9. Sample image of a conversion error message

Since it is very difficult to mechanically predict the correct behavior of Transpose and Reshape, errors like the one below may occur. Using the information in the error message, try several times to force the replacement of constants and weights with the --weight_replacement_config option #6-7-replace-weights-or-constant-values-in-const-op-and-add-transpose-or-reshape-or-cast-or-squeeze-or-unsqueeze-or-add-or-multiply-just-beforeafter-the-operation-specified-by-layer_id. This is a very patient process, but if you take the time, you should be able to convert the model correctly.

↥ Back to top

6-10. Ability to specify an output layer for debugging the output values of the model

If you want to debug the output values of each layer, specify multiple layer IDs separated by commas in the --layerids_of_the_terminating_output option. For example, to debug the output values of the two layers LayerID=1007 (Add) and LayerID=1214 (Sigmoid), specify --layerids_of_the_terminating_output 1007,1214, as in the sketch below. When you convert the model, the graph is truncated at the two specified layer IDs, and the generated model exposes those outputs for review. Note that if you specify the layer ID of an operation that has multiple outputs, such as Split, VariadicSplit, TopK, or NonMaxSuppression, all of its output values will be used as outputs.
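
The corresponding conversion command is a sketch like this (the model path is a placeholder; combine with any output switches you need):

$ openvino2tensorflow \
  --model_path model.xml \
  --output_saved_model \
  --output_no_quant_float32_tflite \
  --layerids_of_the_terminating_output 1007,1214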

↥ Back to top

7. Output sample

[Screenshot: sample conversion output]

↥ Back to top

8. Model Structure

https://digital-standard.com/threedpose/models/Resnet34_3inputs_448x448_20200609.onnx

Side-by-side Netron visualizations of Resnet34_3inputs_448x448_20200609: ONNX (NCHW), OpenVINO IR (NCHW), and the converted TFLite (NHWC).

↥ Back to top

9. My article

↥ Back to top

10. Conversion Confirmed Models

  1. u-2-net
  2. mobilenet-v2-pytorch
  3. midasnet
  4. footprints
  5. efficientnet-b0-pytorch
  6. efficientdet-d0
  7. dense_depth
  8. deeplabv3
  9. colorization-v2-norebal
  10. age-gender-recognition-retail-0013
  11. resnet
  12. arcface
  13. emotion-ferplus
  14. mosaic
  15. retinanet
  16. shufflenet-v2
  17. squeezenet
  18. version-RFB-320
  19. yolov4
  20. yolov4x-mish
  21. ThreeDPoseUnityBarracuda - Resnet34_3inputs_448x448
  22. efficientnet-lite4
  23. nanodet
  24. yolov4-tiny
  25. yolov5s
  26. yolact
  27. MiDaS v2
  28. MODNet
  29. Person Reidentification
  30. DeepSort
  31. DINO (Transformer)

↥ Back to top

openvino2tensorflow's People

Contributors

khursani8, pinto0309, travisjayday, zye1996


openvino2tensorflow's Issues

Raspberry Pi 3B+

I am very new to the Linux OS, please help.
I followed your steps to install TensorFlow on my RPi 3B+ but got this:
bash: ./tensorflow-2.5.0-cp37-none-linux_armv7l_download.sh: Permission denied

tensorflow conversion error

1. macOS,

2. Version of OpenVINO e.g. 2021.3.185, etc

3. Version of TensorFlow e.g. v2.5.0tc

openvino2tensorflow --model_path scrfd_500m_bnkps_shape320x320.xml --output_pb

Issue Details

ValueError: Depth of input (5) is not a multiple of input depth of filter (3) for '{{node tf.nn.conv2d/Conv2D}} = Conv2D[T=DT_FLOAT, data_format="NHWC", dilations=[1, 1, 1, 1], explicit_paddings=[], padding="VALID", strides=[1, 2, 2, 1], use_cudnn_on_gpu=true](Placeholder, tf.nn.conv2d/Conv2D/filter)' with input shapes: [1,320,322,5], [16,3,3,3].

Can you give me some suggestions?

Cannot convert YoloV5s to EdgeTPU

1. OS you are using e.g. Ubuntu 20.04, WIndows10, etc

Ubuntu 18.04

2. OS Architecture e.g. x86_64, armv7l, aarch64, etc

x86_64

3. Version of OpenVINO e.g. 2021.2.185, etc

2021.2.185

4. Version of TensorFlow e.g. v2.4.1, tf-nightly==2.5.0.dev20210128, etc

v2.3.1 and v2.4.1 tried

5. Download URL for ONNX model, OpenVINO model, pt checkpoint, Tensorflow convertions

https://drive.google.com/drive/folders/1Etu0P_ioTFPCK6AmeCufSYADpJ-a6FjE?usp=sharing

Python 3.6.12, used in a conda environment

13. Issue Details

I am trying to convert the PyTorch model YOLOv5s, implemented by Ultralytics, to tflite in order to further compile it for the Edge TPU.
I have trained this model on my dataset, resulting in the checkpoint "best.pt". I converted this .pt file to the ONNX format using the Ultralytics package, as specified here. After that I followed your guide, meaning I optimized the ONNX model, then converted the optimized ONNX to OpenVINO and finally to TensorFlow.

My first problem is in the last step, when using openvino2tensorflow: I can generate neither the SavedModel nor the h5 format.
When I run the command:

openvino2tensorflow --model_path best_opt.xml --model_output_path Models --output_saved_model True --output_h5 True --output_pb True --output_integer_quant_tflite True

The error part is the following:

...

tf.compat.v1.transpose (TFOpLam (1, 40, 3, 6, 40)    0           tf.reshape[0][0]                 
__________________________________________________________________________________________________
tf.compat.v1.transpose_1 (TFOpL (1, 20, 3, 6, 20)    0           tf.reshape_1[0][0]               
__________________________________________________________________________________________________
tf.compat.v1.transpose_2 (TFOpL (1, 80, 3, 6, 80)    0           tf.reshape_2[0][0]               
__________________________________________________________________________________________________
tf.identity (TFOpLambda)        (1, 40, 3, 6, 40)    0           tf.compat.v1.transpose[0][0]     
__________________________________________________________________________________________________
tf.identity_1 (TFOpLambda)      (1, 20, 3, 6, 20)    0           tf.compat.v1.transpose_1[0][0]   
__________________________________________________________________________________________________
tf.identity_2 (TFOpLambda)      (1, 80, 3, 6, 80)    0           tf.compat.v1.transpose_2[0][0]   
==================================================================================================
Total params: 7,233,672
Trainable params: 7,233,672
Non-trainable params: 0
__________________________________________________________________________________________________
TensorFlow/Keras model building process complete!
saved_model output started ==========================================================
ERROR: can't pickle module objects
Traceback (most recent call last):
  File "src_script.py", line 1789, in convert
    tf.saved_model.save(model, model_output_path)
  File "/home/luis/anaconda3/envs/yolo2lite/lib/python3.6/site-packages/tensorflow/python/saved_model/save.py", line 1033, in save
    obj, signatures, options, meta_graph_def)
  File "/home/luis/anaconda3/envs/yolo2lite/lib/python3.6/site-packages/tensorflow/python/saved_model/save.py", line 1198, in _build_meta_graph
    return _build_meta_graph_impl(obj, signatures, options, meta_graph_def)
  File "/home/luis/anaconda3/envs/yolo2lite/lib/python3.6/site-packages/tensorflow/python/saved_model/save.py", line 1163, in _build_meta_graph_impl
    asset_info.asset_index)
  File "/home/luis/anaconda3/envs/yolo2lite/lib/python3.6/site-packages/tensorflow/python/saved_model/save.py", line 755, in _serialize_object_graph
    saveable_view.function_name_map)
  File "/home/luis/anaconda3/envs/yolo2lite/lib/python3.6/site-packages/tensorflow/python/saved_model/save.py", line 800, in _write_object_proto
    metadata=obj._tracking_metadata)
  File "/home/luis/anaconda3/envs/yolo2lite/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py", line 3079, in _tracking_metadata
    return self._trackable_saved_model_saver.tracking_metadata
  File "/home/luis/anaconda3/envs/yolo2lite/lib/python3.6/site-packages/tensorflow/python/keras/saving/saved_model/base_serialization.py", line 55, in tracking_metadata
    return json_utils.Encoder().encode(self.python_properties)
  File "/home/luis/anaconda3/envs/yolo2lite/lib/python3.6/site-packages/tensorflow/python/keras/saving/saved_model/layer_serialization.py", line 41, in python_properties
    return self._python_properties_internal()
  File "/home/luis/anaconda3/envs/yolo2lite/lib/python3.6/site-packages/tensorflow/python/keras/saving/saved_model/model_serialization.py", line 35, in _python_properties_internal
    metadata = super(ModelSavedModelSaver, self)._python_properties_internal()
  File "/home/luis/anaconda3/envs/yolo2lite/lib/python3.6/site-packages/tensorflow/python/keras/saving/saved_model/layer_serialization.py", line 59, in _python_properties_internal
    metadata.update(get_config(self.obj))
  File "/home/luis/anaconda3/envs/yolo2lite/lib/python3.6/site-packages/tensorflow/python/keras/saving/saved_model/layer_serialization.py", line 118, in get_config
    config = generic_utils.serialize_keras_object(obj)['config']
  File "/home/luis/anaconda3/envs/yolo2lite/lib/python3.6/site-packages/tensorflow/python/keras/utils/generic_utils.py", line 245, in serialize_keras_object
    config = instance.get_config()
  File "/home/luis/anaconda3/envs/yolo2lite/lib/python3.6/site-packages/tensorflow/python/keras/engine/functional.py", line 650, in get_config
    return copy.deepcopy(get_network_config(self))
  File "/home/luis/anaconda3/envs/yolo2lite/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "/home/luis/anaconda3/envs/yolo2lite/lib/python3.6/copy.py", line 240, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/home/luis/anaconda3/envs/yolo2lite/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "/home/luis/anaconda3/envs/yolo2lite/lib/python3.6/copy.py", line 215, in _deepcopy_list
    append(deepcopy(a, memo))
  File "/home/luis/anaconda3/envs/yolo2lite/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "/home/luis/anaconda3/envs/yolo2lite/lib/python3.6/copy.py", line 240, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/home/luis/anaconda3/envs/yolo2lite/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "/home/luis/anaconda3/envs/yolo2lite/lib/python3.6/copy.py", line 240, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/home/luis/anaconda3/envs/yolo2lite/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "/home/luis/anaconda3/envs/yolo2lite/lib/python3.6/copy.py", line 220, in _deepcopy_tuple
    y = [deepcopy(a, memo) for a in x]
  File "/home/luis/anaconda3/envs/yolo2lite/lib/python3.6/copy.py", line 220, in <listcomp>
    y = [deepcopy(a, memo) for a in x]
  File "/home/luis/anaconda3/envs/yolo2lite/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "/home/luis/anaconda3/envs/yolo2lite/lib/python3.6/copy.py", line 220, in _deepcopy_tuple
    y = [deepcopy(a, memo) for a in x]
  File "/home/luis/anaconda3/envs/yolo2lite/lib/python3.6/copy.py", line 220, in <listcomp>
    y = [deepcopy(a, memo) for a in x]
  File "/home/luis/anaconda3/envs/yolo2lite/lib/python3.6/copy.py", line 169, in deepcopy
    rv = reductor(4)
TypeError: can't pickle module objects
Switch to the output of an optimized protocol buffer file (.pb).

...

As you can see, it fails to convert to a SavedModel with the error "TypeError: can't pickle module objects". Do you have any idea what the problem might be?
The .pb files are generated, and the quantized versions are generated as well. I even tried to compile the "output_integer_quant_tflite" and "output_full_integer_quant_tflite" tflite versions using the edgetpu_compiler, but it aborts the compilation.

$ edgetpu_compiler -sa Models/model_integer_quant.tflite
Edge TPU Compiler version 15.0.340273435

Internal compiler error. Aborting!

I have also tried to open the resulting .xml file from the OpenVINO conversion using Netron, and the network, from beginning to end, seems to be normal.

Do you have any idea what might be causing the SavedModel conversion error and then the EdgeTPU compilation failure? I suspect the second is related to the first.

Any suggestion is more than welcome!

[Question] Resizeable models

Hi.
OpenVINO supports resizable models: you can change the input shape and the output shapes will change too.
Is it possible to convert such models?

Incorrect connection order in Concat

1. OS you are using e.g. Ubuntu 20.04, WIndows10, etc: Ubuntu20.04 (on Docker)

2. OS Architecture e.g. x86_64, armv7l, aarch64, etc: x86_64

3. Version of OpenVINO e.g. 2021.2.185, etc: 2021.4.582

4. Version of TensorFlow e.g. v2.4.1, tf-nightly==2.5.0.dev20210128, etc: v2.6.0-rc1

5. Version of TensorRT e.g. TensorRT6.0 GA, etc: 8.0

10. Download URL for OpenVINO IR (.bin/.xml) model

https://github.com/PINTO0309/PINTO_model_zoo/tree/main/132_YOLOX

11. URL of the repository from which the transformed model was taken:

https://github.com/PINTO0309/PINTO_model_zoo/tree/main/132_YOLOX

13. Issue Details

In models such as YOLOv5, YOLOR, and YOLOX, the connection order of the input layers differs when the starting point reaches a Concat via multiple StridedSlices. Therefore, even if the structure of the intermediate layers is converted correctly, the data at the input will be corrupted and the output result will be meaningless. The order of tensor decomposition and the order of concatenation by Concat need to be controlled correctly.

  • ONNX (Before conversion)
    Screenshot 2021-07-28 14:00:21

  • TFLite (openvino2tensorflow / after conversion)
    Screenshot 2021-07-28 13:58:58

  • TFLite (onnx-tensorflow / after conversion)
    Screenshot 2021-07-28 14:09:11
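
The corruption mode is easy to reproduce in isolation. A minimal NumPy sketch (not tied to any particular model) of why a swapped Concat input order passes every shape check yet destroys the data:

import numpy as np

# Toy NCHW tensor, split into two channel groups as StridedSlice would.
x = np.arange(8).reshape(1, 8, 1, 1)
a, b = x[:, :4], x[:, 4:]

# Reassembling in the original order restores the tensor ...
assert np.array_equal(np.concatenate([a, b], axis=1), x)
# ... while swapping the Concat inputs keeps the shape but corrupts the data.
assert not np.array_equal(np.concatenate([b, a], axis=1), x)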

Module U2NETP doesn't exist. Check if it is installed

Thank you for the helpful blog. I have an issue related to U2NET model conversion. I am following your blog:

https://qiita.com/PINTO/items/ed06e03eb5c007c2e102

But when I run this command

python3 /home/user/intel/openvino_2021/deployment_tools/tools/model_downloader/pytorch_to_onnx.py --import-module model.u2net --model-name U2NETP --input-shape 1,3,320,320 --weights saved_models/u2netp/u2netp.pth --output-file u2netp_320x320.onnx --input-names "x" --output-names "a/F.sigmoid(d0)"

I am getting this error.
-> I have a "model" folder and it contains the 'u2net.py' file.

Module U2NET doesn't exist. Check if it is installed
No module named 'model'

Please guide me about it. Thanks.

Accuracy error after converting

Greetings,

I'm trying to convert RetinaFace with MobileNet0.25 as the backbone and I'm getting errors in the predictions, so I'm wondering whether it's the same problem as with MobileNetV3, because I saw that you managed to convert RetinaFace, but with ResNet50 as the backbone. Any information would be appreciated.

The RegionYolo layer is not yet implemented.

Hi! I'm trying to convert my custom yolov4.weights file to tflite.
yolov4.weights -> OpenVINO (.bin, .xml) -> tflite.
But this error occurred.

"The RegionYolo layer is not yet implemented."

Screenshot (1012)

Can @PINTO0309 help me with my problem? It's relevant to my graduation project, so if you can help me with it I would be very pleased.

Segmentation map is different after conversion

Hi!
I'm trying to convert BiSeNet from PyTorch to TFLite (PyTorch -> ONNX -> OpenVINO -> TF). I followed the steps from your article, except the conversion to ONNX. For the PyTorch -> ONNX conversion I used

torch_out = torch.onnx._export(net, dummy_nhwc, "bisenet.onnx", export_params=True,
                               input_names=['input'],  output_names=['output'], opset_version=11)

When I use pytorch_to_onnx.py from OpenVINO, I get the following error:

UserWarning: ONNX export failed on upsample_bilinear2d because align_corners == True not supported
RuntimeError: ONNX export failed: Couldn't export operator aten::upsample_bilinear2d
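
For context (an aside, not from the original report): exporting upsample_bilinear2d with align_corners=True requires ONNX opset 11 or later, where it maps to the ONNX Resize op; the direct export above already passes opset_version=11, which would explain why it succeeds while pytorch_to_onnx.py, presumably exporting at a lower default opset, fails. A minimal sketch:

import torch

class Up(torch.nn.Module):
    def forward(self, x):
        return torch.nn.functional.interpolate(
            x, scale_factor=2, mode="bilinear", align_corners=True)

x = torch.randn(1, 3, 8, 8)
# Fails at opset 9 with "align_corners == True not supported":
# torch.onnx.export(Up(), x, "up.onnx", opset_version=9)
# Succeeds from opset 11 onward:
torch.onnx.export(Up(), x, "up.onnx", opset_version=11)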

After conversion to TFLite, the output segmentation map differs from the output of the PyTorch and ONNX models.

Output of onnx:
seg_onnx

Output of tflite:
seg_tflite

I tried to convert the model with --output_edgetpu True and False (to resize with tf.compat.v1.image.resize_* and with tf.image.resize), but both TFLite models produce the same results. Any help would be appreciated!

Environment:
tf-nightly 2.5.0.dev20210128
openvino2tensorflow 1.5.8

openvino to tensorflow fails

Trying to convert an IR model created with the OpenVINO toolkit 2019 R2 with this command:

openvino2tensorflow --model_path=raftC.xml --output_saved_model True --output_pb True --output_weight_quant_tflite True --output_no_quant_float32_tflite True

output:
import: not authorized os' @ error/constitute.c/WriteImage/1028. import: not authorized sys' @ error/constitute.c/WriteImage/1028.
import: not authorized argparse' @ error/constitute.c/WriteImage/1028. import: not authorized struct' @ error/constitute.c/WriteImage/1028.
import: not authorized np' @ error/constitute.c/WriteImage/1028. import: not authorized et' @ error/constitute.c/WriteImage/1028.
from: can't read /var/mail/openvino.inference_engine
import: not authorized tf' @ error/constitute.c/WriteImage/1028. from: can't read /var/mail/tensorflow.keras from: can't read /var/mail/tensorflow.keras.layers from: can't read /var/mail/tensorflow.keras.initializers from: can't read /var/mail/tensorflow.keras.backend from: can't read /var/mail/tensorflow.keras.activations from: can't read /var/mail/tensorflow.python.framework.convert_to_constants import: not authorized np' @ error/constitute.c/WriteImage/1028.
import: not authorized sys' @ error/constitute.c/WriteImage/1028. import: not authorized tfds' @ error/constitute.c/WriteImage/1028.
/home/user/miniconda3/bin/openvino2tensorflow: line 79: syntax error near unexpected token (' /home/user/miniconda3/bin/openvino2tensorflow: line 79: def convert(model,'

(These "import: not authorized ...", "from: can't read /var/mail/...", and "syntax error near unexpected token" messages come from the shell, not Python: the openvino2tensorflow entry point is being executed by sh instead of the Python interpreter, which usually points to a broken shebang line in the Miniconda wrapper script.)

ValueError while converting OSNet Model

Hey @PINTO0309, thank you so much for the great work. Both this and the Model Zoo have been a huge help.
I have been trying to convert an OSNet person-reid model from ONNX to an EdgeTPU version. I was able to convert the ONNX model to the OpenVINO IR, but it fails with a ValueError when I try to convert from the IR to a TensorFlow saved_model.
ValueError: Output tensors of a Functional model must be the output of a TensorFlow Layer (thus holding past layer metadata). Found: [[[[ 1.22419130e-02 -1.56739808e-03 -3.44633125e ....

Could you please help me out? I have attached the XML and BIN files.
[EDIT] Added the optimized ONNX as well, in case you need to refer to it.
OSNet_Trial_1.zip
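
An aside for anyone hitting the same ValueError: Keras raises it whenever a Functional model's output carries no layer history, e.g. when a branch of the converted graph folds into a bare constant. A minimal reproduction (my illustration, not taken from the attached model):

import tensorflow as tf

inputs = tf.keras.Input(shape=(4,))
x = tf.keras.layers.Dense(4)(inputs)

# A bare constant has no layer metadata, so using it as a model output
# raises "Output tensors of a Functional model must be the output of a
# TensorFlow Layer ..." (left commented out so the sketch runs):
# tf.keras.Model(inputs, tf.constant([[1.0, 2.0, 3.0, 4.0]]))

# Routing the constant through a layer that consumes a symbolic tensor
# attaches the required metadata, and the model builds.
out = tf.keras.layers.Lambda(
    lambda t: t * 0.0 + tf.constant([[1.0, 2.0, 3.0, 4.0]]))(x)
model = tf.keras.Model(inputs, out)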

Saved_Model conversion for YOLOv4 does not create the variables files

Hey @PINTO0309
I have been trying to convert a pretrained YOLOv4 model from OpenVINO to a TF Saved_Model. I earlier used your tool to convert other models like OSNet, which worked perfectly. However, this time the Saved_Model folder does not contain any variables files, which then causes an issue when I try to convert the Saved_Model to the TFLite format.
I get the following error. It's quite long and I apologise in advance for that. I have shortened it by removing the printed model description.

[setupvars.sh] OpenVINO environment initialized
TensorFlow/Keras model building process starts ======================================
/usr/local/lib/python3.6/dist-packages/tensorflow/python/autograph/impl/api.py:22: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
  import imp
.....
__________________________________________________________________________________________________
TensorFlow/Keras model building process complete!
saved_model output started ==========================================================
ERROR: can't pickle module objects
Traceback (most recent call last):
  File "/usr/local/bin/openvino2tensorflow", line 1676, in convert
    tf.saved_model.save(model, model_output_path)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/saved_model/save.py", line 1033, in save
    obj, signatures, options, meta_graph_def)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/saved_model/save.py", line 1198, in _build_meta_graph
    return _build_meta_graph_impl(obj, signatures, options, meta_graph_def)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/saved_model/save.py", line 1163, in _build_meta_graph_impl
    asset_info.asset_index)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/saved_model/save.py", line 755, in _serialize_object_graph
    saveable_view.function_name_map)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/saved_model/save.py", line 800, in _write_object_proto
    metadata=obj._tracking_metadata)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py", line 3079, in _tracking_metadata
    return self._trackable_saved_model_saver.tracking_metadata
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/saving/saved_model/base_serialization.py", line 55, in tracking_metadata
    return json_utils.Encoder().encode(self.python_properties)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/saving/saved_model/layer_serialization.py", line 41, in python_properties
    return self._python_properties_internal()
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/saving/saved_model/model_serialization.py", line 35, in _python_properties_internal
    metadata = super(ModelSavedModelSaver, self)._python_properties_internal()
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/saving/saved_model/layer_serialization.py", line 59, in _python_properties_internal
    metadata.update(get_config(self.obj))
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/saving/saved_model/layer_serialization.py", line 118, in get_config
    config = generic_utils.serialize_keras_object(obj)['config']
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/utils/generic_utils.py", line 245, in serialize_keras_object
    config = instance.get_config()
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/functional.py", line 650, in get_config
    return copy.deepcopy(get_network_config(self))
  File "/usr/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "/usr/lib/python3.6/copy.py", line 240, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/usr/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "/usr/lib/python3.6/copy.py", line 215, in _deepcopy_list
    append(deepcopy(a, memo))
  File "/usr/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "/usr/lib/python3.6/copy.py", line 240, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/usr/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "/usr/lib/python3.6/copy.py", line 240, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/usr/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "/usr/lib/python3.6/copy.py", line 220, in _deepcopy_tuple
    y = [deepcopy(a, memo) for a in x]
  File "/usr/lib/python3.6/copy.py", line 220, in <listcomp>
    y = [deepcopy(a, memo) for a in x]
  File "/usr/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "/usr/lib/python3.6/copy.py", line 220, in _deepcopy_tuple
    y = [deepcopy(a, memo) for a in x]
  File "/usr/lib/python3.6/copy.py", line 220, in <listcomp>
    y = [deepcopy(a, memo) for a in x]
  File "/usr/lib/python3.6/copy.py", line 169, in deepcopy
    rv = reductor(4)
TypeError: can't pickle module objects
Switch to the output of an optimized protocol buffer file (.pb).
.pb output started ==================================================================
.pb output complete! - /content/drive/MyDrive/Clutterbot/YOLO/yolov4_crowdhuman_saved_model/model_float32.pb
WARNING:tensorflow:From /usr/local/bin/openvino2tensorflow:1759: simple_save (from tensorflow.python.saved_model.simple_save) is deprecated and will be removed in a future version.
Instructions for updating:
This function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.simple_save.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/saved_model/signature_def_utils_impl.py:201: build_tensor_info (from tensorflow.python.saved_model.utils_impl) is deprecated and will be removed in a future version.
Instructions for updating:
This function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.utils.build_tensor_info or tf.compat.v1.saved_model.build_tensor_info.
Optimized graph converted to SavedModel! - /content/drive/MyDrive/Clutterbot/YOLO/yolov4_crowdhuman_saved_model
All the conversion process is finished! =============================================

Could you please help me with this?

The XML and BIN files of the OpenVINO IR are here and here.
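
Since the log shows model_float32.pb was written successfully, one possible stop-gap (my suggestion; the tensor names below are assumptions) is to convert the frozen graph to TFLite directly with the v1 converter, bypassing the SavedModel path:

import tensorflow as tf

# Input/output array names are assumptions -- verify them with Netron
# on model_float32.pb before running this.
converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file="model_float32.pb",
    input_arrays=["inputs"],
    output_arrays=["Identity"],
)
with open("model_float32.tflite", "wb") as f:
    f.write(converter.convert())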

openvino to tflite ValueError

I follow the procedure, but it fails at the last step. I get the ValueError: 'F.sigmoid(d0)' is not a valid scope name.
Here is part of my pip list:
onnx 1.8.0
onnx-simplifier 0.2.19
onnxoptimizer 0.1.1
onnxruntime 1.5.2
openvino-python 2021.1
openvino2tensorflow 0.4.7
tensorflow 2.3.1
torch 1.7.0+cu101
torchaudio 0.7.0
torchvision 0.8.1+cu101
I git cloned openvino and open_model_zoo from the OpenVINO GitHub and used open_model_zoo/tools/downloader/pytorch_to_onnx.py and openvino/model-optimizer/mo.py from the master branch. In step 6.8 I get a different output, u2netp_320x320_opt.xml, instead of u2netp_320x320.xml. How can I solve the ValueError? Also, I will try to convert more OpenVINO models.
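
An aside on the error itself: TensorFlow scope and op names only admit roughly the characters [A-Za-z0-9_.\-/], so the parentheses in 'F.sigmoid(d0)' are rejected. Re-exporting the ONNX model with a plain output name (e.g. --output-names "d0" instead of "a/F.sigmoid(d0)") should avoid it; a hypothetical sanitizer sketch:

import re

def sanitize(name: str) -> str:
    # Replace anything outside TensorFlow's (approximate) allowed
    # scope-name alphabet with an underscore.
    return re.sub(r"[^A-Za-z0-9_.\-/]", "_", name)

print(sanitize("a/F.sigmoid(d0)"))  # -> a/F.sigmoid_d0_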

Problems converting person-reidentification model

Hello @PINTO0309,

Recently I stumbled upon this repo; thanks a lot for the work that you put into the project!

I have successfully converted some OpenVINO models, however the conversion seems to fail for the "person-reidentification-retail" models. So far I have tried several different versions, such as:

  • person-reidentification-retail-0288
  • person-reidentification-retail-0287
  • person-reidentification-retail-0270
  • person-reidentification-retail-0248

but unfortunately every attempt failed. Are you planning on adding support for the above-mentioned reidentification models?

Thanks again!

ImportError: libinference_engine.so (OpenVINO to TensorFlow)

Thanks @PINTO0309 for the help. I am running the following command and getting the error below:

openvino2tensorflow --model_path openvino/320x320/FP32/u2net_320x320_opt.xml --model_output_path saved_model_320x320 --output_saved_model True

Traceback (most recent call last):
File "/home/user/.local/bin/openvino2tensorflow", line 66, in
from openvino.inference_engine import IECore
File "/home/naeem/.local/lib/python3.6/site-packages/openvino/inference_engine/init.py", line 1, in
from .ie_api import *
ImportError: libinference_engine.so: cannot open shared object file: No such file or directory

Can you guide me on what I am doing wrong? Thanks.
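
A quick diagnostic (my suggestion, not from the report): this ImportError usually means the OpenVINO environment script (setupvars.sh) has not been sourced in the current shell, so libinference_engine.so is not on the dynamic loader path. The import below keeps failing the same way until the environment is set up:

# Fails with "libinference_engine.so: cannot open shared object file"
# whenever the OpenVINO setupvars.sh has not been sourced in this shell.
from openvino.inference_engine import IECore

print(IECore().available_devices)  # e.g. ['CPU'] once the environment is set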

Openvino Yolov5 list index out of range

1. Ubuntu 20.04

2. x86_64

3. Version of OpenVINO 2021.3.394

4. Version of TensorFlow v2.4.1

8. Version of ONNX 1.9.0

9. Download URL for ONNX model

https://drive.google.com/file/d/1RSbxNDlCDiaJLCKy_iyg19MOC6W7MIza/view?usp=sharing

10. Download URL for OpenVINO IR (.bin/.xml) model

https://drive.google.com/drive/folders/1Jy4_4wbwJXTNk-Lv1AYFSrN8mh8dWDUE?usp=sharing

11. URL of the repository from which the transformed model was taken

https://github.com/ultralytics/yolov5

13. Issue Details

We're trying to convert and customize a YOLOv5 model in Keras, since the EdgeTPU has issues with the 5D Transpose operation at the end, and openvino2tensorflow has issues with deciding which dimensions to transpose between the OpenVINO graph and TensorFlow.

The idea is to use openvino2tensorflow to get a Keras model and then customize the output of that conversion, but we get the index out of range error when attempting to load the Keras model.

Any suggestions on how to work around this?

Update: I just found out that an error is thrown when saving the YOLO TF/Keras model:
Can't pickle module objects

[Question] Does this converter only support Conv2d?

Hi, thanks for the great work,

Currently I'm trying to convert this model to ONNX using its export method, and then to OpenVINO.

Then when I run this script

python3 openvino2tensorflow.py \
  --model_path=model.xml \
  --output_saved_model=True

I received

list index out of range
(Pdb) shape
[16, 64, 256]

After taking a look at the error line, it looks like the shape has a different rank than expected:

Input(shape=(shape[2], shape[3], shape[1]), batch_size=shape[0], name=layer_name)

The only difference I found is that my model uses Conv1d, which triggers the error.

Thanks

Edit
I think I can convert it without this script supporting Conv1d by expanding the dims using this method (see the sketch below).

But I will go to sleep first, it's already 2 am here.
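
A minimal sketch of that expand-dims workaround (my own illustration; the channel sizes are made up to match the [16, 64, 256] shape above): lift the 3D (N, C, L) tensors to 4D (N, C, 1, L) so every convolution the converter sees is a Conv2d:

import torch

# A Conv2d with kernel (1, k) and the Conv1d weights unsqueezed along the
# height axis computes exactly the same result as the original Conv1d.
conv1d = torch.nn.Conv1d(64, 128, kernel_size=3, padding=1)
conv2d = torch.nn.Conv2d(64, 128, kernel_size=(1, 3), padding=(0, 1))
conv2d.weight.data = conv1d.weight.data.unsqueeze(2)  # (128, 64, 3) -> (128, 64, 1, 3)
conv2d.bias.data = conv1d.bias.data

x = torch.randn(16, 64, 256)
y1 = conv1d(x)
y2 = conv2d(x.unsqueeze(2)).squeeze(2)  # (N, C, L) -> (N, C, 1, L) -> back
print(torch.allclose(y1, y2, atol=1e-6))  # True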

Error in conversion (OpenVINO to TensorFlow)

@PINTO0309 Thanks for the nice work. I am trying to convert the u2net model to TensorFlow.js. I have done all the steps given above, but when I run the following command I get the error below.

-> Working on the Ubuntu operating system
-> Working on a CPU device

$ openvino2tensorflow --model_path openvino/320x320/FP32/u2netp_320x320.xml --model_output_path saved_model_320x320 --output_saved_model True
Traceback (most recent call last):
File "/home/user/anaconda3/bin/openvino2tensorflow", line 66, in
from openvino.inference_engine import IECore
File "/home/user/anaconda3/lib/python3.7/site-packages/openvino/inference_engine/init.py", line 1, in
from .ie_api import *
ImportError: /home/user/anaconda3/bin/../lib/libinference_engine.so: undefined symbol: _ZN3tbb8internal13numa_topology4fillEPi

Yolov5 model conversion error, ValueError: axes don't match array

1. OS you are using e.g. Ubuntu 20.04, Windows 10, etc

macOS BigSur 11.4

2. OS Architecture e.g. x86_64, armv7l, aarch64, etc

x86_64

3. Version of OpenVINO e.g. 2021.2.185, etc

openvino 2021.3.0
openvino-dev 2021.3.0

4. Version of TensorFlow e.g. v2.4.1, tf-nightly==2.5.0.dev20210128, etc

tensorflow-cpu 2.3.1

8. Version of ONNX e.g. v1.8.0, etc

onnx 1.9.0

11. URL of the repository from which the transformed model was taken

https://github.com/ultralytics/yolov5

13. Issue Details

Hi! Thanks for this great tool, it's really handy for people like me who play with multiple frameworks! Here's a problem I ran into today, and unfortunately I could not figure it out by myself :( . I'd really appreciate it if you could help me with this a little.

I first converted an ONNX model from a YOLOv5 pre-trained checkpoint, then converted it to OpenVINO IR. When I ran openvino2tensorflow, the error below happened:

$ openvino2tensorflow \  
  --model_path tmp/yolov5s.xml \
  --model_output_path ./tmp \
  --output_pb
TensorFlow/Keras model building process starts ======================================
Traceback (most recent call last):
  File "/Users/alan/.conda/envs/DL/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 1812, in _create_c_op
    c_op = pywrap_tf_session.TF_FinishOperation(op_desc)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Dimensions must be equal, but are 3 and 76 for '{{node Add_56}} = Add[T=DT_FLOAT](Add_55, Add_56/y)' with input shapes: [1,76,3,2,120], [1,1,76,120,2].

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/alan/.conda/envs/DL/bin/openvino2tensorflow", line 500, in convert
    tf_layers_dict[layer_id] = tf.math.add(tf_layers_dict[edge_id0], tf_layers_dict[edge_id1])
  File "/Users/alan/.conda/envs/DL/lib/python3.7/site-packages/tensorflow/python/ops/gen_math_ops.py", line 357, in add
    "Add", x=x, y=y, name=name)
  File "/Users/alan/.conda/envs/DL/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py", line 744, in _apply_op_helper
    attrs=attr_protos, op_def=op_def)
  File "/Users/alan/.conda/envs/DL/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py", line 593, in _create_op_internal
    compute_device)
  File "/Users/alan/.conda/envs/DL/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 3485, in _create_op_internal
    op_def=op_def)
  File "/Users/alan/.conda/envs/DL/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 1975, in __init__
    control_input_ops, op_def)
  File "/Users/alan/.conda/envs/DL/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 1815, in _create_c_op
    raise ValueError(str(e))
ValueError: Dimensions must be equal, but are 3 and 76 for '{{node Add_56}} = Add[T=DT_FLOAT](Add_55, Add_56/y)' with input shapes: [1,76,3,2,120], [1,1,76,120,2].

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/alan/.conda/envs/DL/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 1812, in _create_c_op
    c_op = pywrap_tf_session.TF_FinishOperation(op_desc)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Dimensions must be equal, but are 3 and 76 for '{{node Add_57}} = Add[T=DT_FLOAT](Add_55, Add_57/y)' with input shapes: [1,76,3,2,120], [1,1,76,120,2].

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/alan/.conda/envs/DL/bin/openvino2tensorflow", line 506, in convert
    tf_layers_dict[layer_id] = tf.math.add(tf_layers_dict[edge_id0], tf_layers_dict[edge_id1])
  File "/Users/alan/.conda/envs/DL/lib/python3.7/site-packages/tensorflow/python/ops/gen_math_ops.py", line 357, in add
    "Add", x=x, y=y, name=name)
  File "/Users/alan/.conda/envs/DL/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py", line 744, in _apply_op_helper
    attrs=attr_protos, op_def=op_def)
  File "/Users/alan/.conda/envs/DL/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py", line 593, in _create_op_internal
    compute_device)
  File "/Users/alan/.conda/envs/DL/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 3485, in _create_op_internal
    op_def=op_def)
  File "/Users/alan/.conda/envs/DL/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 1975, in __init__
    control_input_ops, op_def)
  File "/Users/alan/.conda/envs/DL/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 1815, in _create_c_op
    raise ValueError(str(e))
ValueError: Dimensions must be equal, but are 3 and 76 for '{{node Add_57}} = Add[T=DT_FLOAT](Add_55, Add_57/y)' with input shapes: [1,76,3,2,120], [1,1,76,120,2].

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/alan/.conda/envs/DL/bin/openvino2tensorflow", line 2898, in <module>
    main()
  File "/Users/alan/.conda/envs/DL/bin/openvino2tensorflow", line 2894, in main
    yolact, restricted_resize_image_mode, weight_replacement_config, debug, debug_layer_number)
  File "/Users/alan/.conda/envs/DL/bin/openvino2tensorflow", line 508, in convert
    tf_layers_dict[layer_id] = tf.math.add(tf_layers_dict[edge_id0], tf_layers_dict[edge_id1].transpose(0,2,3,1))
ValueError: axes don't match array
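
The final ValueError, at least, is easy to reproduce in isolation (my illustration, not from the report): it is NumPy refusing a 4-axis permutation on an array of a different rank, matching the 5-D [1,1,76,120,2] constant in the messages above:

import numpy as np

const = np.zeros((1, 1, 76, 120, 2), dtype=np.float32)
const.transpose(0, 2, 3, 1)  # ValueError: axes don't match array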

I have no idea why Add_56 takes such inputs in the error message, as the .xml looks like this:

<layer id="59" name="Add_56" type="Add" version="opset1">
	<data auto_broadcast="numpy"/>
	<input>
		<port id="0">
			<dim>1</dim>
			<dim>32</dim>
			<dim>152</dim>
			<dim>240</dim>
		</port>
		<port id="1">
			<dim>1</dim>
			<dim>32</dim>
			<dim>152</dim>
			<dim>240</dim>
		</port>
	</input>
	<output>
		<port id="2" names="183" precision="FP32">
			<dim>1</dim>
			<dim>32</dim>
			<dim>152</dim>
			<dim>240</dim>
		</port>
	</output>
</layer>

The model files mentioned above can be found here: https://static.imalan.cn/share/yolov5s.zip. Thanks for your help in advance!

ERROR: can't pickle module objects

@PINTO0309 Thanks for the nice work. I have converted the model into the TensorFlow.js format, but during the conversion
-> Converting U2net ----> TensorFlow JS
-> openvino to tensorflow
I got this error. The model gets converted, but I still face this error:

openvino2tensorflow --model_path openvino/320x320/FP32/u2net_320x320_opt.xml --model_output_path saved_model_320x320 --output_saved_model True
TensorFlow/Keras model building process starts ======================================
/home/user/.local/lib/python3.6/site-packages/tensorflow/python/autograph/utils/testing.py:21: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
  import imp
In /home/user/.local/lib/python3.6/site-packages/matplotlib/mpl-data/stylelib/_classic_test.mplstyle: 
The text.latex.preview rcparam was deprecated in Matplotlib 3.3 and will be removed two minor releases later.
In /home/user/.local/lib/python3.6/site-packages/matplotlib/mpl-data/stylelib/_classic_test.mplstyle: 
The mathtext.fallback_to_cm rcparam was deprecated in Matplotlib 3.3 and will be removed two minor releases later.
In /home/user/.local/lib/python3.6/site-packages/matplotlib/mpl-data/stylelib/_classic_test.mplstyle: Support for setting the 'mathtext.fallback_to_cm' rcParam is deprecated since 3.3 and will be removed two minor releases later; use 'mathtext.fallback : 'cm' instead.
In /home/user/.local/lib/python3.6/site-packages/matplotlib/mpl-data/stylelib/_classic_test.mplstyle: 
The validate_bool_maybe_none function was deprecated in Matplotlib 3.3 and will be removed two minor releases later.
In /home/user/.local/lib/python3.6/site-packages/matplotlib/mpl-data/stylelib/_classic_test.mplstyle: 
The savefig.jpeg_quality rcparam was deprecated in Matplotlib 3.3 and will be removed two minor releases later.
In /home/user/.local/lib/python3.6/site-packages/matplotlib/mpl-data/stylelib/_classic_test.mplstyle: 
The keymap.all_axes rcparam was deprecated in Matplotlib 3.3 and will be removed two minor releases later.
In /home/user/.local/lib/python3.6/site-packages/matplotlib/mpl-data/stylelib/_classic_test.mplstyle: 
The animation.avconv_path rcparam was deprecated in Matplotlib 3.3 and will be removed two minor releases later.
In /home/user/.local/lib/python3.6/site-packages/matplotlib/mpl-data/stylelib/_classic_test.mplstyle: 
The animation.avconv_args rcparam was deprecated in Matplotlib 3.3 and will be removed two minor releases later.
Model: "functional_1"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
x (InputLayer)                  [(1, 320, 320, 3)]   0                                            
__________________________________________________________________________________________________
conv2d (Conv2D)                 (1, 320, 320, 64)    1728        x[0][0]                          
__________________________________________________________________________________________________
tf_op_layer_Add (TensorFlowOpLa [(1, 320, 320, 64)]  0           conv2d[0][0]                     
__________________________________________________________________________________________________
tf_op_layer_Relu (TensorFlowOpL [(1, 320, 320, 64)]  0           tf_op_layer_Add[0][0]            
__________________________________________________________________________________________________
conv2d_1 (Conv2D)               (1, 320, 320, 32)    18432       tf_op_layer_Relu[0][0]           
__________________________________________________________________________________________________
tf_op_layer_Add_1 (TensorFlowOp [(1, 320, 320, 32)]  0           conv2d_1[0][0]                   
__________________________________________________________________________________________________
tf_op_layer_Relu_1 (TensorFlowO [(1, 320, 320, 32)]  0           tf_op_layer_Add_1[0][0]          
__________________________________________________________________________________________________
tf_op_layer_MaxPool (TensorFlow [(1, 160, 160, 32)]  0           tf_op_layer_Relu_1[0][0]         
__________________________________________________________________________________________________
conv2d_2 (Conv2D)               (1, 160, 160, 32)    9216        tf_op_layer_MaxPool[0][0]        
__________________________________________________________________________________________________
tf_op_layer_Add_2 (TensorFlowOp [(1, 160, 160, 32)]  0           conv2d_2[0][0]                   
__________________________________________________________________________________________________
tf_op_layer_Relu_2 (TensorFlowO [(1, 160, 160, 32)]  0           tf_op_layer_Add_2[0][0]          
__________________________________________________________________________________________________
tf_op_layer_MaxPool_1 (TensorFl [(1, 80, 80, 32)]    0           tf_op_layer_Relu_2[0][0]         
__________________________________________________________________________________________________
conv2d_3 (Conv2D)               (1, 80, 80, 32)      9216        tf_op_layer_MaxPool_1[0][0]      
__________________________________________________________________________________________________
tf_op_layer_Add_3 (TensorFlowOp [(1, 80, 80, 32)]    0           conv2d_3[0][0]                   
__________________________________________________________________________________________________
tf_op_layer_Relu_3 (TensorFlowO [(1, 80, 80, 32)]    0           tf_op_layer_Add_3[0][0]          
__________________________________________________________________________________________________
tf_op_layer_MaxPool_2 (TensorFl [(1, 40, 40, 32)]    0           tf_op_layer_Relu_3[0][0]         
__________________________________________________________________________________________________
conv2d_4 (Conv2D)               (1, 40, 40, 32)      9216        tf_op_layer_MaxPool_2[0][0]      
__________________________________________________________________________________________________
tf_op_layer_Add_4 (TensorFlowOp [(1, 40, 40, 32)]    0           conv2d_4[0][0]                   
__________________________________________________________________________________________________
tf_op_layer_Relu_4 (TensorFlowO [(1, 40, 40, 32)]    0           tf_op_layer_Add_4[0][0]          
__________________________________________________________________________________________________
tf_op_layer_MaxPool_3 (TensorFl [(1, 20, 20, 32)]    0           tf_op_layer_Relu_4[0][0]         
__________________________________________________________________________________________________
conv2d_5 (Conv2D)               (1, 20, 20, 32)      9216        tf_op_layer_MaxPool_3[0][0]      
__________________________________________________________________________________________________
tf_op_layer_Add_5 (TensorFlowOp [(1, 20, 20, 32)]    0           conv2d_5[0][0]                   
__________________________________________________________________________________________________
tf_op_layer_Relu_5 (TensorFlowO [(1, 20, 20, 32)]    0           tf_op_layer_Add_5[0][0]          
__________________________________________________________________________________________________
tf_op_layer_MaxPool_4 (TensorFl [(1, 10, 10, 32)]    0           tf_op_layer_Relu_5[0][0]         
__________________________________________________________________________________________________
conv2d_6 (Conv2D)               (1, 10, 10, 32)      9216        tf_op_layer_MaxPool_4[0][0]      
__________________________________________________________________________________________________
tf_op_layer_Add_6 (TensorFlowOp [(1, 10, 10, 32)]    0           conv2d_6[0][0]                   
__________________________________________________________________________________________________
tf_op_layer_Relu_6 (TensorFlowO [(1, 10, 10, 32)]    0           tf_op_layer_Add_6[0][0]          
__________________________________________________________________________________________________
conv2d_7 (Conv2D)               (1, 10, 10, 32)      9216        tf_op_layer_Relu_6[0][0]         
__________________________________________________________________________________________________
tf_op_layer_Add_7 (TensorFlowOp [(1, 10, 10, 32)]    0           conv2d_7[0][0]                   
__________________________________________________________________________________________________
tf_op_layer_Relu_7 (TensorFlowO [(1, 10, 10, 32)]    0           tf_op_layer_Add_7[0][0]          
__________________________________________________________________________________________________
tf_op_layer_concat (TensorFlowO [(1, 10, 10, 64)]    0           tf_op_layer_Relu_7[0][0]         
                                                                 tf_op_layer_Relu_6[0][0]         
__________________________________________________________________________________________________
conv2d_8 (Conv2D)               (1, 10, 10, 32)      18432       tf_op_layer_concat[0][0]         
__________________________________________________________________________________________________
tf_op_layer_Add_8 (TensorFlowOp [(1, 10, 10, 32)]    0           conv2d_8[0][0]                   
__________________________________________________________________________________________________
tf_op_layer_Relu_8 (TensorFlowO [(1, 10, 10, 32)]    0           tf_op_layer_Add_8[0][0]          
__________________________________________________________________________________________________
lambda (Lambda)                 (1, 20, 20, 32)      0           tf_op_layer_Relu_8[0][0]         
__________________________________________________________________________________________________
tf_op_layer_concat_1 (TensorFlo [(1, 20, 20, 64)]    0           lambda[0][0]                     
                                                                 tf_op_layer_Relu_5[0][0]         
__________________________________________________________________________________________________
conv2d_9 (Conv2D)               (1, 20, 20, 32)      18432       tf_op_layer_concat_1[0][0]       
__________________________________________________________________________________________________
tf_op_layer_Add_10 (TensorFlowO [(1, 20, 20, 32)]    0           conv2d_9[0][0]                   
__________________________________________________________________________________________________
tf_op_layer_Relu_9 (TensorFlowO [(1, 20, 20, 32)]    0           tf_op_layer_Add_10[0][0]         
__________________________________________________________________________________________________
lambda_1 (Lambda)               (1, 40, 40, 32)      0           tf_op_layer_Relu_9[0][0]         
__________________________________________________________________________________________________
tf_op_layer_concat_2 (TensorFlo [(1, 40, 40, 64)]    0           lambda_1[0][0]                   
                                                                 tf_op_layer_Relu_4[0][0]         
__________________________________________________________________________________________________
conv2d_10 (Conv2D)              (1, 40, 40, 32)      18432       tf_op_layer_concat_2[0][0]       
__________________________________________________________________________________________________
tf_op_layer_Add_12 (TensorFlowO [(1, 40, 40, 32)]    0           conv2d_10[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_10 (TensorFlow [(1, 40, 40, 32)]    0           tf_op_layer_Add_12[0][0]         
__________________________________________________________________________________________________
lambda_2 (Lambda)               (1, 80, 80, 32)      0           tf_op_layer_Relu_10[0][0]        
__________________________________________________________________________________________________
tf_op_layer_concat_3 (TensorFlo [(1, 80, 80, 64)]    0           lambda_2[0][0]                   
                                                                 tf_op_layer_Relu_3[0][0]         
__________________________________________________________________________________________________
conv2d_11 (Conv2D)              (1, 80, 80, 32)      18432       tf_op_layer_concat_3[0][0]       
__________________________________________________________________________________________________
tf_op_layer_Add_14 (TensorFlowO [(1, 80, 80, 32)]    0           conv2d_11[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_11 (TensorFlow [(1, 80, 80, 32)]    0           tf_op_layer_Add_14[0][0]         
__________________________________________________________________________________________________
lambda_3 (Lambda)               (1, 160, 160, 32)    0           tf_op_layer_Relu_11[0][0]        
__________________________________________________________________________________________________
tf_op_layer_concat_4 (TensorFlo [(1, 160, 160, 64)]  0           lambda_3[0][0]                   
                                                                 tf_op_layer_Relu_2[0][0]         
__________________________________________________________________________________________________
conv2d_12 (Conv2D)              (1, 160, 160, 32)    18432       tf_op_layer_concat_4[0][0]       
__________________________________________________________________________________________________
tf_op_layer_Add_16 (TensorFlowO [(1, 160, 160, 32)]  0           conv2d_12[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_12 (TensorFlow [(1, 160, 160, 32)]  0           tf_op_layer_Add_16[0][0]         
__________________________________________________________________________________________________
lambda_4 (Lambda)               (1, 320, 320, 32)    0           tf_op_layer_Relu_12[0][0]        
__________________________________________________________________________________________________
tf_op_layer_concat_5 (TensorFlo [(1, 320, 320, 64)]  0           lambda_4[0][0]                   
                                                                 tf_op_layer_Relu_1[0][0]         
__________________________________________________________________________________________________
conv2d_13 (Conv2D)              (1, 320, 320, 64)    36864       tf_op_layer_concat_5[0][0]       
__________________________________________________________________________________________________
tf_op_layer_Add_18 (TensorFlowO [(1, 320, 320, 64)]  0           conv2d_13[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_13 (TensorFlow [(1, 320, 320, 64)]  0           tf_op_layer_Add_18[0][0]         
__________________________________________________________________________________________________
tf_op_layer_Add_19 (TensorFlowO [(1, 320, 320, 64)]  0           tf_op_layer_Relu_13[0][0]        
                                                                 tf_op_layer_Relu[0][0]           
__________________________________________________________________________________________________
tf_op_layer_MaxPool_5 (TensorFl [(1, 160, 160, 64)]  0           tf_op_layer_Add_19[0][0]         
__________________________________________________________________________________________________
conv2d_14 (Conv2D)              (1, 160, 160, 128)   73728       tf_op_layer_MaxPool_5[0][0]      
__________________________________________________________________________________________________
tf_op_layer_Add_20 (TensorFlowO [(1, 160, 160, 128)] 0           conv2d_14[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_14 (TensorFlow [(1, 160, 160, 128)] 0           tf_op_layer_Add_20[0][0]         
__________________________________________________________________________________________________
conv2d_15 (Conv2D)              (1, 160, 160, 32)    36864       tf_op_layer_Relu_14[0][0]        
__________________________________________________________________________________________________
tf_op_layer_Add_21 (TensorFlowO [(1, 160, 160, 32)]  0           conv2d_15[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_15 (TensorFlow [(1, 160, 160, 32)]  0           tf_op_layer_Add_21[0][0]         
__________________________________________________________________________________________________
tf_op_layer_MaxPool_6 (TensorFl [(1, 80, 80, 32)]    0           tf_op_layer_Relu_15[0][0]        
__________________________________________________________________________________________________
conv2d_16 (Conv2D)              (1, 80, 80, 32)      9216        tf_op_layer_MaxPool_6[0][0]      
__________________________________________________________________________________________________
tf_op_layer_Add_22 (TensorFlowO [(1, 80, 80, 32)]    0           conv2d_16[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_16 (TensorFlow [(1, 80, 80, 32)]    0           tf_op_layer_Add_22[0][0]         
__________________________________________________________________________________________________
tf_op_layer_MaxPool_7 (TensorFl [(1, 40, 40, 32)]    0           tf_op_layer_Relu_16[0][0]        
__________________________________________________________________________________________________
conv2d_17 (Conv2D)              (1, 40, 40, 32)      9216        tf_op_layer_MaxPool_7[0][0]      
__________________________________________________________________________________________________
tf_op_layer_Add_23 (TensorFlowO [(1, 40, 40, 32)]    0           conv2d_17[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_17 (TensorFlow [(1, 40, 40, 32)]    0           tf_op_layer_Add_23[0][0]         
__________________________________________________________________________________________________
tf_op_layer_MaxPool_8 (TensorFl [(1, 20, 20, 32)]    0           tf_op_layer_Relu_17[0][0]        
__________________________________________________________________________________________________
conv2d_18 (Conv2D)              (1, 20, 20, 32)      9216        tf_op_layer_MaxPool_8[0][0]      
__________________________________________________________________________________________________
tf_op_layer_Add_24 (TensorFlowO [(1, 20, 20, 32)]    0           conv2d_18[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_18 (TensorFlow [(1, 20, 20, 32)]    0           tf_op_layer_Add_24[0][0]         
__________________________________________________________________________________________________
tf_op_layer_MaxPool_9 (TensorFl [(1, 10, 10, 32)]    0           tf_op_layer_Relu_18[0][0]        
__________________________________________________________________________________________________
conv2d_19 (Conv2D)              (1, 10, 10, 32)      9216        tf_op_layer_MaxPool_9[0][0]      
__________________________________________________________________________________________________
tf_op_layer_Add_25 (TensorFlowO [(1, 10, 10, 32)]    0           conv2d_19[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_19 (TensorFlow [(1, 10, 10, 32)]    0           tf_op_layer_Add_25[0][0]         
__________________________________________________________________________________________________
conv2d_20 (Conv2D)              (1, 10, 10, 32)      9216        tf_op_layer_Relu_19[0][0]        
__________________________________________________________________________________________________
tf_op_layer_Add_26 (TensorFlowO [(1, 10, 10, 32)]    0           conv2d_20[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_20 (TensorFlow [(1, 10, 10, 32)]    0           tf_op_layer_Add_26[0][0]         
__________________________________________________________________________________________________
tf_op_layer_concat_6 (TensorFlo [(1, 10, 10, 64)]    0           tf_op_layer_Relu_20[0][0]        
                                                                 tf_op_layer_Relu_19[0][0]        
__________________________________________________________________________________________________
conv2d_21 (Conv2D)              (1, 10, 10, 32)      18432       tf_op_layer_concat_6[0][0]       
__________________________________________________________________________________________________
tf_op_layer_Add_27 (TensorFlowO [(1, 10, 10, 32)]    0           conv2d_21[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_21 (TensorFlow [(1, 10, 10, 32)]    0           tf_op_layer_Add_27[0][0]         
__________________________________________________________________________________________________
lambda_5 (Lambda)               (1, 20, 20, 32)      0           tf_op_layer_Relu_21[0][0]        
__________________________________________________________________________________________________
tf_op_layer_concat_7 (TensorFlo [(1, 20, 20, 64)]    0           lambda_5[0][0]                   
                                                                 tf_op_layer_Relu_18[0][0]        
__________________________________________________________________________________________________
conv2d_22 (Conv2D)              (1, 20, 20, 32)      18432       tf_op_layer_concat_7[0][0]       
__________________________________________________________________________________________________
tf_op_layer_Add_29 (TensorFlowO [(1, 20, 20, 32)]    0           conv2d_22[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_22 (TensorFlow [(1, 20, 20, 32)]    0           tf_op_layer_Add_29[0][0]         
__________________________________________________________________________________________________
lambda_6 (Lambda)               (1, 40, 40, 32)      0           tf_op_layer_Relu_22[0][0]        
__________________________________________________________________________________________________
tf_op_layer_concat_8 (TensorFlo [(1, 40, 40, 64)]    0           lambda_6[0][0]                   
                                                                 tf_op_layer_Relu_17[0][0]        
__________________________________________________________________________________________________
conv2d_23 (Conv2D)              (1, 40, 40, 32)      18432       tf_op_layer_concat_8[0][0]       
__________________________________________________________________________________________________
tf_op_layer_Add_31 (TensorFlowO [(1, 40, 40, 32)]    0           conv2d_23[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_23 (TensorFlow [(1, 40, 40, 32)]    0           tf_op_layer_Add_31[0][0]         
__________________________________________________________________________________________________
lambda_7 (Lambda)               (1, 80, 80, 32)      0           tf_op_layer_Relu_23[0][0]        
__________________________________________________________________________________________________
tf_op_layer_concat_9 (TensorFlo [(1, 80, 80, 64)]    0           lambda_7[0][0]                   
                                                                 tf_op_layer_Relu_16[0][0]        
__________________________________________________________________________________________________
conv2d_24 (Conv2D)              (1, 80, 80, 32)      18432       tf_op_layer_concat_9[0][0]       
__________________________________________________________________________________________________
tf_op_layer_Add_33 (TensorFlowO [(1, 80, 80, 32)]    0           conv2d_24[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_24 (TensorFlow [(1, 80, 80, 32)]    0           tf_op_layer_Add_33[0][0]         
__________________________________________________________________________________________________
lambda_8 (Lambda)               (1, 160, 160, 32)    0           tf_op_layer_Relu_24[0][0]        
__________________________________________________________________________________________________
tf_op_layer_concat_10 (TensorFl [(1, 160, 160, 64)]  0           lambda_8[0][0]                   
                                                                 tf_op_layer_Relu_15[0][0]        
__________________________________________________________________________________________________
conv2d_25 (Conv2D)              (1, 160, 160, 128)   73728       tf_op_layer_concat_10[0][0]      
__________________________________________________________________________________________________
tf_op_layer_Add_35 (TensorFlowO [(1, 160, 160, 128)] 0           conv2d_25[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_25 (TensorFlow [(1, 160, 160, 128)] 0           tf_op_layer_Add_35[0][0]         
__________________________________________________________________________________________________
tf_op_layer_Add_36 (TensorFlowO [(1, 160, 160, 128)] 0           tf_op_layer_Relu_25[0][0]        
                                                                 tf_op_layer_Relu_14[0][0]        
__________________________________________________________________________________________________
tf_op_layer_MaxPool_10 (TensorF [(1, 80, 80, 128)]   0           tf_op_layer_Add_36[0][0]         
__________________________________________________________________________________________________
conv2d_26 (Conv2D)              (1, 80, 80, 256)     294912      tf_op_layer_MaxPool_10[0][0]     
__________________________________________________________________________________________________
tf_op_layer_Add_37 (TensorFlowO [(1, 80, 80, 256)]   0           conv2d_26[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_26 (TensorFlow [(1, 80, 80, 256)]   0           tf_op_layer_Add_37[0][0]         
__________________________________________________________________________________________________
conv2d_27 (Conv2D)              (1, 80, 80, 64)      147456      tf_op_layer_Relu_26[0][0]        
__________________________________________________________________________________________________
tf_op_layer_Add_38 (TensorFlowO [(1, 80, 80, 64)]    0           conv2d_27[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_27 (TensorFlow [(1, 80, 80, 64)]    0           tf_op_layer_Add_38[0][0]         
__________________________________________________________________________________________________
tf_op_layer_MaxPool_11 (TensorF [(1, 40, 40, 64)]    0           tf_op_layer_Relu_27[0][0]        
__________________________________________________________________________________________________
conv2d_28 (Conv2D)              (1, 40, 40, 64)      36864       tf_op_layer_MaxPool_11[0][0]     
__________________________________________________________________________________________________
tf_op_layer_Add_39 (TensorFlowO [(1, 40, 40, 64)]    0           conv2d_28[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_28 (TensorFlow [(1, 40, 40, 64)]    0           tf_op_layer_Add_39[0][0]         
__________________________________________________________________________________________________
tf_op_layer_MaxPool_12 (TensorF [(1, 20, 20, 64)]    0           tf_op_layer_Relu_28[0][0]        
__________________________________________________________________________________________________
conv2d_29 (Conv2D)              (1, 20, 20, 64)      36864       tf_op_layer_MaxPool_12[0][0]     
__________________________________________________________________________________________________
tf_op_layer_Add_40 (TensorFlowO [(1, 20, 20, 64)]    0           conv2d_29[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_29 (TensorFlow [(1, 20, 20, 64)]    0           tf_op_layer_Add_40[0][0]         
__________________________________________________________________________________________________
tf_op_layer_MaxPool_13 (TensorF [(1, 10, 10, 64)]    0           tf_op_layer_Relu_29[0][0]        
__________________________________________________________________________________________________
conv2d_30 (Conv2D)              (1, 10, 10, 64)      36864       tf_op_layer_MaxPool_13[0][0]     
__________________________________________________________________________________________________
tf_op_layer_Add_41 (TensorFlowO [(1, 10, 10, 64)]    0           conv2d_30[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_30 (TensorFlow [(1, 10, 10, 64)]    0           tf_op_layer_Add_41[0][0]         
__________________________________________________________________________________________________
conv2d_31 (Conv2D)              (1, 10, 10, 64)      36864       tf_op_layer_Relu_30[0][0]        
__________________________________________________________________________________________________
tf_op_layer_Add_42 (TensorFlowO [(1, 10, 10, 64)]    0           conv2d_31[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_31 (TensorFlow [(1, 10, 10, 64)]    0           tf_op_layer_Add_42[0][0]         
__________________________________________________________________________________________________
tf_op_layer_concat_11 (TensorFl [(1, 10, 10, 128)]   0           tf_op_layer_Relu_31[0][0]        
                                                                 tf_op_layer_Relu_30[0][0]        
__________________________________________________________________________________________________
conv2d_32 (Conv2D)              (1, 10, 10, 64)      73728       tf_op_layer_concat_11[0][0]      
__________________________________________________________________________________________________
tf_op_layer_Add_43 (TensorFlowO [(1, 10, 10, 64)]    0           conv2d_32[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_32 (TensorFlow [(1, 10, 10, 64)]    0           tf_op_layer_Add_43[0][0]         
__________________________________________________________________________________________________
lambda_9 (Lambda)               (1, 20, 20, 64)      0           tf_op_layer_Relu_32[0][0]        
__________________________________________________________________________________________________
tf_op_layer_concat_12 (TensorFl [(1, 20, 20, 128)]   0           lambda_9[0][0]                   
                                                                 tf_op_layer_Relu_29[0][0]        
__________________________________________________________________________________________________
conv2d_33 (Conv2D)              (1, 20, 20, 64)      73728       tf_op_layer_concat_12[0][0]      
__________________________________________________________________________________________________
tf_op_layer_Add_45 (TensorFlowO [(1, 20, 20, 64)]    0           conv2d_33[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_33 (TensorFlow [(1, 20, 20, 64)]    0           tf_op_layer_Add_45[0][0]         
__________________________________________________________________________________________________
lambda_10 (Lambda)              (1, 40, 40, 64)      0           tf_op_layer_Relu_33[0][0]        
__________________________________________________________________________________________________
tf_op_layer_concat_13 (TensorFl [(1, 40, 40, 128)]   0           lambda_10[0][0]                  
                                                                 tf_op_layer_Relu_28[0][0]        
__________________________________________________________________________________________________
conv2d_34 (Conv2D)              (1, 40, 40, 64)      73728       tf_op_layer_concat_13[0][0]      
__________________________________________________________________________________________________
tf_op_layer_Add_47 (TensorFlowO [(1, 40, 40, 64)]    0           conv2d_34[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_34 (TensorFlow [(1, 40, 40, 64)]    0           tf_op_layer_Add_47[0][0]         
__________________________________________________________________________________________________
lambda_11 (Lambda)              (1, 80, 80, 64)      0           tf_op_layer_Relu_34[0][0]        
__________________________________________________________________________________________________
tf_op_layer_concat_14 (TensorFl [(1, 80, 80, 128)]   0           lambda_11[0][0]                  
                                                                 tf_op_layer_Relu_27[0][0]        
__________________________________________________________________________________________________
conv2d_35 (Conv2D)              (1, 80, 80, 256)     294912      tf_op_layer_concat_14[0][0]      
__________________________________________________________________________________________________
tf_op_layer_Add_49 (TensorFlowO [(1, 80, 80, 256)]   0           conv2d_35[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_35 (TensorFlow [(1, 80, 80, 256)]   0           tf_op_layer_Add_49[0][0]         
__________________________________________________________________________________________________
tf_op_layer_Add_50 (TensorFlowO [(1, 80, 80, 256)]   0           tf_op_layer_Relu_35[0][0]        
                                                                 tf_op_layer_Relu_26[0][0]        
__________________________________________________________________________________________________
tf_op_layer_MaxPool_14 (TensorF [(1, 40, 40, 256)]   0           tf_op_layer_Add_50[0][0]         
__________________________________________________________________________________________________
conv2d_36 (Conv2D)              (1, 40, 40, 512)     1179648     tf_op_layer_MaxPool_14[0][0]     
__________________________________________________________________________________________________
tf_op_layer_Add_51 (TensorFlowO [(1, 40, 40, 512)]   0           conv2d_36[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_36 (TensorFlow [(1, 40, 40, 512)]   0           tf_op_layer_Add_51[0][0]         
__________________________________________________________________________________________________
conv2d_37 (Conv2D)              (1, 40, 40, 128)     589824      tf_op_layer_Relu_36[0][0]        
__________________________________________________________________________________________________
tf_op_layer_Add_52 (TensorFlowO [(1, 40, 40, 128)]   0           conv2d_37[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_37 (TensorFlow [(1, 40, 40, 128)]   0           tf_op_layer_Add_52[0][0]         
__________________________________________________________________________________________________
tf_op_layer_MaxPool_15 (TensorF [(1, 20, 20, 128)]   0           tf_op_layer_Relu_37[0][0]        
__________________________________________________________________________________________________
conv2d_38 (Conv2D)              (1, 20, 20, 128)     147456      tf_op_layer_MaxPool_15[0][0]     
__________________________________________________________________________________________________
tf_op_layer_Add_53 (TensorFlowO [(1, 20, 20, 128)]   0           conv2d_38[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_38 (TensorFlow [(1, 20, 20, 128)]   0           tf_op_layer_Add_53[0][0]         
__________________________________________________________________________________________________
tf_op_layer_MaxPool_16 (TensorF [(1, 10, 10, 128)]   0           tf_op_layer_Relu_38[0][0]        
__________________________________________________________________________________________________
conv2d_39 (Conv2D)              (1, 10, 10, 128)     147456      tf_op_layer_MaxPool_16[0][0]     
__________________________________________________________________________________________________
tf_op_layer_Add_54 (TensorFlowO [(1, 10, 10, 128)]   0           conv2d_39[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_39 (TensorFlow [(1, 10, 10, 128)]   0           tf_op_layer_Add_54[0][0]         
__________________________________________________________________________________________________
conv2d_40 (Conv2D)              (1, 10, 10, 128)     147456      tf_op_layer_Relu_39[0][0]        
__________________________________________________________________________________________________
tf_op_layer_Add_55 (TensorFlowO [(1, 10, 10, 128)]   0           conv2d_40[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_40 (TensorFlow [(1, 10, 10, 128)]   0           tf_op_layer_Add_55[0][0]         
__________________________________________________________________________________________________
tf_op_layer_concat_15 (TensorFl [(1, 10, 10, 256)]   0           tf_op_layer_Relu_40[0][0]        
                                                                 tf_op_layer_Relu_39[0][0]        
__________________________________________________________________________________________________
conv2d_41 (Conv2D)              (1, 10, 10, 128)     294912      tf_op_layer_concat_15[0][0]      
__________________________________________________________________________________________________
tf_op_layer_Add_56 (TensorFlowO [(1, 10, 10, 128)]   0           conv2d_41[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_41 (TensorFlow [(1, 10, 10, 128)]   0           tf_op_layer_Add_56[0][0]         
__________________________________________________________________________________________________
lambda_12 (Lambda)              (1, 20, 20, 128)     0           tf_op_layer_Relu_41[0][0]        
__________________________________________________________________________________________________
tf_op_layer_concat_16 (TensorFl [(1, 20, 20, 256)]   0           lambda_12[0][0]                  
                                                                 tf_op_layer_Relu_38[0][0]        
__________________________________________________________________________________________________
conv2d_42 (Conv2D)              (1, 20, 20, 128)     294912      tf_op_layer_concat_16[0][0]      
__________________________________________________________________________________________________
tf_op_layer_Add_58 (TensorFlowO [(1, 20, 20, 128)]   0           conv2d_42[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_42 (TensorFlow [(1, 20, 20, 128)]   0           tf_op_layer_Add_58[0][0]         
__________________________________________________________________________________________________
lambda_13 (Lambda)              (1, 40, 40, 128)     0           tf_op_layer_Relu_42[0][0]        
__________________________________________________________________________________________________
tf_op_layer_concat_17 (TensorFl [(1, 40, 40, 256)]   0           lambda_13[0][0]                  
                                                                 tf_op_layer_Relu_37[0][0]        
__________________________________________________________________________________________________
conv2d_43 (Conv2D)              (1, 40, 40, 512)     1179648     tf_op_layer_concat_17[0][0]      
__________________________________________________________________________________________________
tf_op_layer_Add_60 (TensorFlowO [(1, 40, 40, 512)]   0           conv2d_43[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_43 (TensorFlow [(1, 40, 40, 512)]   0           tf_op_layer_Add_60[0][0]         
__________________________________________________________________________________________________
tf_op_layer_Add_61 (TensorFlowO [(1, 40, 40, 512)]   0           tf_op_layer_Relu_43[0][0]        
                                                                 tf_op_layer_Relu_36[0][0]        
__________________________________________________________________________________________________
tf_op_layer_MaxPool_17 (TensorF [(1, 20, 20, 512)]   0           tf_op_layer_Add_61[0][0]         
__________________________________________________________________________________________________
conv2d_44 (Conv2D)              (1, 20, 20, 512)     2359296     tf_op_layer_MaxPool_17[0][0]     
__________________________________________________________________________________________________
tf_op_layer_Add_62 (TensorFlowO [(1, 20, 20, 512)]   0           conv2d_44[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_44 (TensorFlow [(1, 20, 20, 512)]   0           tf_op_layer_Add_62[0][0]         
__________________________________________________________________________________________________
conv2d_45 (Conv2D)              (1, 20, 20, 256)     1179648     tf_op_layer_Relu_44[0][0]        
__________________________________________________________________________________________________
tf_op_layer_Add_63 (TensorFlowO [(1, 20, 20, 256)]   0           conv2d_45[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_45 (TensorFlow [(1, 20, 20, 256)]   0           tf_op_layer_Add_63[0][0]         
__________________________________________________________________________________________________
conv2d_46 (Conv2D)              (1, 20, 20, 256)     589824      tf_op_layer_Relu_45[0][0]        
__________________________________________________________________________________________________
tf_op_layer_Add_64 (TensorFlowO [(1, 20, 20, 256)]   0           conv2d_46[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_46 (TensorFlow [(1, 20, 20, 256)]   0           tf_op_layer_Add_64[0][0]         
__________________________________________________________________________________________________
conv2d_47 (Conv2D)              (1, 20, 20, 256)     589824      tf_op_layer_Relu_46[0][0]        
__________________________________________________________________________________________________
tf_op_layer_Add_65 (TensorFlowO [(1, 20, 20, 256)]   0           conv2d_47[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_47 (TensorFlow [(1, 20, 20, 256)]   0           tf_op_layer_Add_65[0][0]         
__________________________________________________________________________________________________
conv2d_48 (Conv2D)              (1, 20, 20, 256)     589824      tf_op_layer_Relu_47[0][0]        
__________________________________________________________________________________________________
tf_op_layer_Add_66 (TensorFlowO [(1, 20, 20, 256)]   0           conv2d_48[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_48 (TensorFlow [(1, 20, 20, 256)]   0           tf_op_layer_Add_66[0][0]         
__________________________________________________________________________________________________
tf_op_layer_concat_18 (TensorFl [(1, 20, 20, 512)]   0           tf_op_layer_Relu_48[0][0]        
                                                                 tf_op_layer_Relu_47[0][0]        
__________________________________________________________________________________________________
conv2d_49 (Conv2D)              (1, 20, 20, 256)     1179648     tf_op_layer_concat_18[0][0]      
__________________________________________________________________________________________________
tf_op_layer_Add_67 (TensorFlowO [(1, 20, 20, 256)]   0           conv2d_49[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_49 (TensorFlow [(1, 20, 20, 256)]   0           tf_op_layer_Add_67[0][0]         
__________________________________________________________________________________________________
tf_op_layer_concat_19 (TensorFl [(1, 20, 20, 512)]   0           tf_op_layer_Relu_49[0][0]        
                                                                 tf_op_layer_Relu_46[0][0]        
__________________________________________________________________________________________________
conv2d_50 (Conv2D)              (1, 20, 20, 256)     1179648     tf_op_layer_concat_19[0][0]      
__________________________________________________________________________________________________
tf_op_layer_Add_68 (TensorFlowO [(1, 20, 20, 256)]   0           conv2d_50[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_50 (TensorFlow [(1, 20, 20, 256)]   0           tf_op_layer_Add_68[0][0]         
__________________________________________________________________________________________________
tf_op_layer_concat_20 (TensorFl [(1, 20, 20, 512)]   0           tf_op_layer_Relu_50[0][0]        
                                                                 tf_op_layer_Relu_45[0][0]        
__________________________________________________________________________________________________
conv2d_51 (Conv2D)              (1, 20, 20, 512)     2359296     tf_op_layer_concat_20[0][0]      
__________________________________________________________________________________________________
tf_op_layer_Add_69 (TensorFlowO [(1, 20, 20, 512)]   0           conv2d_51[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_51 (TensorFlow [(1, 20, 20, 512)]   0           tf_op_layer_Add_69[0][0]         
__________________________________________________________________________________________________
tf_op_layer_Add_70 (TensorFlowO [(1, 20, 20, 512)]   0           tf_op_layer_Relu_51[0][0]        
                                                                 tf_op_layer_Relu_44[0][0]        
__________________________________________________________________________________________________
tf_op_layer_MaxPool_18 (TensorF [(1, 10, 10, 512)]   0           tf_op_layer_Add_70[0][0]         
__________________________________________________________________________________________________
conv2d_52 (Conv2D)              (1, 10, 10, 512)     2359296     tf_op_layer_MaxPool_18[0][0]     
__________________________________________________________________________________________________
tf_op_layer_Add_71 (TensorFlowO [(1, 10, 10, 512)]   0           conv2d_52[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_52 (TensorFlow [(1, 10, 10, 512)]   0           tf_op_layer_Add_71[0][0]         
__________________________________________________________________________________________________
conv2d_53 (Conv2D)              (1, 10, 10, 256)     1179648     tf_op_layer_Relu_52[0][0]        
__________________________________________________________________________________________________
tf_op_layer_Add_72 (TensorFlowO [(1, 10, 10, 256)]   0           conv2d_53[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_53 (TensorFlow [(1, 10, 10, 256)]   0           tf_op_layer_Add_72[0][0]         
__________________________________________________________________________________________________
conv2d_54 (Conv2D)              (1, 10, 10, 256)     589824      tf_op_layer_Relu_53[0][0]        
__________________________________________________________________________________________________
tf_op_layer_Add_73 (TensorFlowO [(1, 10, 10, 256)]   0           conv2d_54[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_54 (TensorFlow [(1, 10, 10, 256)]   0           tf_op_layer_Add_73[0][0]         
__________________________________________________________________________________________________
conv2d_55 (Conv2D)              (1, 10, 10, 256)     589824      tf_op_layer_Relu_54[0][0]        
__________________________________________________________________________________________________
tf_op_layer_Add_74 (TensorFlowO [(1, 10, 10, 256)]   0           conv2d_55[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_55 (TensorFlow [(1, 10, 10, 256)]   0           tf_op_layer_Add_74[0][0]         
__________________________________________________________________________________________________
conv2d_56 (Conv2D)              (1, 10, 10, 256)     589824      tf_op_layer_Relu_55[0][0]        
__________________________________________________________________________________________________
tf_op_layer_Add_75 (TensorFlowO [(1, 10, 10, 256)]   0           conv2d_56[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_56 (TensorFlow [(1, 10, 10, 256)]   0           tf_op_layer_Add_75[0][0]         
__________________________________________________________________________________________________
tf_op_layer_concat_21 (TensorFl [(1, 10, 10, 512)]   0           tf_op_layer_Relu_56[0][0]        
                                                                 tf_op_layer_Relu_55[0][0]        
__________________________________________________________________________________________________
conv2d_57 (Conv2D)              (1, 10, 10, 256)     1179648     tf_op_layer_concat_21[0][0]      
__________________________________________________________________________________________________
tf_op_layer_Add_76 (TensorFlowO [(1, 10, 10, 256)]   0           conv2d_57[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_57 (TensorFlow [(1, 10, 10, 256)]   0           tf_op_layer_Add_76[0][0]         
__________________________________________________________________________________________________
tf_op_layer_concat_22 (TensorFl [(1, 10, 10, 512)]   0           tf_op_layer_Relu_57[0][0]        
                                                                 tf_op_layer_Relu_54[0][0]        
__________________________________________________________________________________________________
conv2d_58 (Conv2D)              (1, 10, 10, 256)     1179648     tf_op_layer_concat_22[0][0]      
__________________________________________________________________________________________________
tf_op_layer_Add_77 (TensorFlowO [(1, 10, 10, 256)]   0           conv2d_58[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_58 (TensorFlow [(1, 10, 10, 256)]   0           tf_op_layer_Add_77[0][0]         
__________________________________________________________________________________________________
tf_op_layer_concat_23 (TensorFl [(1, 10, 10, 512)]   0           tf_op_layer_Relu_58[0][0]        
                                                                 tf_op_layer_Relu_53[0][0]        
__________________________________________________________________________________________________
conv2d_59 (Conv2D)              (1, 10, 10, 512)     2359296     tf_op_layer_concat_23[0][0]      
__________________________________________________________________________________________________
tf_op_layer_Add_78 (TensorFlowO [(1, 10, 10, 512)]   0           conv2d_59[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_59 (TensorFlow [(1, 10, 10, 512)]   0           tf_op_layer_Add_78[0][0]         
__________________________________________________________________________________________________
tf_op_layer_Add_79 (TensorFlowO [(1, 10, 10, 512)]   0           tf_op_layer_Relu_59[0][0]        
                                                                 tf_op_layer_Relu_52[0][0]        
__________________________________________________________________________________________________
lambda_14 (Lambda)              (1, 20, 20, 512)     0           tf_op_layer_Add_79[0][0]         
__________________________________________________________________________________________________
tf_op_layer_concat_24 (TensorFl [(1, 20, 20, 1024)]  0           lambda_14[0][0]                  
                                                                 tf_op_layer_Add_70[0][0]         
__________________________________________________________________________________________________
conv2d_60 (Conv2D)              (1, 20, 20, 512)     4718592     tf_op_layer_concat_24[0][0]      
__________________________________________________________________________________________________
tf_op_layer_Add_81 (TensorFlowO [(1, 20, 20, 512)]   0           conv2d_60[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_60 (TensorFlow [(1, 20, 20, 512)]   0           tf_op_layer_Add_81[0][0]         
__________________________________________________________________________________________________
conv2d_61 (Conv2D)              (1, 20, 20, 256)     1179648     tf_op_layer_Relu_60[0][0]        
__________________________________________________________________________________________________
tf_op_layer_Add_82 (TensorFlowO [(1, 20, 20, 256)]   0           conv2d_61[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_61 (TensorFlow [(1, 20, 20, 256)]   0           tf_op_layer_Add_82[0][0]         
__________________________________________________________________________________________________
conv2d_62 (Conv2D)              (1, 20, 20, 256)     589824      tf_op_layer_Relu_61[0][0]        
__________________________________________________________________________________________________
tf_op_layer_Add_83 (TensorFlowO [(1, 20, 20, 256)]   0           conv2d_62[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_62 (TensorFlow [(1, 20, 20, 256)]   0           tf_op_layer_Add_83[0][0]         
__________________________________________________________________________________________________
conv2d_63 (Conv2D)              (1, 20, 20, 256)     589824      tf_op_layer_Relu_62[0][0]        
__________________________________________________________________________________________________
tf_op_layer_Add_84 (TensorFlowO [(1, 20, 20, 256)]   0           conv2d_63[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_63 (TensorFlow [(1, 20, 20, 256)]   0           tf_op_layer_Add_84[0][0]         
__________________________________________________________________________________________________
conv2d_64 (Conv2D)              (1, 20, 20, 256)     589824      tf_op_layer_Relu_63[0][0]        
__________________________________________________________________________________________________
tf_op_layer_Add_85 (TensorFlowO [(1, 20, 20, 256)]   0           conv2d_64[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_64 (TensorFlow [(1, 20, 20, 256)]   0           tf_op_layer_Add_85[0][0]         
__________________________________________________________________________________________________
tf_op_layer_concat_25 (TensorFl [(1, 20, 20, 512)]   0           tf_op_layer_Relu_64[0][0]        
                                                                 tf_op_layer_Relu_63[0][0]        
__________________________________________________________________________________________________
conv2d_65 (Conv2D)              (1, 20, 20, 256)     1179648     tf_op_layer_concat_25[0][0]      
__________________________________________________________________________________________________
tf_op_layer_Add_86 (TensorFlowO [(1, 20, 20, 256)]   0           conv2d_65[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_65 (TensorFlow [(1, 20, 20, 256)]   0           tf_op_layer_Add_86[0][0]         
__________________________________________________________________________________________________
tf_op_layer_concat_26 (TensorFl [(1, 20, 20, 512)]   0           tf_op_layer_Relu_65[0][0]        
                                                                 tf_op_layer_Relu_62[0][0]        
__________________________________________________________________________________________________
conv2d_66 (Conv2D)              (1, 20, 20, 256)     1179648     tf_op_layer_concat_26[0][0]      
__________________________________________________________________________________________________
tf_op_layer_Add_87 (TensorFlowO [(1, 20, 20, 256)]   0           conv2d_66[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_66 (TensorFlow [(1, 20, 20, 256)]   0           tf_op_layer_Add_87[0][0]         
__________________________________________________________________________________________________
tf_op_layer_concat_27 (TensorFl [(1, 20, 20, 512)]   0           tf_op_layer_Relu_66[0][0]        
                                                                 tf_op_layer_Relu_61[0][0]        
__________________________________________________________________________________________________
conv2d_67 (Conv2D)              (1, 20, 20, 512)     2359296     tf_op_layer_concat_27[0][0]      
__________________________________________________________________________________________________
tf_op_layer_Add_88 (TensorFlowO [(1, 20, 20, 512)]   0           conv2d_67[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_67 (TensorFlow [(1, 20, 20, 512)]   0           tf_op_layer_Add_88[0][0]         
__________________________________________________________________________________________________
tf_op_layer_Add_89 (TensorFlowO [(1, 20, 20, 512)]   0           tf_op_layer_Relu_67[0][0]        
                                                                 tf_op_layer_Relu_60[0][0]        
__________________________________________________________________________________________________
lambda_15 (Lambda)              (1, 40, 40, 512)     0           tf_op_layer_Add_89[0][0]         
__________________________________________________________________________________________________
tf_op_layer_concat_28 (TensorFl [(1, 40, 40, 1024)]  0           lambda_15[0][0]                  
                                                                 tf_op_layer_Add_61[0][0]         
__________________________________________________________________________________________________
conv2d_68 (Conv2D)              (1, 40, 40, 256)     2359296     tf_op_layer_concat_28[0][0]      
__________________________________________________________________________________________________
tf_op_layer_Add_91 (TensorFlowO [(1, 40, 40, 256)]   0           conv2d_68[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_68 (TensorFlow [(1, 40, 40, 256)]   0           tf_op_layer_Add_91[0][0]         
__________________________________________________________________________________________________
conv2d_69 (Conv2D)              (1, 40, 40, 128)     294912      tf_op_layer_Relu_68[0][0]        
__________________________________________________________________________________________________
tf_op_layer_Add_92 (TensorFlowO [(1, 40, 40, 128)]   0           conv2d_69[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_69 (TensorFlow [(1, 40, 40, 128)]   0           tf_op_layer_Add_92[0][0]         
__________________________________________________________________________________________________
tf_op_layer_MaxPool_19 (TensorF [(1, 20, 20, 128)]   0           tf_op_layer_Relu_69[0][0]        
__________________________________________________________________________________________________
conv2d_70 (Conv2D)              (1, 20, 20, 128)     147456      tf_op_layer_MaxPool_19[0][0]     
__________________________________________________________________________________________________
tf_op_layer_Add_93 (TensorFlowO [(1, 20, 20, 128)]   0           conv2d_70[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_70 (TensorFlow [(1, 20, 20, 128)]   0           tf_op_layer_Add_93[0][0]         
__________________________________________________________________________________________________
tf_op_layer_MaxPool_20 (TensorF [(1, 10, 10, 128)]   0           tf_op_layer_Relu_70[0][0]        
__________________________________________________________________________________________________
conv2d_71 (Conv2D)              (1, 10, 10, 128)     147456      tf_op_layer_MaxPool_20[0][0]     
__________________________________________________________________________________________________
tf_op_layer_Add_94 (TensorFlowO [(1, 10, 10, 128)]   0           conv2d_71[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_71 (TensorFlow [(1, 10, 10, 128)]   0           tf_op_layer_Add_94[0][0]         
__________________________________________________________________________________________________
conv2d_72 (Conv2D)              (1, 10, 10, 128)     147456      tf_op_layer_Relu_71[0][0]        
__________________________________________________________________________________________________
tf_op_layer_Add_95 (TensorFlowO [(1, 10, 10, 128)]   0           conv2d_72[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_72 (TensorFlow [(1, 10, 10, 128)]   0           tf_op_layer_Add_95[0][0]         
__________________________________________________________________________________________________
tf_op_layer_concat_29 (TensorFl [(1, 10, 10, 256)]   0           tf_op_layer_Relu_72[0][0]        
                                                                 tf_op_layer_Relu_71[0][0]        
__________________________________________________________________________________________________
conv2d_73 (Conv2D)              (1, 10, 10, 128)     294912      tf_op_layer_concat_29[0][0]      
__________________________________________________________________________________________________
tf_op_layer_Add_96 (TensorFlowO [(1, 10, 10, 128)]   0           conv2d_73[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_73 (TensorFlow [(1, 10, 10, 128)]   0           tf_op_layer_Add_96[0][0]         
__________________________________________________________________________________________________
lambda_16 (Lambda)              (1, 20, 20, 128)     0           tf_op_layer_Relu_73[0][0]        
__________________________________________________________________________________________________
tf_op_layer_concat_30 (TensorFl [(1, 20, 20, 256)]   0           lambda_16[0][0]                  
                                                                 tf_op_layer_Relu_70[0][0]        
__________________________________________________________________________________________________
conv2d_74 (Conv2D)              (1, 20, 20, 128)     294912      tf_op_layer_concat_30[0][0]      
__________________________________________________________________________________________________
tf_op_layer_Add_98 (TensorFlowO [(1, 20, 20, 128)]   0           conv2d_74[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_74 (TensorFlow [(1, 20, 20, 128)]   0           tf_op_layer_Add_98[0][0]         
__________________________________________________________________________________________________
lambda_17 (Lambda)              (1, 40, 40, 128)     0           tf_op_layer_Relu_74[0][0]        
__________________________________________________________________________________________________
tf_op_layer_concat_31 (TensorFl [(1, 40, 40, 256)]   0           lambda_17[0][0]                  
                                                                 tf_op_layer_Relu_69[0][0]        
__________________________________________________________________________________________________
conv2d_75 (Conv2D)              (1, 40, 40, 256)     589824      tf_op_layer_concat_31[0][0]      
__________________________________________________________________________________________________
tf_op_layer_Add_100 (TensorFlow [(1, 40, 40, 256)]   0           conv2d_75[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_75 (TensorFlow [(1, 40, 40, 256)]   0           tf_op_layer_Add_100[0][0]        
__________________________________________________________________________________________________
tf_op_layer_Add_101 (TensorFlow [(1, 40, 40, 256)]   0           tf_op_layer_Relu_75[0][0]        
                                                                 tf_op_layer_Relu_68[0][0]        
__________________________________________________________________________________________________
lambda_18 (Lambda)              (1, 80, 80, 256)     0           tf_op_layer_Add_101[0][0]        
__________________________________________________________________________________________________
tf_op_layer_concat_32 (TensorFl [(1, 80, 80, 512)]   0           lambda_18[0][0]                  
                                                                 tf_op_layer_Add_50[0][0]         
__________________________________________________________________________________________________
conv2d_76 (Conv2D)              (1, 80, 80, 128)     589824      tf_op_layer_concat_32[0][0]      
__________________________________________________________________________________________________
tf_op_layer_Add_103 (TensorFlow [(1, 80, 80, 128)]   0           conv2d_76[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_76 (TensorFlow [(1, 80, 80, 128)]   0           tf_op_layer_Add_103[0][0]        
__________________________________________________________________________________________________
conv2d_77 (Conv2D)              (1, 80, 80, 64)      73728       tf_op_layer_Relu_76[0][0]        
__________________________________________________________________________________________________
tf_op_layer_Add_104 (TensorFlow [(1, 80, 80, 64)]    0           conv2d_77[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_77 (TensorFlow [(1, 80, 80, 64)]    0           tf_op_layer_Add_104[0][0]        
__________________________________________________________________________________________________
tf_op_layer_MaxPool_21 (TensorF [(1, 40, 40, 64)]    0           tf_op_layer_Relu_77[0][0]        
__________________________________________________________________________________________________
conv2d_78 (Conv2D)              (1, 40, 40, 64)      36864       tf_op_layer_MaxPool_21[0][0]     
__________________________________________________________________________________________________
tf_op_layer_Add_105 (TensorFlow [(1, 40, 40, 64)]    0           conv2d_78[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_78 (TensorFlow [(1, 40, 40, 64)]    0           tf_op_layer_Add_105[0][0]        
__________________________________________________________________________________________________
tf_op_layer_MaxPool_22 (TensorF [(1, 20, 20, 64)]    0           tf_op_layer_Relu_78[0][0]        
__________________________________________________________________________________________________
conv2d_79 (Conv2D)              (1, 20, 20, 64)      36864       tf_op_layer_MaxPool_22[0][0]     
__________________________________________________________________________________________________
tf_op_layer_Add_106 (TensorFlow [(1, 20, 20, 64)]    0           conv2d_79[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_79 (TensorFlow [(1, 20, 20, 64)]    0           tf_op_layer_Add_106[0][0]        
__________________________________________________________________________________________________
tf_op_layer_MaxPool_23 (TensorF [(1, 10, 10, 64)]    0           tf_op_layer_Relu_79[0][0]        
__________________________________________________________________________________________________
conv2d_80 (Conv2D)              (1, 10, 10, 64)      36864       tf_op_layer_MaxPool_23[0][0]     
__________________________________________________________________________________________________
tf_op_layer_Add_107 (TensorFlow [(1, 10, 10, 64)]    0           conv2d_80[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_80 (TensorFlow [(1, 10, 10, 64)]    0           tf_op_layer_Add_107[0][0]        
__________________________________________________________________________________________________
conv2d_81 (Conv2D)              (1, 10, 10, 64)      36864       tf_op_layer_Relu_80[0][0]        
__________________________________________________________________________________________________
tf_op_layer_Add_108 (TensorFlow [(1, 10, 10, 64)]    0           conv2d_81[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_81 (TensorFlow [(1, 10, 10, 64)]    0           tf_op_layer_Add_108[0][0]        
__________________________________________________________________________________________________
tf_op_layer_concat_33 (TensorFl [(1, 10, 10, 128)]   0           tf_op_layer_Relu_81[0][0]        
                                                                 tf_op_layer_Relu_80[0][0]        
__________________________________________________________________________________________________
conv2d_82 (Conv2D)              (1, 10, 10, 64)      73728       tf_op_layer_concat_33[0][0]      
__________________________________________________________________________________________________
tf_op_layer_Add_109 (TensorFlow [(1, 10, 10, 64)]    0           conv2d_82[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_82 (TensorFlow [(1, 10, 10, 64)]    0           tf_op_layer_Add_109[0][0]        
__________________________________________________________________________________________________
lambda_19 (Lambda)              (1, 20, 20, 64)      0           tf_op_layer_Relu_82[0][0]        
__________________________________________________________________________________________________
tf_op_layer_concat_34 (TensorFl [(1, 20, 20, 128)]   0           lambda_19[0][0]                  
                                                                 tf_op_layer_Relu_79[0][0]        
__________________________________________________________________________________________________
conv2d_83 (Conv2D)              (1, 20, 20, 64)      73728       tf_op_layer_concat_34[0][0]      
__________________________________________________________________________________________________
tf_op_layer_Add_111 (TensorFlow [(1, 20, 20, 64)]    0           conv2d_83[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_83 (TensorFlow [(1, 20, 20, 64)]    0           tf_op_layer_Add_111[0][0]        
__________________________________________________________________________________________________
lambda_20 (Lambda)              (1, 40, 40, 64)      0           tf_op_layer_Relu_83[0][0]        
__________________________________________________________________________________________________
tf_op_layer_concat_35 (TensorFl [(1, 40, 40, 128)]   0           lambda_20[0][0]                  
                                                                 tf_op_layer_Relu_78[0][0]        
__________________________________________________________________________________________________
conv2d_84 (Conv2D)              (1, 40, 40, 64)      73728       tf_op_layer_concat_35[0][0]      
__________________________________________________________________________________________________
tf_op_layer_Add_113 (TensorFlow [(1, 40, 40, 64)]    0           conv2d_84[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_84 (TensorFlow [(1, 40, 40, 64)]    0           tf_op_layer_Add_113[0][0]        
__________________________________________________________________________________________________
lambda_21 (Lambda)              (1, 80, 80, 64)      0           tf_op_layer_Relu_84[0][0]        
__________________________________________________________________________________________________
tf_op_layer_concat_36 (TensorFl [(1, 80, 80, 128)]   0           lambda_21[0][0]                  
                                                                 tf_op_layer_Relu_77[0][0]        
__________________________________________________________________________________________________
conv2d_85 (Conv2D)              (1, 80, 80, 128)     147456      tf_op_layer_concat_36[0][0]      
__________________________________________________________________________________________________
tf_op_layer_Add_115 (TensorFlow [(1, 80, 80, 128)]   0           conv2d_85[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_85 (TensorFlow [(1, 80, 80, 128)]   0           tf_op_layer_Add_115[0][0]        
__________________________________________________________________________________________________
tf_op_layer_Add_116 (TensorFlow [(1, 80, 80, 128)]   0           tf_op_layer_Relu_85[0][0]        
                                                                 tf_op_layer_Relu_76[0][0]        
__________________________________________________________________________________________________
lambda_22 (Lambda)              (1, 160, 160, 128)   0           tf_op_layer_Add_116[0][0]        
__________________________________________________________________________________________________
tf_op_layer_concat_37 (TensorFl [(1, 160, 160, 256)] 0           lambda_22[0][0]                  
                                                                 tf_op_layer_Add_36[0][0]         
__________________________________________________________________________________________________
conv2d_86 (Conv2D)              (1, 160, 160, 64)    147456      tf_op_layer_concat_37[0][0]      
__________________________________________________________________________________________________
tf_op_layer_Add_118 (TensorFlow [(1, 160, 160, 64)]  0           conv2d_86[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_86 (TensorFlow [(1, 160, 160, 64)]  0           tf_op_layer_Add_118[0][0]        
__________________________________________________________________________________________________
conv2d_87 (Conv2D)              (1, 160, 160, 32)    18432       tf_op_layer_Relu_86[0][0]        
__________________________________________________________________________________________________
tf_op_layer_Add_119 (TensorFlow [(1, 160, 160, 32)]  0           conv2d_87[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_87 (TensorFlow [(1, 160, 160, 32)]  0           tf_op_layer_Add_119[0][0]        
__________________________________________________________________________________________________
tf_op_layer_MaxPool_24 (TensorF [(1, 80, 80, 32)]    0           tf_op_layer_Relu_87[0][0]        
__________________________________________________________________________________________________
conv2d_88 (Conv2D)              (1, 80, 80, 32)      9216        tf_op_layer_MaxPool_24[0][0]     
__________________________________________________________________________________________________
tf_op_layer_Add_120 (TensorFlow [(1, 80, 80, 32)]    0           conv2d_88[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_88 (TensorFlow [(1, 80, 80, 32)]    0           tf_op_layer_Add_120[0][0]        
__________________________________________________________________________________________________
tf_op_layer_MaxPool_25 (TensorF [(1, 40, 40, 32)]    0           tf_op_layer_Relu_88[0][0]        
__________________________________________________________________________________________________
conv2d_89 (Conv2D)              (1, 40, 40, 32)      9216        tf_op_layer_MaxPool_25[0][0]     
__________________________________________________________________________________________________
tf_op_layer_Add_121 (TensorFlow [(1, 40, 40, 32)]    0           conv2d_89[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_89 (TensorFlow [(1, 40, 40, 32)]    0           tf_op_layer_Add_121[0][0]        
__________________________________________________________________________________________________
tf_op_layer_MaxPool_26 (TensorF [(1, 20, 20, 32)]    0           tf_op_layer_Relu_89[0][0]        
__________________________________________________________________________________________________
conv2d_90 (Conv2D)              (1, 20, 20, 32)      9216        tf_op_layer_MaxPool_26[0][0]     
__________________________________________________________________________________________________
tf_op_layer_Add_122 (TensorFlow [(1, 20, 20, 32)]    0           conv2d_90[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_90 (TensorFlow [(1, 20, 20, 32)]    0           tf_op_layer_Add_122[0][0]        
__________________________________________________________________________________________________
tf_op_layer_MaxPool_27 (TensorF [(1, 10, 10, 32)]    0           tf_op_layer_Relu_90[0][0]        
__________________________________________________________________________________________________
conv2d_91 (Conv2D)              (1, 10, 10, 32)      9216        tf_op_layer_MaxPool_27[0][0]     
__________________________________________________________________________________________________
tf_op_layer_Add_123 (TensorFlow [(1, 10, 10, 32)]    0           conv2d_91[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_91 (TensorFlow [(1, 10, 10, 32)]    0           tf_op_layer_Add_123[0][0]        
__________________________________________________________________________________________________
conv2d_92 (Conv2D)              (1, 10, 10, 32)      9216        tf_op_layer_Relu_91[0][0]        
__________________________________________________________________________________________________
tf_op_layer_Add_124 (TensorFlow [(1, 10, 10, 32)]    0           conv2d_92[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_92 (TensorFlow [(1, 10, 10, 32)]    0           tf_op_layer_Add_124[0][0]        
__________________________________________________________________________________________________
tf_op_layer_concat_38 (TensorFl [(1, 10, 10, 64)]    0           tf_op_layer_Relu_92[0][0]        
                                                                 tf_op_layer_Relu_91[0][0]        
__________________________________________________________________________________________________
conv2d_93 (Conv2D)              (1, 10, 10, 32)      18432       tf_op_layer_concat_38[0][0]      
__________________________________________________________________________________________________
tf_op_layer_Add_125 (TensorFlow [(1, 10, 10, 32)]    0           conv2d_93[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_93 (TensorFlow [(1, 10, 10, 32)]    0           tf_op_layer_Add_125[0][0]        
__________________________________________________________________________________________________
lambda_23 (Lambda)              (1, 20, 20, 32)      0           tf_op_layer_Relu_93[0][0]        
__________________________________________________________________________________________________
tf_op_layer_concat_39 (TensorFl [(1, 20, 20, 64)]    0           lambda_23[0][0]                  
                                                                 tf_op_layer_Relu_90[0][0]        
__________________________________________________________________________________________________
conv2d_94 (Conv2D)              (1, 20, 20, 32)      18432       tf_op_layer_concat_39[0][0]      
__________________________________________________________________________________________________
tf_op_layer_Add_127 (TensorFlow [(1, 20, 20, 32)]    0           conv2d_94[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_94 (TensorFlow [(1, 20, 20, 32)]    0           tf_op_layer_Add_127[0][0]        
__________________________________________________________________________________________________
lambda_24 (Lambda)              (1, 40, 40, 32)      0           tf_op_layer_Relu_94[0][0]        
__________________________________________________________________________________________________
tf_op_layer_concat_40 (TensorFl [(1, 40, 40, 64)]    0           lambda_24[0][0]                  
                                                                 tf_op_layer_Relu_89[0][0]        
__________________________________________________________________________________________________
conv2d_95 (Conv2D)              (1, 40, 40, 32)      18432       tf_op_layer_concat_40[0][0]      
__________________________________________________________________________________________________
tf_op_layer_Add_129 (TensorFlow [(1, 40, 40, 32)]    0           conv2d_95[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_95 (TensorFlow [(1, 40, 40, 32)]    0           tf_op_layer_Add_129[0][0]        
__________________________________________________________________________________________________
lambda_25 (Lambda)              (1, 80, 80, 32)      0           tf_op_layer_Relu_95[0][0]        
__________________________________________________________________________________________________
tf_op_layer_concat_41 (TensorFl [(1, 80, 80, 64)]    0           lambda_25[0][0]                  
                                                                 tf_op_layer_Relu_88[0][0]        
__________________________________________________________________________________________________
conv2d_96 (Conv2D)              (1, 80, 80, 32)      18432       tf_op_layer_concat_41[0][0]      
__________________________________________________________________________________________________
tf_op_layer_Add_131 (TensorFlow [(1, 80, 80, 32)]    0           conv2d_96[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_96 (TensorFlow [(1, 80, 80, 32)]    0           tf_op_layer_Add_131[0][0]        
__________________________________________________________________________________________________
lambda_26 (Lambda)              (1, 160, 160, 32)    0           tf_op_layer_Relu_96[0][0]        
__________________________________________________________________________________________________
tf_op_layer_concat_42 (TensorFl [(1, 160, 160, 64)]  0           lambda_26[0][0]                  
                                                                 tf_op_layer_Relu_87[0][0]        
__________________________________________________________________________________________________
conv2d_97 (Conv2D)              (1, 160, 160, 64)    36864       tf_op_layer_concat_42[0][0]      
__________________________________________________________________________________________________
tf_op_layer_Add_133 (TensorFlow [(1, 160, 160, 64)]  0           conv2d_97[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_97 (TensorFlow [(1, 160, 160, 64)]  0           tf_op_layer_Add_133[0][0]        
__________________________________________________________________________________________________
tf_op_layer_Add_134 (TensorFlow [(1, 160, 160, 64)]  0           tf_op_layer_Relu_97[0][0]        
                                                                 tf_op_layer_Relu_86[0][0]        
__________________________________________________________________________________________________
lambda_27 (Lambda)              (1, 320, 320, 64)    0           tf_op_layer_Add_134[0][0]        
__________________________________________________________________________________________________
tf_op_layer_concat_43 (TensorFl [(1, 320, 320, 128)] 0           lambda_27[0][0]                  
                                                                 tf_op_layer_Add_19[0][0]         
__________________________________________________________________________________________________
conv2d_98 (Conv2D)              (1, 320, 320, 64)    73728       tf_op_layer_concat_43[0][0]      
__________________________________________________________________________________________________
tf_op_layer_Add_136 (TensorFlow [(1, 320, 320, 64)]  0           conv2d_98[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_98 (TensorFlow [(1, 320, 320, 64)]  0           tf_op_layer_Add_136[0][0]        
__________________________________________________________________________________________________
conv2d_99 (Conv2D)              (1, 320, 320, 16)    9216        tf_op_layer_Relu_98[0][0]        
__________________________________________________________________________________________________
tf_op_layer_Add_137 (TensorFlow [(1, 320, 320, 16)]  0           conv2d_99[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Relu_99 (TensorFlow [(1, 320, 320, 16)]  0           tf_op_layer_Add_137[0][0]        
__________________________________________________________________________________________________
tf_op_layer_MaxPool_28 (TensorF [(1, 160, 160, 16)]  0           tf_op_layer_Relu_99[0][0]        
__________________________________________________________________________________________________
conv2d_100 (Conv2D)             (1, 160, 160, 16)    2304        tf_op_layer_MaxPool_28[0][0]     
__________________________________________________________________________________________________
tf_op_layer_Add_138 (TensorFlow [(1, 160, 160, 16)]  0           conv2d_100[0][0]                 
__________________________________________________________________________________________________
tf_op_layer_Relu_100 (TensorFlo [(1, 160, 160, 16)]  0           tf_op_layer_Add_138[0][0]        
__________________________________________________________________________________________________
tf_op_layer_MaxPool_29 (TensorF [(1, 80, 80, 16)]    0           tf_op_layer_Relu_100[0][0]       
__________________________________________________________________________________________________
conv2d_101 (Conv2D)             (1, 80, 80, 16)      2304        tf_op_layer_MaxPool_29[0][0]     
__________________________________________________________________________________________________
tf_op_layer_Add_139 (TensorFlow [(1, 80, 80, 16)]    0           conv2d_101[0][0]                 
__________________________________________________________________________________________________
tf_op_layer_Relu_101 (TensorFlo [(1, 80, 80, 16)]    0           tf_op_layer_Add_139[0][0]        
__________________________________________________________________________________________________
tf_op_layer_MaxPool_30 (TensorF [(1, 40, 40, 16)]    0           tf_op_layer_Relu_101[0][0]       
__________________________________________________________________________________________________
conv2d_102 (Conv2D)             (1, 40, 40, 16)      2304        tf_op_layer_MaxPool_30[0][0]     
__________________________________________________________________________________________________
tf_op_layer_Add_140 (TensorFlow [(1, 40, 40, 16)]    0           conv2d_102[0][0]                 
__________________________________________________________________________________________________
tf_op_layer_Relu_102 (TensorFlo [(1, 40, 40, 16)]    0           tf_op_layer_Add_140[0][0]        
__________________________________________________________________________________________________
tf_op_layer_MaxPool_31 (TensorF [(1, 20, 20, 16)]    0           tf_op_layer_Relu_102[0][0]       
__________________________________________________________________________________________________
conv2d_103 (Conv2D)             (1, 20, 20, 16)      2304        tf_op_layer_MaxPool_31[0][0]     
__________________________________________________________________________________________________
tf_op_layer_Add_141 (TensorFlow [(1, 20, 20, 16)]    0           conv2d_103[0][0]                 
__________________________________________________________________________________________________
tf_op_layer_Relu_103 (TensorFlo [(1, 20, 20, 16)]    0           tf_op_layer_Add_141[0][0]        
__________________________________________________________________________________________________
tf_op_layer_MaxPool_32 (TensorF [(1, 10, 10, 16)]    0           tf_op_layer_Relu_103[0][0]       
__________________________________________________________________________________________________
conv2d_104 (Conv2D)             (1, 10, 10, 16)      2304        tf_op_layer_MaxPool_32[0][0]     
__________________________________________________________________________________________________
tf_op_layer_Add_142 (TensorFlow [(1, 10, 10, 16)]    0           conv2d_104[0][0]                 
__________________________________________________________________________________________________
tf_op_layer_Relu_104 (TensorFlo [(1, 10, 10, 16)]    0           tf_op_layer_Add_142[0][0]        
__________________________________________________________________________________________________
conv2d_105 (Conv2D)             (1, 10, 10, 16)      2304        tf_op_layer_Relu_104[0][0]       
__________________________________________________________________________________________________
tf_op_layer_Add_143 (TensorFlow [(1, 10, 10, 16)]    0           conv2d_105[0][0]                 
__________________________________________________________________________________________________
tf_op_layer_Relu_105 (TensorFlo [(1, 10, 10, 16)]    0           tf_op_layer_Add_143[0][0]        
__________________________________________________________________________________________________
tf_op_layer_concat_44 (TensorFl [(1, 10, 10, 32)]    0           tf_op_layer_Relu_105[0][0]       
                                                                 tf_op_layer_Relu_104[0][0]       
__________________________________________________________________________________________________
conv2d_106 (Conv2D)             (1, 10, 10, 16)      4608        tf_op_layer_concat_44[0][0]      
__________________________________________________________________________________________________
tf_op_layer_Add_144 (TensorFlow [(1, 10, 10, 16)]    0           conv2d_106[0][0]                 
__________________________________________________________________________________________________
tf_op_layer_Relu_106 (TensorFlo [(1, 10, 10, 16)]    0           tf_op_layer_Add_144[0][0]        
__________________________________________________________________________________________________
lambda_28 (Lambda)              (1, 20, 20, 16)      0           tf_op_layer_Relu_106[0][0]       
__________________________________________________________________________________________________
tf_op_layer_concat_45 (TensorFl [(1, 20, 20, 32)]    0           lambda_28[0][0]                  
                                                                 tf_op_layer_Relu_103[0][0]       
__________________________________________________________________________________________________
conv2d_107 (Conv2D)             (1, 20, 20, 16)      4608        tf_op_layer_concat_45[0][0]      
__________________________________________________________________________________________________
tf_op_layer_Add_146 (TensorFlow [(1, 20, 20, 16)]    0           conv2d_107[0][0]                 
__________________________________________________________________________________________________
tf_op_layer_Relu_107 (TensorFlo [(1, 20, 20, 16)]    0           tf_op_layer_Add_146[0][0]        
__________________________________________________________________________________________________
lambda_29 (Lambda)              (1, 40, 40, 16)      0           tf_op_layer_Relu_107[0][0]       
__________________________________________________________________________________________________
tf_op_layer_concat_46 (TensorFl [(1, 40, 40, 32)]    0           lambda_29[0][0]                  
                                                                 tf_op_layer_Relu_102[0][0]       
__________________________________________________________________________________________________
conv2d_108 (Conv2D)             (1, 40, 40, 16)      4608        tf_op_layer_concat_46[0][0]      
__________________________________________________________________________________________________
tf_op_layer_Add_148 (TensorFlow [(1, 40, 40, 16)]    0           conv2d_108[0][0]                 
__________________________________________________________________________________________________
tf_op_layer_Relu_108 (TensorFlo [(1, 40, 40, 16)]    0           tf_op_layer_Add_148[0][0]        
__________________________________________________________________________________________________
lambda_30 (Lambda)              (1, 80, 80, 16)      0           tf_op_layer_Relu_108[0][0]       
__________________________________________________________________________________________________
tf_op_layer_concat_47 (TensorFl [(1, 80, 80, 32)]    0           lambda_30[0][0]                  
                                                                 tf_op_layer_Relu_101[0][0]       
__________________________________________________________________________________________________
conv2d_109 (Conv2D)             (1, 80, 80, 16)      4608        tf_op_layer_concat_47[0][0]      
__________________________________________________________________________________________________
tf_op_layer_Add_150 (TensorFlow [(1, 80, 80, 16)]    0           conv2d_109[0][0]                 
__________________________________________________________________________________________________
tf_op_layer_Relu_109 (TensorFlo [(1, 80, 80, 16)]    0           tf_op_layer_Add_150[0][0]        
__________________________________________________________________________________________________
lambda_31 (Lambda)              (1, 160, 160, 16)    0           tf_op_layer_Relu_109[0][0]       
__________________________________________________________________________________________________
tf_op_layer_concat_48 (TensorFl [(1, 160, 160, 32)]  0           lambda_31[0][0]                  
                                                                 tf_op_layer_Relu_100[0][0]       
__________________________________________________________________________________________________
conv2d_110 (Conv2D)             (1, 160, 160, 16)    4608        tf_op_layer_concat_48[0][0]      
__________________________________________________________________________________________________
tf_op_layer_Add_152 (TensorFlow [(1, 160, 160, 16)]  0           conv2d_110[0][0]                 
__________________________________________________________________________________________________
tf_op_layer_Relu_110 (TensorFlo [(1, 160, 160, 16)]  0           tf_op_layer_Add_152[0][0]        
__________________________________________________________________________________________________
lambda_32 (Lambda)              (1, 320, 320, 16)    0           tf_op_layer_Relu_110[0][0]       
__________________________________________________________________________________________________
tf_op_layer_concat_49 (TensorFl [(1, 320, 320, 32)]  0           lambda_32[0][0]                  
                                                                 tf_op_layer_Relu_99[0][0]        
__________________________________________________________________________________________________
conv2d_111 (Conv2D)             (1, 320, 320, 64)    18432       tf_op_layer_concat_49[0][0]      
__________________________________________________________________________________________________
tf_op_layer_Add_154 (TensorFlow [(1, 320, 320, 64)]  0           conv2d_111[0][0]                 
__________________________________________________________________________________________________
tf_op_layer_Relu_111 (TensorFlo [(1, 320, 320, 64)]  0           tf_op_layer_Add_154[0][0]        
__________________________________________________________________________________________________
tf_op_layer_Add_155 (TensorFlow [(1, 320, 320, 64)]  0           tf_op_layer_Relu_111[0][0]       
                                                                 tf_op_layer_Relu_98[0][0]        
__________________________________________________________________________________________________
conv2d_113 (Conv2D)             (1, 160, 160, 1)     576         tf_op_layer_Add_134[0][0]        
__________________________________________________________________________________________________
conv2d_114 (Conv2D)             (1, 80, 80, 1)       1152        tf_op_layer_Add_116[0][0]        
__________________________________________________________________________________________________
conv2d_115 (Conv2D)             (1, 40, 40, 1)       2304        tf_op_layer_Add_101[0][0]        
__________________________________________________________________________________________________
conv2d_116 (Conv2D)             (1, 20, 20, 1)       4608        tf_op_layer_Add_89[0][0]         
__________________________________________________________________________________________________
conv2d_117 (Conv2D)             (1, 10, 10, 1)       4608        tf_op_layer_Add_79[0][0]         
__________________________________________________________________________________________________
conv2d_112 (Conv2D)             (1, 320, 320, 1)     576         tf_op_layer_Add_155[0][0]        
__________________________________________________________________________________________________
tf_op_layer_Add_157 (TensorFlow [(1, 160, 160, 1)]   0           conv2d_113[0][0]                 
__________________________________________________________________________________________________
tf_op_layer_Add_159 (TensorFlow [(1, 80, 80, 1)]     0           conv2d_114[0][0]                 
__________________________________________________________________________________________________
tf_op_layer_Add_161 (TensorFlow [(1, 40, 40, 1)]     0           conv2d_115[0][0]                 
__________________________________________________________________________________________________
tf_op_layer_Add_163 (TensorFlow [(1, 20, 20, 1)]     0           conv2d_116[0][0]                 
__________________________________________________________________________________________________
tf_op_layer_Add_165 (TensorFlow [(1, 10, 10, 1)]     0           conv2d_117[0][0]                 
__________________________________________________________________________________________________
tf_op_layer_Add_156 (TensorFlow [(1, 320, 320, 1)]   0           conv2d_112[0][0]                 
__________________________________________________________________________________________________
lambda_33 (Lambda)              (1, 320, 320, 1)     0           tf_op_layer_Add_157[0][0]        
__________________________________________________________________________________________________
lambda_34 (Lambda)              (1, 320, 320, 1)     0           tf_op_layer_Add_159[0][0]        
__________________________________________________________________________________________________
lambda_35 (Lambda)              (1, 320, 320, 1)     0           tf_op_layer_Add_161[0][0]        
__________________________________________________________________________________________________
lambda_36 (Lambda)              (1, 320, 320, 1)     0           tf_op_layer_Add_163[0][0]        
__________________________________________________________________________________________________
lambda_37 (Lambda)              (1, 320, 320, 1)     0           tf_op_layer_Add_165[0][0]        
__________________________________________________________________________________________________
tf_op_layer_concat_50 (TensorFl [(1, 320, 320, 6)]   0           tf_op_layer_Add_156[0][0]        
                                                                 lambda_33[0][0]                  
                                                                 lambda_34[0][0]                  
                                                                 lambda_35[0][0]                  
                                                                 lambda_36[0][0]                  
                                                                 lambda_37[0][0]                  
__________________________________________________________________________________________________
conv2d_118 (Conv2D)             (1, 320, 320, 1)     6           tf_op_layer_concat_50[0][0]      
__________________________________________________________________________________________________
tf_op_layer_Add_167 (TensorFlow [(1, 320, 320, 1)]   0           conv2d_118[0][0]                 
__________________________________________________________________________________________________
tf_op_layer_Sigmoid (TensorFlow [(1, 320, 320, 1)]   0           tf_op_layer_Add_156[0][0]        
__________________________________________________________________________________________________
tf_op_layer_Sigmoid_1 (TensorFl [(1, 320, 320, 1)]   0           lambda_33[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Sigmoid_2 (TensorFl [(1, 320, 320, 1)]   0           lambda_34[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Sigmoid_3 (TensorFl [(1, 320, 320, 1)]   0           lambda_35[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Sigmoid_4 (TensorFl [(1, 320, 320, 1)]   0           lambda_36[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Sigmoid_5 (TensorFl [(1, 320, 320, 1)]   0           lambda_37[0][0]                  
__________________________________________________________________________________________________
tf_op_layer_Sigmoid_6 (TensorFl [(1, 320, 320, 1)]   0           tf_op_layer_Add_167[0][0]        
__________________________________________________________________________________________________
tf_op_layer_1960 (TensorFlowOpL [(1, 320, 320, 1)]   0           tf_op_layer_Sigmoid[0][0]        
__________________________________________________________________________________________________
tf_op_layer_1961 (TensorFlowOpL [(1, 320, 320, 1)]   0           tf_op_layer_Sigmoid_1[0][0]      
__________________________________________________________________________________________________
tf_op_layer_1962 (TensorFlowOpL [(1, 320, 320, 1)]   0           tf_op_layer_Sigmoid_2[0][0]      
__________________________________________________________________________________________________
tf_op_layer_1963 (TensorFlowOpL [(1, 320, 320, 1)]   0           tf_op_layer_Sigmoid_3[0][0]      
__________________________________________________________________________________________________
tf_op_layer_1964 (TensorFlowOpL [(1, 320, 320, 1)]   0           tf_op_layer_Sigmoid_4[0][0]      
__________________________________________________________________________________________________
tf_op_layer_1965 (TensorFlowOpL [(1, 320, 320, 1)]   0           tf_op_layer_Sigmoid_5[0][0]      
__________________________________________________________________________________________________
tf_op_layer_a (TensorFlowOpLaye [(1, 320, 320, 1)]   0           tf_op_layer_Sigmoid_6[0][0]      
==================================================================================================
Total params: 43,966,662
Trainable params: 43,966,662
Non-trainable params: 0
__________________________________________________________________________________________________
TensorFlow/Keras model building process complete!
saved_model output started ==========================================================
WARNING:tensorflow:From /home/user/.local/lib/python3.6/site-packages/tensorflow/python/training/tracking/tracking.py:111: Model.state_updates (from tensorflow.python.keras.engine.training) is deprecated and will be removed in a future version.
Instructions for updating:
This property should not be used in TensorFlow 2.0, as updates are applied automatically.
WARNING:tensorflow:From /home/user/.local/lib/python3.6/site-packages/tensorflow/python/training/tracking/tracking.py:111: Layer.updates (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version.
Instructions for updating:
This property should not be used in TensorFlow 2.0, as updates are applied automatically.
ERROR: can't pickle module objects
Traceback (most recent call last):
  File "/home/user/.local/bin/openvino2tensorflow", line 1684, in convert
    tf.saved_model.save(model, model_output_path)
  File "/home/user/.local/lib/python3.6/site-packages/tensorflow/python/saved_model/save.py", line 976, in save
    obj, export_dir, signatures, options, meta_graph_def)
  File "/home/user/.local/lib/python3.6/site-packages/tensorflow/python/saved_model/save.py", line 1076, in _build_meta_graph
    asset_info.asset_index)
  File "/home/user/.local/lib/python3.6/site-packages/tensorflow/python/saved_model/save.py", line 721, in _serialize_object_graph
    saveable_view.function_name_map)
  File "/home/user/.local/lib/python3.6/site-packages/tensorflow/python/saved_model/save.py", line 761, in _write_object_proto
    metadata=obj._tracking_metadata)
  File "/home/user/.local/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py", line 3011, in _tracking_metadata
    return self._trackable_saved_model_saver.tracking_metadata
  File "/home/user/.local/lib/python3.6/site-packages/tensorflow/python/keras/saving/saved_model/base_serialization.py", line 54, in tracking_metadata
    return json_utils.Encoder().encode(self.python_properties)
  File "/home/user/.local/lib/python3.6/site-packages/tensorflow/python/keras/saving/saved_model/layer_serialization.py", line 41, in python_properties
    return self._python_properties_internal()
  File "/home/user/.local/lib/python3.6/site-packages/tensorflow/python/keras/saving/saved_model/model_serialization.py", line 35, in _python_properties_internal
    metadata = super(ModelSavedModelSaver, self)._python_properties_internal()
  File "/home/user/.local/lib/python3.6/site-packages/tensorflow/python/keras/saving/saved_model/layer_serialization.py", line 59, in _python_properties_internal
    metadata.update(get_config(self.obj))
  File "/home/user/.local/lib/python3.6/site-packages/tensorflow/python/keras/saving/saved_model/layer_serialization.py", line 118, in get_config
    config = generic_utils.serialize_keras_object(obj)['config']
  File "/home/user/.local/lib/python3.6/site-packages/tensorflow/python/keras/utils/generic_utils.py", line 245, in serialize_keras_object
    config = instance.get_config()
  File "/home/user/.local/lib/python3.6/site-packages/tensorflow/python/keras/engine/functional.py", line 598, in get_config
    return copy.deepcopy(get_network_config(self))
  File "/usr/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "/usr/lib/python3.6/copy.py", line 240, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/usr/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "/usr/lib/python3.6/copy.py", line 215, in _deepcopy_list
    append(deepcopy(a, memo))
  File "/usr/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "/usr/lib/python3.6/copy.py", line 240, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/usr/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "/usr/lib/python3.6/copy.py", line 240, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/usr/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "/usr/lib/python3.6/copy.py", line 220, in _deepcopy_tuple
    y = [deepcopy(a, memo) for a in x]
  File "/usr/lib/python3.6/copy.py", line 220, in <listcomp>
    y = [deepcopy(a, memo) for a in x]
  File "/usr/lib/python3.6/copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "/usr/lib/python3.6/copy.py", line 220, in _deepcopy_tuple
    y = [deepcopy(a, memo) for a in x]
  File "/usr/lib/python3.6/copy.py", line 220, in <listcomp>
    y = [deepcopy(a, memo) for a in x]
  File "/usr/lib/python3.6/copy.py", line 169, in deepcopy
    rv = reductor(4)
TypeError: can't pickle module objects
Switch to the output of an optimized protocol buffer file (.pb).
.pb output started ==================================================================
.pb output complete! - saved_model_320x320/model_float32.pb
WARNING:tensorflow:From /home/user/.local/bin/openvino2tensorflow:1767: simple_save (from tensorflow.python.saved_model.simple_save) is deprecated and will be removed in a future version.
Instructions for updating:
This function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.simple_save.
WARNING:tensorflow:From /home/user/.local/lib/python3.6/site-packages/tensorflow/python/saved_model/signature_def_utils_impl.py:201: build_tensor_info (from tensorflow.python.saved_model.utils_impl) is deprecated and will be removed in a future version.
Instructions for updating:
This function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.utils.build_tensor_info or tf.compat.v1.saved_model.build_tensor_info.
Optimized graph converted to SavedModel! - saved_model_320x320
All the conversion process is finished! =============================================

Can you help me out with this problem? Thanks.
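As a side note (not part of the original report), a minimal sketch of the fallback the log above describes: wrapping the optimized .pb in a v1 SavedModel via simple_save, which is what the deprecation warnings refer to. The tensor names are assumptions, not taken from the log.

import tensorflow as tf

# Load the frozen graph emitted by the fallback path (path from the log above).
with tf.io.gfile.GFile('saved_model_320x320/model_float32.pb', 'rb') as f:
    graph_def = tf.compat.v1.GraphDef()
    graph_def.ParseFromString(f.read())

# Import it into a fresh v1 graph, then export with simple_save.
graph = tf.Graph()
with graph.as_default():
    tf.compat.v1.import_graph_def(graph_def, name='')

with tf.compat.v1.Session(graph=graph) as sess:
    inp = graph.get_tensor_by_name('inputs:0')    # hypothetical tensor name
    out = graph.get_tensor_by_name('Identity:0')  # hypothetical tensor name
    tf.compat.v1.saved_model.simple_save(
        sess, 'saved_model_320x320_from_pb',
        inputs={'inputs': inp}, outputs={'outputs': out})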

Cannot convert some models

Hi,

I wanted to convert https://download.01.org/openvinotoolkit/2018_R5/open_model_zoo/face-reidentification-retail-0095/FP32/
but it failed with the message below:
openvino2tensorflow --model_path=openvino/448x448/FP32/Resnet34_3inputs_448x448_20200609.xml --output_saved_model True --output_pb True --output_weight_quant_tflite True --output_float16_quant_tflite True --output_no_quant_float32_tflite True

openvino2tensorflow --model_path=FP32/face-reidentification-retail-0095.xml --output_saved_model True --output_pb True --output_weight_quant_tflite True --output_float16_quant_tflite True --output_no_quant_float32_tflite True
TensorFlow/Keras model building process starts ======================================
/home/vijay/.local/lib/python3.6/site-packages/tensorflow/python/autograph/utils/testing.py:21: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
The Input layer is not yet implemented.

Please let me know if I need to do any workaround.

Limitation from my side:
I cannot use the said model with OpenVINO 2021, as it fails with a message saying "intel IR files is not compatible".

Thanks,
Vijay
Skype ID: vijayky88

yolov5s - TPU compiler

  line 520, in _quantize
    return _mlir_quantize(
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/lite/python/convert_phase.py", line 218, in wrapper
    raise error from None  # Re-throws the exception.
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/lite/python/convert_phase.py", line 208, in wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/lite/python/convert.py", line 236, in mlir_quantize
    return wrap_toco.wrapped_experimental_mlir_quantize(
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/lite/python/wrap_toco.py", line 47, in wrapped_experimental_mlir_quantize
    return _pywrap_toco_api.ExperimentalMlirQuantizeModel(
RuntimeError: Failed to quantize:

I can't bypass the error.
I exported the ONNX with the "--simplify" option; if I don't add the simplify option, it gives a 5D layer error at layer 407.
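For context (not from the original report): the failing call is TensorFlow Lite's MLIR full-integer quantizer. A minimal sketch of the standard post-training full-integer quantization path that leads into that call, with placeholder paths and input shape:

import numpy as np
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model('saved_model')  # placeholder path
converter.optimizations = [tf.lite.Optimize.DEFAULT]

def representative_dataset():
    # A handful of NHWC samples matching the model input (shape is a placeholder).
    for _ in range(10):
        yield [np.random.rand(1, 320, 320, 3).astype(np.float32)]

converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

with open('model_full_integer_quant.tflite', 'wb') as f:
    f.write(converter.convert())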

The FakeQuantize layer is not yet implemented.

Hi,
I am trying to convert an OpenVINO model to TensorFlow, but when I do, it throws an error.

[setupvars.sh] OpenVINO environment initialized
TensorFlow/Keras model building process starts ======================================
The FakeQuantize layer is not yet implemented.

Conversion of that model is necessary. Please let me know if you can implement that layer, or what other workaround there is for this conversion. Thanks.

PriorBoxClustered not implemented

Hello,

When I tried to convert person-detection-0202 to a .pb model, I got the following error:

The PriorBoxClustered layer is not yet implemented.

Kindly have a look at it.

list index out of range while converting model

1. OS: Windows 10

2. OS Architecture: x86_64

3. OpenVINO version: 2021.2.185

4. Version of TensorFlow: 2.3.1, but also tried 2.4.1

5. Python version: 3.7

6. OpenVINO model to convert: https://download.01.org/opencv/2021/openvinotoolkit/2021.2/open_model_zoo/models_bin/3/gaze-estimation-adas-0002/FP32/

I get this error while trying conversion:

(py_3_7) D:\work\openvino2tensorflow\openvino2tensorflow-main\openvino2tensorflow>python openvino2tensorflow.py --model_path D:\Librerie\Openvino\openvino_2021.2.185\deployment_tools\open_model_zoo\models\download\intel\gaze-estimation-adas-0002\FP16\gaze-estimation-adas-0002.xml --model_output_path . --output_pb 1
TensorFlow/Keras model building process starts ======================================
D:\work\openvino2tensorflow\py_3_7\lib\site-packages\tensorflow\python\autograph\utils\testing.py:21: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
  import imp
Traceback (most recent call last):
  File "openvino2tensorflow.py", line 2204, in <module>
    main()
  File "openvino2tensorflow.py", line 2200, in main
    yolact, debug, debug_layer_number)
  File "openvino2tensorflow.py", line 305, in convert
    tf_layers_dict[layer_id] = Input(shape=(shape[2], shape[3], shape[1]), batch_size=shape[0], name=layer_name)
IndexError: list index out of range
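The failing line assumes a 4D NCHW input shape, so a model with lower-rank inputs (gaze-estimation-adas-0002 has a vector input) overruns the list. Purely to illustrate the failure mode, a hypothetical rank-aware variant of that line (names here are assumptions, not the tool's actual code):

from tensorflow.keras.layers import Input

def make_input(shape, layer_name):
    # Original code: Input(shape=(shape[2], shape[3], shape[1]), ...)
    # which assumes len(shape) == 4.
    if len(shape) == 4:
        # NCHW -> NHWC
        return Input(shape=(shape[2], shape[3], shape[1]),
                     batch_size=shape[0], name=layer_name)
    # Lower-rank inputs: keep the remaining dimensions as-is.
    return Input(shape=tuple(shape[1:]), batch_size=shape[0], name=layer_name)

x = make_input([1, 3], 'head_pose_angles')  # a rank-2 input no longer raises IndexError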

Conversion to .h5 issues

1. Windows 7 x64

3. OpenVINO 2021.2.185

4. TensorFlow v2.4.1

13. Issue Details

When I tried to convert the following models to .h5 model, I got the following errors:

face-detection-adas-0001.xml

https://download.01.org/opencv/2021/openvinotoolkit/2021.1/open_model_zoo/models_bin/2/face-detection-adas-0001/FP32/

Error: The PriorBox layer is not yet implemented


facial-landmarks-35-adas-0002.xml

https://download.01.org/opencv/2021/openvinotoolkit/2021.1/open_model_zoo/models_bin/2/facial-landmarks-35-adas-0002/FP32/

Error: MemoryError


Shapes do not match

Hi. Thank you for the great work!

I am using the Docker image.

I am trying to convert the RetinaFace MobileNet V1 model to tflite.

retinaface-mn.zip

openvino2tensorflow fails with the following error:

Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/gen_math_ops.py", line 332, in add
    x, y, name=name, ctx=_ctx)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/gen_math_ops.py", line 366, in add_eager_fallback
    _attr_T, _inputs_T = _execute.args_to_matching_eager([x, y], ctx, [_dtypes.bfloat16, _dtypes.half, _dtypes.float32, _dtypes.float64, _dtypes.uint8, _dtypes.int8, _dtypes.int16, _dtypes.int32, _dtypes.int64, _dtypes.complex64, _dtypes.complex128, _dtypes.string, ])
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/execute.py", line 265, in args_to_matching_eager
    tensor = ops.convert_to_tensor(t, ctx=ctx)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/profiler/trace.py", line 163, in wrapped
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py", line 1540, in convert_to_tensor
    ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/constant_op.py", line 339, in _constant_tensor_conversion_function
    return constant(v, dtype=dtype, name=name)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/constant_op.py", line 265, in constant
    allow_broadcast=True)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/constant_op.py", line 276, in _constant_impl
    return _constant_eager_impl(ctx, value, dtype, shape, verify_shape)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/constant_op.py", line 301, in _constant_eager_impl
    t = convert_to_eager_tensor(value, ctx, dtype)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/constant_op.py", line 98, in convert_to_eager_tensor
    return ops.EagerTensor(value, ctx.device_name, dtype)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/keras_tensor.py", line 274, in __array__
    'Cannot convert a symbolic Keras input/output to a numpy array. '
TypeError: Cannot convert a symbolic Keras input/output to a numpy array. This error may indicate that you're trying to pass a symbolic value to a NumPy call, which is not supported. Or, you may be trying to pass Keras symbolic inputs/outputs to a TF API that does not register dispatching, preventing Keras from automatically converting the API call to a lambda layer in the Functional Model.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py", line 1853, in _create_c_op
    c_op = pywrap_tf_session.TF_FinishOperation(op_desc)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Dimensions must be equal, but are 64 and 30 for '{{node tf.math.add_30/Add}} = Add[T=DT_FLOAT](Placeholder, Placeholder_1)' with input shapes: [1,30,40,64], [1,30,40,30].

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/bin/openvino2tensorflow", line 2364, in <module>
    main()
  File "/usr/local/bin/openvino2tensorflow", line 2360, in main
    yolact, weight_replacement_config, debug, debug_layer_number)
  File "/usr/local/bin/openvino2tensorflow", line 488, in convert
    tf_layers_dict[layer_id] = tf.math.add(tmp_layers[0], tmp_layers[1])
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/gen_math_ops.py", line 337, in add
    add, (), dict(x=x, y=y, name=name)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/util/dispatch.py", line 122, in dispatch
    result = dispatcher.handle(op, args, kwargs)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/layers/core.py", line 1450, in handle
    return TFOpLambda(op)(*args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py", line 952, in __call__
    input_list)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py", line 1091, in _functional_construction_call
    inputs, input_masks, args, kwargs)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py", line 822, in _keras_tensor_symbolic_call
    return self._infer_output_signature(inputs, args, kwargs, input_masks)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py", line 863, in _infer_output_signature
    outputs = call_fn(inputs, *args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/layers/core.py", line 1327, in _call_wrapper
    return self._call_wrapper(*args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/layers/core.py", line 1359, in _call_wrapper
    result = self.function(*args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/gen_math_ops.py", line 345, in add
    "Add", x=x, y=y, name=name)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/op_def_library.py", line 750, in _apply_op_helper
    attrs=attr_protos, op_def=op_def)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/func_graph.py", line 592, in _create_op_internal
    compute_device)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py", line 3536, in _create_op_internal
    op_def=op_def)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py", line 2016, in __init__
    control_input_ops, op_def)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py", line 1856, in _create_c_op
    raise ValueError(str(e))
ValueError: Dimensions must be equal, but are 64 and 30 for '{{node tf.math.add_30/Add}} = Add[T=DT_FLOAT](Placeholder, Placeholder_1)' with input shapes: [1,30,40,64], [1,30,40,30].

Experimentally, I found that the cause is in lines 949-953:
https://github.com/PINTO0309/openvino2tensorflow/blob/main/openvino2tensorflow/openvino2tensorflow.py#L949
They permute the begin and end indices but do not permute the original shape. I commented them out and the script started working.

The strides are probably affected too, but in my case they are all 1.
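For context (not the tool's actual code), a sketch of what an index permutation like the one in those lines does, assuming begin/end/strides arrive in NCHW order while the tensor is NHWC. If the indices are already in NHWC order, remapping them this way produces exactly the kind of shape mismatch shown above.

import tensorflow as tf

def strided_slice_nhwc(x, begin, end, strides):
    # Remap NCHW-ordered slice indices into NHWC order before slicing.
    perm = [0, 2, 3, 1]  # NCHW -> NHWC
    begin = [begin[i] for i in perm]
    end = [end[i] for i in perm]
    strides = [strides[i] for i in perm]
    return tf.strided_slice(x, begin, end, strides)

x = tf.zeros([1, 30, 40, 64])  # NHWC tensor
# begin/end given in NCHW order (N, C, H, W):
y = strided_slice_nhwc(x, [0, 0, 0, 0], [1, 64, 30, 40], [1, 1, 1, 1])
print(y.shape)  # (1, 30, 40, 64)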

Trying to map all ops on TPU

First of all, I really appreciate your work. Of the many tools available, yours is the only one that supports this many ops and converts correctly. (Trust me, I've tried.)

1. OS you are using e.g. Ubuntu 20.04, Windows 10, etc

Ubuntu 18.04

2. OS Architecture e.g. x86_64, armv7l, aarch64, etc

x86_64

3. Version of OpenVINO e.g. 2021.2.185, etc

2021.2.185

4. Version of TensorFlow e.g. v2.4.1, tf-nightly==2.5.0.dev20210128, etc

tf_nightly==2.5.0.

5. Download URL for ONNX model, OpenVINO model, pt checkpoint, TensorFlow conversions

https://drive.google.com/drive/folders/14gs1LUbP9G1gvO-4RrY12pC21053GnkE?usp=sharing

I wanted to convert my own ONNX model (converted from PyTorch) to TFLite to be able to run it on a Coral TPU. I ran:

openvino2tensorflow \
  --model_path openvino/192x640/FP32/resnet18.xml \
  --model_output_path resnet18 \
  --output_saved_model True \
  --output_h5=True \
  --output_edgetpu=True

Issue Details

Full log:

TensorFlow/Keras model building process starts ======================================
WARNING: The weights after Upsampling (tf.compat.v1.image.resize_nearest_neighbor) are shifted to the upper left. If you do not need to generate EdgeTPU models, set --output_edgetpu False and run again. OP: Placeholder:0 (repeated 5 times)
TensorFlow/Keras model building process complete!
saved_model output started ==========================================================
WARNING: The weights after Upsampling (tf.compat.v1.image.resize_nearest_neighbor) are shifted to the upper left. If you do not need to generate EdgeTPU models, set --output_edgetpu False and run again. OP: model/tf.nn.relu_16/Relu:0
WARNING: The weights after Upsampling (tf.compat.v1.image.resize_nearest_neighbor) are shifted to the upper left. If you do not need to generate EdgeTPU models, set --output_edgetpu False and run again. OP: model/tf.math.add_32/Add:0
WARNING: The weights after Upsampling (tf.compat.v1.image.resize_nearest_neighbor) are shifted to the upper left. If you do not need to generate EdgeTPU models, set --output_edgetpu False and run again. OP: model/tf.math.add_37/Add:0
WARNING: The weights after Upsampling (tf.compat.v1.image.resize_nearest_neighbor) are shifted to the upper left. If you do not need to generate EdgeTPU models, set --output_edgetpu False and run again. OP: model/tf.math.add_42/Add:0
WARNING: The weights after Upsampling (tf.compat.v1.image.resize_nearest_neighbor) are shifted to the upper left. If you do not need to generate EdgeTPU models, set --output_edgetpu False and run again. OP: model/tf.math.add_47/Add:0
WARNING: The weights after Upsampling (tf.compat.v1.image.resize_nearest_neighbor) are shifted to the upper left. If you do not need to generate EdgeTPU models, set --output_edgetpu False and run again. OP: inputs:0 (repeated 10 times)
WARNING: The weights after Upsampling (tf.compat.v1.image.resize_nearest_neighbor) are shifted to the upper left. If you do not need to generate EdgeTPU models, set --output_edgetpu False and run again. OP: tf.nn.relu_16/Relu:0
WARNING: The weights after Upsampling (tf.compat.v1.image.resize_nearest_neighbor) are shifted to the upper left. If you do not need to generate EdgeTPU models, set --output_edgetpu False and run again. OP: tf.math.add_32/Add:0
WARNING: The weights after Upsampling (tf.compat.v1.image.resize_nearest_neighbor) are shifted to the upper left. If you do not need to generate EdgeTPU models, set --output_edgetpu False and run again. OP: tf.math.add_37/Add:0
WARNING: The weights after Upsampling (tf.compat.v1.image.resize_nearest_neighbor) are shifted to the upper left. If you do not need to generate EdgeTPU models, set --output_edgetpu False and run again. OP: tf.math.add_42/Add:0
WARNING: The weights after Upsampling (tf.compat.v1.image.resize_nearest_neighbor) are shifted to the upper left. If you do not need to generate EdgeTPU models, set --output_edgetpu False and run again. OP: tf.math.add_47/Add:0
(the five warnings above are repeated once more, followed by ten more repetitions of the same warning for OP: inputs:0)
Switch to the output of an optimized protocol buffer file (.pb).
.pb output started ==================================================================
WARNING: The weights after Upsampling (tf.compat.v1.image.resize_nearest_neighbor) are shifted to the upper left. If you do not need to generate EdgeTPU models, set --output_edgetpu False and run again. OP: model/tf.nn.relu_16/Relu:0
WARNING: The weights after Upsampling (tf.compat.v1.image.resize_nearest_neighbor) are shifted to the upper left. If you do not need to generate EdgeTPU models, set --output_edgetpu False and run again. OP: model/tf.math.add_32/Add:0
WARNING: The weights after Upsampling (tf.compat.v1.image.resize_nearest_neighbor) are shifted to the upper left. If you do not need to generate EdgeTPU models, set --output_edgetpu False and run again. OP: model/tf.math.add_37/Add:0
WARNING: The weights after Upsampling (tf.compat.v1.image.resize_nearest_neighbor) are shifted to the upper left. If you do not need to generate EdgeTPU models, set --output_edgetpu False and run again. OP: model/tf.math.add_42/Add:0
WARNING: The weights after Upsampling (tf.compat.v1.image.resize_nearest_neighbor) are shifted to the upper left. If you do not need to generate EdgeTPU models, set --output_edgetpu False and run again. OP: model/tf.math.add_47/Add:0
.pb output complete! - resnet18/model_float32.pb
Optimized graph converted to SavedModel! - resnet18
Full Integer Quantization started ===================================================
Full Integer Quantization complete! - resnet18/model_full_integer_quant.tflite
EdgeTPU convertion started ==========================================================
Edge TPU Compiler version 15.0.340273435

Model compiled successfully in 3388 ms.

Input model: resnet18/model_full_integer_quant.tflite
Input size: 13.35MiB
Output model: resnet18/model_full_integer_quant_edgetpu.tflite
Output size: 13.79MiB
On-chip memory used for caching model parameters: 6.15MiB
On-chip memory remaining for caching model parameters: 5.75KiB
Off-chip memory used for streaming uncached model parameters: 7.03MiB
Number of Edge TPU subgraphs: 1
Total number of operations: 84
Operation log: resnet18/model_full_integer_quant_edgetpu.log

Model successfully compiled but not all operations are supported by the Edge TPU. A percentage of the model will instead run on the CPU, which is slower. If possible, consider updating your model to use only operations supported by the Edge TPU. For details, visit g.co/coral/model-reqs.
Number of operations that will run on Edge TPU: 78
Number of operations that will run on CPU: 6

Operator                       Count      Status

CONV_2D                        1          More than one subgraph is not supported
CONV_2D                        32         Mapped to Edge TPU
RESIZE_NEAREST_NEIGHBOR        4          Mapped to Edge TPU
RESIZE_NEAREST_NEIGHBOR        1          Operation is otherwise supported, but not mapped due to some unspecified limitation
LOGISTIC                       1          More than one subgraph is not supported
PAD                            25         Mapped to Edge TPU
PAD                            1          More than one subgraph is not supported
ADD                            1          More than one subgraph is not supported
ADD                            16         Mapped to Edge TPU
MAX_POOL_2D                    1          Mapped to Edge TPU
MUL                            1          More than one subgraph is not supported

EdgeTPU convert complete! - resnet18/model_full_integer_quant_edgetpu.tflite
All the conversion process is finished! =============================================

Model compiled but not all operations are supported on Edge TPU

So, following google-coral/edgetpu#317 (comment), I tried compiling again and I got

Model successfully compiled but not all operations are supported by the Edge TPU. A percentage of the model will instead run on the CPU, which is slower. If possible, consider updating your model to use only operations supported by the Edge TPU. For details, visit g.co/coral/model-reqs.
Number of operations that will run on Edge TPU: 83
Number of operations that will run on CPU: 1

Operator                       Count      Status

MUL                            1          Mapped to Edge TPU
CONV_2D                        33         Mapped to Edge TPU
ADD                            17         Mapped to Edge TPU
MAX_POOL_2D                    1          Mapped to Edge TPU
PAD                            26         Mapped to Edge TPU
LOGISTIC                       1          Mapped to Edge TPU
RESIZE_NEAREST_NEIGHBOR        1          Operation is otherwise supported, but not mapped due to some unspecified limitation
RESIZE_NEAREST_NEIGHBOR        4          Mapped to Edge TPU

Only one RESIZE_NEAREST_NEIGHBOR is mapped to the CPU, which drastically reduces the runtime. It would be great if this op could also be mapped to the TPU. What might be the reason this one op is not mapped? What should be changed in the model to make it run entirely on the TPU?
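Not from the original thread, but one way to quantify the cost of that single CPU-fallback op is to time the compiled model through the standard Coral runtime. The model path comes from the log above; the delegate library name follows the usual Linux install and may differ on other systems.

import time
import numpy as np
import tflite_runtime.interpreter as tflite

interpreter = tflite.Interpreter(
    model_path='resnet18/model_full_integer_quant_edgetpu.tflite',
    experimental_delegates=[tflite.load_delegate('libedgetpu.so.1')])
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
interpreter.set_tensor(inp['index'], np.zeros(inp['shape'], dtype=inp['dtype']))

interpreter.invoke()  # warm-up
start = time.perf_counter()
interpreter.invoke()
print(f'inference: {(time.perf_counter() - start) * 1000:.1f} ms')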

Segmentation fault (core dumped)

@PINTO0309 Thank you for the response. I am trying to convert the u2net model to TensorFlow.js. I get an error when I run the following command:

openvino2tensorflow --model_path openvino/320x320/FP32/u2net_320x320_opt.xml --model_output_path saved_model_320x320 --output_saved_model True

Segmentation fault (core dumped)

I am using Python 3.8 and OpenVINO 2021 for the conversion of the model. Thanks.

openvino2tensorflow command opens a text file

I have installed openvino2tensorflow using pip on Windows 10. I do not want to use an NVIDIA GPU (it is currently under maintenance), so I haven't installed TensorRT.

When I call the command openvino2tensorflow, instead of running the command or showing help, it opens a file named openvino2tensorflow stored in "C:\Users\<user>\AppData\Local\Programs\Python\Python37\Scripts". Can you please help me with this?

openvino2tensorflow.py does not use provided values in weight configuration JSON

1. OS you are using e.g. Ubuntu 20.04, Windows 10, etc

Windows 10

2. OS Architecture e.g. x86_64, armv7l, aarch64, etc

x64

3. Version of OpenVINO e.g. 2021.2.185, etc

openvino_2021.2.185

4. Version of TensorFlow e.g. v2.4.1, tf-nightly==2.5.0.dev20210128, etc

tensorflow==2.4.1

13. Issue Details

I am trying to convert an OpenVINO model for image segmentation into tflite. However, the Reshape and Transpose layers are not working correctly. In particular, the model that I am using has a Softmax layer with layer_id = 1074 and a Const layer with layer_id = 1075, both of which send output to layer 1076 as follows:

<edge from-layer="1074" from-port="1" to-layer="1076" to-port="0"/>
<edge from-layer="1075" from-port="1" to-layer="1076" to-port="1"/>
<edge from-layer="1076" from-port="2" to-layer="1078" to-port="0"/>
<edge from-layer="1077" from-port="1" to-layer="1078" to-port="1"/>

The input and output of Softmax Layer 1074 are shaped as (263169, 151).

The code for these layers is as follows:

<layer id="1074" name="Decoder/softmax/Softmax" type="SoftMax" version="opset1">
	<data axis="1"/>
	<input>
		<port id="0">
			<dim>263169</dim>
			<dim>151</dim>
		</port>
	</input>
	<output>
		<port id="1" precision="FP32">
			<dim>263169</dim>
			<dim>151</dim>
		</port>
	</output>
</layer>
<layer id="1075" name="Decoder/softmax/Reshape_1/Cast_121669_const627_const" type="Const" version="opset1">
	<data element_type="i64" offset="4370080" shape="4" size="32"/>
	<output>
		<port id="1" precision="I64">
			<dim>4</dim>
		</port>
	</output>
</layer>

The layer which is receiving output is Reshape layer with layer_id = 1076, which is as follows:

<layer id="1076" name="Decoder/softmax/Reshape_1" type="Reshape" version="opset1">
	<data special_zero="False"/>
	<input>
		<port id="0">
			<dim>263169</dim>
			<dim>151</dim>
		</port>
		<port id="1">
			<dim>4</dim>
		</port>
	</input>
	<output>
		<port id="2" precision="FP32">
			<dim>1</dim>
			<dim>513</dim>
			<dim>513</dim>
			<dim>151</dim>
		</port>
	</output>
</layer>

A Const layer comes after it:

<layer id="1077" name="Decoder/softmax/Reshape_1/Transpose/Cast_15806_const" type="Const" version="opset1">
	<data element_type="i64" offset="4370112" shape="4" size="32"/>
	<output>
		<port id="1" precision="I64">
			<dim>4</dim>
		</port>
	</output>
</layer>

These two layers are sending output to layer 1078, which is a Transpose layer. Its code is as follows:

<layer id="1078" name="Decoder/softmax/Reshape_1/Transpose" type="Transpose" version="opset1">
	<input>
		<port id="0">
			<dim>1</dim>
			<dim>513</dim>
			<dim>513</dim>
			<dim>151</dim>
		</port>
		<port id="1">
			<dim>4</dim>
		</port>
	</input>
	<output>
		<port id="2" precision="FP32">
			<dim>1</dim>
			<dim>151</dim>
			<dim>513</dim>
			<dim>513</dim>
		</port>
	</output>
</layer>

However, the Transpose layer is not receiving the correct input shape. The Reshape layer (1076) should send output of shape (1, 513, 513, 151) to the Transpose layer (1078), but instead it is sending output with shape (1, 513, 151, 513).
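To make the intended data flow concrete, a sketch of the Reshape/Transpose pair as plain TF ops. The shapes come from the IR above, while the perm value is an assumption, since the contents of Const layer 1077 are not shown in the excerpt.

import tensorflow as tf

x = tf.zeros([263169, 151])            # Softmax output (layer 1074); 263169 == 513 * 513
y = tf.reshape(x, [1, 513, 513, 151])  # Reshape (layer 1076)
# Transpose (layer 1078) to (1, 151, 513, 513); perm [0, 3, 1, 2] is assumed.
z = tf.transpose(y, perm=[0, 3, 1, 2])
print(z.shape)                         # (1, 151, 513, 513)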

To produce this issue, I have issued the command:
python openvino2tensorflow.py --model_path=model.xml --output_no_quant_float32_tflite True --weight_replacement_config weights_config.json

such that weights_config.json is as follows:

{
    "format_version": 1,
    "layers": [
		{
            "layer_id": "1077",
            "replace_mode": "direct",
            "values": [
                1,
                513,
                513,
                151
            ]
        }
    ]
}

To my understanding, this should override the erroneous behavior of the Reshape layer (1076) so that the Transpose layer (1078) receives the correct input shape.

However, I have received the following error:

Traceback (most recent call last):
  File "E:\Anaconda\envs\openvino-vyro\lib\site-packages\tensorflow\python\util\dispatch.py", line 201, in wrapper
    return target(*args, **kwargs)
  File "E:\Anaconda\envs\openvino-vyro\lib\site-packages\tensorflow\python\ops\array_ops.py", line 2216, in transpose
    return transpose_fn(a, perm, name=name)
  File "E:\Anaconda\envs\openvino-vyro\lib\site-packages\tensorflow\python\ops\gen_array_ops.py", line 11589, in transpose
    x, perm, name=name, ctx=_ctx)
  File "E:\Anaconda\envs\openvino-vyro\lib\site-packages\tensorflow\python\ops\gen_array_ops.py", line 11609, in transpose_eager_fallback
    _attr_T, (x,) = _execute.args_to_matching_eager([x], ctx, [])
  File "E:\Anaconda\envs\openvino-vyro\lib\site-packages\tensorflow\python\eager\execute.py", line 274, in args_to_matching_eager
    t, dtype, preferred_dtype=default_dtype, ctx=ctx)
  File "E:\Anaconda\envs\openvino-vyro\lib\site-packages\tensorflow\python\profiler\trace.py", line 163, in wrapped
    return func(*args, **kwargs)
  File "E:\Anaconda\envs\openvino-vyro\lib\site-packages\tensorflow\python\framework\ops.py", line 1540, in convert_to_tensor
    ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
  File "E:\Anaconda\envs\openvino-vyro\lib\site-packages\tensorflow\python\framework\constant_op.py", line 339, in _constant_tensor_conversion_function
    return constant(v, dtype=dtype, name=name)
  File "E:\Anaconda\envs\openvino-vyro\lib\site-packages\tensorflow\python\framework\constant_op.py", line 265, in constant
    allow_broadcast=True)
  File "E:\Anaconda\envs\openvino-vyro\lib\site-packages\tensorflow\python\framework\constant_op.py", line 276, in _constant_impl
    return _constant_eager_impl(ctx, value, dtype, shape, verify_shape)
  File "E:\Anaconda\envs\openvino-vyro\lib\site-packages\tensorflow\python\framework\constant_op.py", line 301, in _constant_eager_impl
    t = convert_to_eager_tensor(value, ctx, dtype)
  File "E:\Anaconda\envs\openvino-vyro\lib\site-packages\tensorflow\python\framework\constant_op.py", line 98, in convert_to_eager_tensor
    return ops.EagerTensor(value, ctx.device_name, dtype)
  File "E:\Anaconda\envs\openvino-vyro\lib\site-packages\tensorflow\python\keras\engine\keras_tensor.py", line 274, in __array__
    'Cannot convert a symbolic Keras input/output to a numpy array. '
TypeError: Cannot convert a symbolic Keras input/output to a numpy array. This error may indicate that you're trying to pass a symbolic value to a NumPy call, which is not supported. Or, you may be trying to pass Keras symbolic inputs/outputs to a TF API that does not register dispatching, preventing Keras from automatically converting the API call to a lambda layer in the Functional Model.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "E:\Anaconda\envs\openvino-vyro\lib\site-packages\tensorflow\python\framework\ops.py", line 1853, in _create_c_op
    c_op = pywrap_tf_session.TF_FinishOperation(op_desc)
tensorflow.python.framework.errors_impl.InvalidArgumentError: perm dim 512 is out of range of input rank 4 for '{{node tf.compat.v1.transpose_2/transpose}} = Transpose[T=DT_FLOAT, Tperm=DT_INT32](Placeholder, tf.compat.v1.transpose_2/transpose/perm)' with input shapes: [1,513,151,513], [4] and with computed input tensors: input[1] = <3 512 512 150>.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "E:\Anaconda\envs\openvino-vyro\lib\site-packages\tensorflow\python\util\dispatch.py", line 201, in wrapper
    return target(*args, **kwargs)
  File "E:\Anaconda\envs\openvino-vyro\lib\site-packages\tensorflow\python\ops\array_ops.py", line 2135, in transpose_v2
    return transpose(a=a, perm=perm, name=name, conjugate=conjugate)
  File "E:\Anaconda\envs\openvino-vyro\lib\site-packages\tensorflow\python\util\dispatch.py", line 205, in wrapper
    result = dispatch(wrapper, args, kwargs)
  File "E:\Anaconda\envs\openvino-vyro\lib\site-packages\tensorflow\python\util\dispatch.py", line 122, in dispatch
    result = dispatcher.handle(op, args, kwargs)
  File "E:\Anaconda\envs\openvino-vyro\lib\site-packages\tensorflow\python\keras\layers\core.py", line 1450, in handle
    return TFOpLambda(op)(*args, **kwargs)
  File "E:\Anaconda\envs\openvino-vyro\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 952, in __call__
    input_list)
  File "E:\Anaconda\envs\openvino-vyro\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 1091, in _functional_construction_call
    inputs, input_masks, args, kwargs)
  File "E:\Anaconda\envs\openvino-vyro\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 822, in _keras_tensor_symbolic_call
    return self._infer_output_signature(inputs, args, kwargs, input_masks)
  File "E:\Anaconda\envs\openvino-vyro\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 863, in _infer_output_signature
    outputs = call_fn(inputs, *args, **kwargs)
  File "E:\Anaconda\envs\openvino-vyro\lib\site-packages\tensorflow\python\keras\layers\core.py", line 1327, in _call_wrapper
    return self._call_wrapper(*args, **kwargs)
  File "E:\Anaconda\envs\openvino-vyro\lib\site-packages\tensorflow\python\keras\layers\core.py", line 1359, in _call_wrapper
    result = self.function(*args, **kwargs)
  File "E:\Anaconda\envs\openvino-vyro\lib\site-packages\tensorflow\python\util\dispatch.py", line 201, in wrapper
    return target(*args, **kwargs)
  File "E:\Anaconda\envs\openvino-vyro\lib\site-packages\tensorflow\python\ops\array_ops.py", line 2216, in transpose
    return transpose_fn(a, perm, name=name)
  File "E:\Anaconda\envs\openvino-vyro\lib\site-packages\tensorflow\python\ops\gen_array_ops.py", line 11594, in transpose
    "Transpose", x=x, perm=perm, name=name)
  File "E:\Anaconda\envs\openvino-vyro\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 750, in _apply_op_helper
    attrs=attr_protos, op_def=op_def)
  File "E:\Anaconda\envs\openvino-vyro\lib\site-packages\tensorflow\python\framework\func_graph.py", line 592, in _create_op_internal
    compute_device)
  File "E:\Anaconda\envs\openvino-vyro\lib\site-packages\tensorflow\python\framework\ops.py", line 3536, in _create_op_internal
    op_def=op_def)
  File "E:\Anaconda\envs\openvino-vyro\lib\site-packages\tensorflow\python\framework\ops.py", line 2016, in __init__
    control_input_ops, op_def)
  File "E:\Anaconda\envs\openvino-vyro\lib\site-packages\tensorflow\python\framework\ops.py", line 1856, in _create_c_op
    raise ValueError(str(e))
ValueError: perm dim 512 is out of range of input rank 4 for '{{node tf.compat.v1.transpose_2/transpose}} = Transpose[T=DT_FLOAT, Tperm=DT_INT32](Placeholder, tf.compat.v1.transpose_2/transpose/perm)' with input shapes: [1,513,151,513], [4] and with computed input tensors: input[1] = <3 512 512 150>.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "E:\Anaconda\envs\openvino-vyro\lib\site-packages\tensorflow\python\framework\ops.py", line 1853, in _create_c_op
    c_op = pywrap_tf_session.TF_FinishOperation(op_desc)
tensorflow.python.framework.errors_impl.InvalidArgumentError: perm dim 512 is out of range of input rank 4 for '{{node tf.transpose/transpose}} = Transpose[T=DT_FLOAT, Tperm=DT_INT32](Placeholder, tf.transpose/transpose/perm)' with input shapes: [1,513,151,513], [4] and with computed input tensors: input[1] = <3 512 512 150>.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "openvino2tensorflow.py", line 2382, in <module>
    main()
  File "openvino2tensorflow.py", line 2378, in main
    yolact, weight_replacement_config, debug, debug_layer_number)
  File "openvino2tensorflow.py", line 1069, in convert
    print('res =', tf.transpose(tf_layers_dict[get_tf_edges_from(tf_edges, layer_id, 0)], perm=perm).shape)
  File "E:\Anaconda\envs\openvino-vyro\lib\site-packages\tensorflow\python\util\dispatch.py", line 205, in wrapper
    result = dispatch(wrapper, args, kwargs)
  File "E:\Anaconda\envs\openvino-vyro\lib\site-packages\tensorflow\python\util\dispatch.py", line 122, in dispatch
    result = dispatcher.handle(op, args, kwargs)
  File "E:\Anaconda\envs\openvino-vyro\lib\site-packages\tensorflow\python\keras\layers\core.py", line 1450, in handle
    return TFOpLambda(op)(*args, **kwargs)
  File "E:\Anaconda\envs\openvino-vyro\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 952, in __call__
    input_list)
  File "E:\Anaconda\envs\openvino-vyro\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 1091, in _functional_construction_call
    inputs, input_masks, args, kwargs)
  File "E:\Anaconda\envs\openvino-vyro\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 822, in _keras_tensor_symbolic_call
    return self._infer_output_signature(inputs, args, kwargs, input_masks)
  File "E:\Anaconda\envs\openvino-vyro\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 863, in _infer_output_signature
    outputs = call_fn(inputs, *args, **kwargs)
  File "E:\Anaconda\envs\openvino-vyro\lib\site-packages\tensorflow\python\keras\layers\core.py", line 1327, in _call_wrapper
    return self._call_wrapper(*args, **kwargs)
  File "E:\Anaconda\envs\openvino-vyro\lib\site-packages\tensorflow\python\keras\layers\core.py", line 1359, in _call_wrapper
    result = self.function(*args, **kwargs)
  File "E:\Anaconda\envs\openvino-vyro\lib\site-packages\tensorflow\python\util\dispatch.py", line 201, in wrapper
    return target(*args, **kwargs)
  File "E:\Anaconda\envs\openvino-vyro\lib\site-packages\tensorflow\python\ops\array_ops.py", line 2135, in transpose_v2
    return transpose(a=a, perm=perm, name=name, conjugate=conjugate)
  File "E:\Anaconda\envs\openvino-vyro\lib\site-packages\tensorflow\python\util\dispatch.py", line 201, in wrapper
    return target(*args, **kwargs)
  File "E:\Anaconda\envs\openvino-vyro\lib\site-packages\tensorflow\python\ops\array_ops.py", line 2216, in transpose
    return transpose_fn(a, perm, name=name)
  File "E:\Anaconda\envs\openvino-vyro\lib\site-packages\tensorflow\python\ops\gen_array_ops.py", line 11594, in transpose
    "Transpose", x=x, perm=perm, name=name)
  File "E:\Anaconda\envs\openvino-vyro\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 750, in _apply_op_helper
    attrs=attr_protos, op_def=op_def)
  File "E:\Anaconda\envs\openvino-vyro\lib\site-packages\tensorflow\python\framework\func_graph.py", line 592, in _create_op_internal
    compute_device)
  File "E:\Anaconda\envs\openvino-vyro\lib\site-packages\tensorflow\python\framework\ops.py", line 3536, in _create_op_internal
    op_def=op_def)
  File "E:\Anaconda\envs\openvino-vyro\lib\site-packages\tensorflow\python\framework\ops.py", line 2016, in __init__
    control_input_ops, op_def)
  File "E:\Anaconda\envs\openvino-vyro\lib\site-packages\tensorflow\python\framework\ops.py", line 1856, in _create_c_op
    raise ValueError(str(e))
ValueError: perm dim 512 is out of range of input rank 4 for '{{node tf.transpose/transpose}} = Transpose[T=DT_FLOAT, Tperm=DT_INT32](Placeholder, tf.transpose/transpose/perm)' with input shapes: [1,513,151,513], [4] and with computed input tensors: input[1] = <3 512 512 150>.

Not only is the Transpose layer (1078) receiving the wrong input, but the perm parameter of the Transpose has somehow been set to <3 512 512 150>. How can I solve this?
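For what it's worth, layer 1077 feeds the perm input of the Transpose (1078), so any values replaced there would have to be a permutation of axis indices, not a target shape; the Reshape's own shape constant is layer 1075. A hedged sketch of a config along those lines, assuming "direct" mode writes the values as-is (layer IDs taken from the XML above):

{
    "format_version": 1,
    "layers": [
        {
            "layer_id": "1075",
            "replace_mode": "direct",
            "values": [1, 513, 513, 151]
        },
        {
            "layer_id": "1077",
            "replace_mode": "direct",
            "values": [0, 3, 1, 2]
        }
    ]
}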

YOLACT conversion error support

ERROR: input_layer1 layer_id=616: KerasTensor(type_spec=TensorSpec(shape=(1, 19248, 2), dtype=tf.float32, name=None), name='tf.math.multiply_1/Mul:0', description="created by layer 'tf.math.multiply_1'")
ERROR: The trace log is below.
Traceback (most recent call last):
  File "openvino2tensorflow.py", line 474, in convert
    tmp_layers = [tf_layers_dict[from_layer_id].transpose(0,2,3,1).astype(np.float32) if type(tf_layers_dict[from_layer_id]) == np.ndarray else tf_layers_dict[from_layer_id] for from_layer_id in get_tf_edges_from(tf_edges, layer_id)]
  File "openvino2tensorflow.py", line 474, in <listcomp>
    tmp_layers = [tf_layers_dict[from_layer_id].transpose(0,2,3,1).astype(np.float32) if type(tf_layers_dict[from_layer_id]) == np.ndarray else tf_layers_dict[from_layer_id] for from_layer_id in get_tf_edges_from(tf_edges, layer_id)]
ValueError: axes don't match array

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "openvino2tensorflow.py", line 477, in convert
    tmp_layers = [tf_layers_dict[from_layer_id].transpose(0,2,3,1) if type(tf_layers_dict[from_layer_id]) == np.ndarray else tf_layers_dict[from_layer_id] for from_layer_id in get_tf_edges_from(tf_edges, layer_id)]
  File "openvino2tensorflow.py", line 477, in <listcomp>
    tmp_layers = [tf_layers_dict[from_layer_id].transpose(0,2,3,1) if type(tf_layers_dict[from_layer_id]) == np.ndarray else tf_layers_dict[from_layer_id] for from_layer_id in get_tf_edges_from(tf_edges, layer_id)]
ValueError: axes don't match array

Converting instance segmentation model - ExperimentalDetectronPriorGridGenerator layer is not yet implemented.

1. Google Colab

2. OS Architecture e.g. x86_64, armv7l, aarch64, etc

3. Version of OpenVINO - 2021.1.110 - http://114.116.222.215/data/l_openvino_toolkit_p_2021.1.110.tgz

4. Version of TensorFlow - v2.4.1

5. Download URL for OpenVINO IR (.bin/.xml) model https://download.01.org/opencv/2021/openvinotoolkit/2021.2/open_model_zoo/models_bin/3/instance-segmentation-security-1025/FP32/instance-segmentation-security-1025.xml

6. URL of the repository from which the transformed model was taken

https://docs.openvinotoolkit.org/latest/omz_models_intel_instance_segmentation_security_1025_description_instance_segmentation_security_1025.html

7. Issue Details

When converting the model I get a layer-not-implemented error. Either support for this layer is missing in openvino2tensorflow, or it is missing in OpenVINO 2021.1.

Could you please tell me what is going wrong here? Thanks.

[tflite] Yolact's output "prior" is missing.

Hello @PINTO0309,

I used openvino2tensorflow to convert YOLACT to tflite.
Inference with the ONNX model worked.
When I checked the ONNX and tflite models in NETRON, I noticed that the [19248, 4] part of the output is missing.

(NETRON screenshots: TFLite vs. ONNX)

https://github.com/Ma-Dan/yolact/tree/onnx
I am trying to run tflite inference with this code. Can inference still be done without this output?
Thank you in advance.

Trouble Converting vehicle-license-plate-detection-barrier-0106 Model

UBUNTU 18
TensorFlow v2.4.1
OpenVINO 2021.4

I want to convert the Open Model Zoo vehicle-license-plate-detection-barrier-0106 model into saved_model format, using the following command:
python3 openvino2tensorflow.py --model_path /opt/intel/openvino_2021/deployment_tools/open_model_zoo/tools/downloader/intel/vehicle-license-plate-detection-barrier-0106/FP16/vehicle-license-plate-detection-barrier-0106.xml --model_output_path /opt/intel/openvino_2021/deployment_tools/open_model_zoo/tools/downloader/intel/vehicle-license-plate-detection-barrier-0106/FP16/weights --output_saved_model

But I receive this log:
=====
ERROR: Cannot convert 2147483647.0 to EagerTensor of dtype int64
ERROR: model_path  : /opt/intel/openvino_2021/deployment_tools/open_model_zoo/tools/downloader/intel/vehicle-license-plate-detection-barrier-0106/FP16/vehicle-license-plate-detection-barrier-0106.xml
ERROR: weights_path: /opt/intel/openvino_2021/deployment_tools/open_model_zoo/tools/downloader/intel/vehicle-license-plate-detection-barrier-0106/FP16/vehicle-license-plate-detection-barrier-0106.bin
ERROR: layer_id    : 405
ERROR: input_layer0 layer_id=404: tf.Tensor([3], shape=(1,), dtype=int64)
ERROR: The trace log is below.
Traceback (most recent call last):
  File "openvino2tensorflow.py", line 555, in convert
    clip_value_max=cmax
  File "/home/hessam/.local/lib/python3.6/site-packages/tensorflow/python/util/dispatch.py", line 201, in wrapper
    return target(*args, **kwargs)
  File "/home/hessam/.local/lib/python3.6/site-packages/tensorflow/python/ops/clip_ops.py", line 111, in clip_by_value
    t_min = math_ops.minimum(values, clip_value_max)
  File "/home/hessam/.local/lib/python3.6/site-packages/tensorflow/python/ops/gen_math_ops.py", line 5929, in minimum
    _ctx, "Minimum", name, x, y)
TypeError: Cannot convert 2147483647.0 to EagerTensor of dtype int64

How can I see the weights of this layer and is there a way to convert this model into saved_model?
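As a side note, the raw values of any Const layer can be read straight out of the .bin file: each Const's <data> element in the .xml records its element_type, offset, and size, and the .bin is just a flat buffer. A minimal sketch (the offset/size/dtype are placeholders to be copied from the layer in question):

import numpy as np

# Copy these from the Const layer's <data> attributes in the .xml,
# e.g. <data element_type="i64" offset="..." shape="..." size="..."/>.
offset, size, dtype = 0, 32, np.int64  # placeholder values

with open("vehicle-license-plate-detection-barrier-0106.bin", "rb") as f:
    f.seek(offset)
    print(np.frombuffer(f.read(size), dtype=dtype))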

Gather got multiple indices

Currently the conversion works fine until it reaches a Gather operation.
It receives multiple indices, as shown in the screenshot below, which triggers:
*** TypeError: only size-1 arrays can be converted to Python scalars

Screenshot from 2020-10-24 11-12-36

I tried taking a look at Netron, but I don't know what to do based on the graph.
Screenshot from 2020-10-24 11-06-45

Thanks

Converted model outputs are not as expected

1. OS you are using e.g. Ubuntu 20.04, Windows10, etc

Colab

2. OS Architecture e.g. x86_64, armv7l, aarch64, etc

Colab

3. Version of OpenVINO e.g. 2021.2.185, etc

openvino_2021.1.110

4. Version of TensorFlow e.g. v2.4.1, tf-nightly==2.5.0.dev20210128, etc

tensorflow-2.4.1

5. Version of TensorRT e.g. TensorRT6.0 GA, etc

6. Version of TFJS e.g. 1.5.0, etc

7. Version of coremltools e.g. 4.0, etc

8. Version of ONNX e.g. v1.8.0, etc

onnx-1.8.1

9. Download URL for ONNX model

!pip install onnx

10. Download URL for OpenVINO IR (.bin/.xml) model

http://114.116.222.215/data/l_openvino_toolkit_p_2021.1.110.tgz

11. URL of the repository from which the transformed model was taken:

https://drive.google.com/uc?id=1MLC2lKnQvAQgBKZP1EXB6UdmqujY9qVd

12. URL or source code for simple inference testing code

https://colab.research.google.com/drive/1qiBH8e5Kqxx0AfAb9jiRMxSrqW_OMy7T?usp=sharing

13. Issue Details

Hi, thanks for the really great work!
I'm sharing my Colab notebook with all the steps to replicate the issue. Basically, I obtain a converted model that is close to the original ONNX model but not identical: with the same inputs, the outputs are slightly different, and with different input patterns the difference can be even larger. Am I doing something wrong? Thanks.
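One way to quantify the drift is to feed the same random input to both models and compare the outputs numerically. A minimal sketch, assuming a single 4D input and output, an NCHW-input ONNX model, and an NHWC-input/NHWC-output converted saved_model (file names and shapes are placeholders):

import numpy as np
import onnxruntime as ort
import tensorflow as tf

x = np.random.rand(1, 3, 224, 224).astype(np.float32)  # NCHW input for ONNX

sess = ort.InferenceSession("model.onnx")
onnx_out = sess.run(None, {sess.get_inputs()[0].name: x})[0]

infer = tf.saved_model.load("saved_model").signatures["serving_default"]
inp_name = list(infer.structured_input_signature[1].keys())[0]
tf_out = list(infer(**{inp_name: tf.constant(x.transpose(0, 2, 3, 1))}).values())[0].numpy()

# Bring the TF output back to NCHW before comparing
print("max abs diff:", np.abs(onnx_out - tf_out.transpose(0, 3, 1, 2)).max())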

ValueError: 5D tensor error

I'm using the Docker image.

13. Issue Details

I have this error:
user@d082b4901cc4:~/workdir/Desktop/quanz/yolov5$ sudo openvino2tensorflow --model_path openvino/exp16.xml --model_output_path tf_tflite --output_full_integer_quant_tflite
TensorFlow/Keras model building process starts ======================================
ERROR: Cannot reshape a tensor with 6084 elements to shape [1,3,12,13,36] (16848 elements) for '{{node tf.reshape/Reshape}} = Reshape[T=DT_FLOAT, Tshape=DT_INT64](Placeholder, tf.reshape/Reshape/shape)' with input shapes: [1,13,13,36], [5] and with input tensors computed as partial shapes: input[1] = [1,3,12,13,36].
ERROR: model_path : openvino/exp16.xml
ERROR: weights_path: openvino/exp16.bin
ERROR: layer_id : 407
ERROR: input_layer0 layer_id=385: KerasTensor(type_spec=TensorSpec(shape=(1, 13, 13, 36), dtype=tf.float32, name=None), name='tf.math.add_66/Add:0', description="created by layer 'tf.math.add_66'")
ERROR: input_layer1 layer_id=406: tf.Tensor([ 1 3 12 13 36], shape=(5,), dtype=int64)
ERROR: The trace log is below.
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py", line 206, in wrapper
    return target(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/array_ops.py", line 196, in reshape
    result = gen_array_ops.reshape(tensor, shape, name)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_array_ops.py", line 8398, in reshape
    return reshape_eager_fallback(
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_array_ops.py", line 8419, in reshape_eager_fallback
    _attr_T, (tensor,) = _execute.args_to_matching_eager([tensor], ctx, [])
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/execute.py", line 273, in args_to_matching_eager
    tensor = ops.convert_to_tensor(
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/profiler/trace.py", line 163, in wrapped
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1566, in convert_to_tensor
    ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/constant_op.py", line 346, in _constant_tensor_conversion_function
    return constant(v, dtype=dtype, name=name)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/constant_op.py", line 271, in constant
    return _constant_impl(value, dtype, shape, name, verify_shape=False,
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/constant_op.py", line 283, in _constant_impl
    return _constant_eager_impl(ctx, value, dtype, shape, verify_shape)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/constant_op.py", line 308, in _constant_eager_impl
    t = convert_to_eager_tensor(value, ctx, dtype)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/constant_op.py", line 106, in convert_to_eager_tensor
    return ops.EagerTensor(value, ctx.device_name, dtype)
  File "/usr/local/lib/python3.8/dist-packages/keras/engine/keras_tensor.py", line 244, in __array__
    raise TypeError(
TypeError: Cannot convert a symbolic Keras input/output to a numpy array. This error may indicate that you're trying to pass a symbolic value to a NumPy call, which is not supported. Or, you may be trying to pass Keras symbolic inputs/outputs to a TF API that does not register dispatching, preventing Keras from automatically converting the API call to a lambda layer in the Functional Model.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1880, in _create_c_op
    c_op = pywrap_tf_session.TF_FinishOperation(op_desc)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Cannot reshape a tensor with 6084 elements to shape [1,3,12,13,36] (16848 elements) for '{{node tf.reshape/Reshape}} = Reshape[T=DT_FLOAT, Tshape=DT_INT64](Placeholder, tf.reshape/Reshape/shape)' with input shapes: [1,13,13,36], [5] and with input tensors computed as partial shapes: input[1] = [1,3,12,13,36].

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/bin/openvino2tensorflow", line 1592, in convert
    tf_layers_dict[layer_id] = tf.reshape(op1, shape)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py", line 210, in wrapper
    result = dispatch(wrapper, args, kwargs)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py", line 126, in dispatch
    result = dispatcher.handle(op, args, kwargs)
  File "/usr/local/lib/python3.8/dist-packages/keras/layers/core.py", line 1473, in handle
    return TFOpLambda(op)(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py", line 976, in __call__
    return self._functional_construction_call(inputs, args, kwargs,
  File "/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py", line 1114, in _functional_construction_call
    outputs = self._keras_tensor_symbolic_call(
  File "/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py", line 848, in _keras_tensor_symbolic_call
    return self._infer_output_signature(inputs, args, kwargs, input_masks)
  File "/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py", line 888, in _infer_output_signature
    outputs = call_fn(inputs, *args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/keras/layers/core.py", line 1350, in _call_wrapper
    return self._call_wrapper(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/keras/layers/core.py", line 1382, in _call_wrapper
    result = self.function(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py", line 206, in wrapper
    return target(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/array_ops.py", line 196, in reshape
    result = gen_array_ops.reshape(tensor, shape, name)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_array_ops.py", line 8403, in reshape
    _, _, _op, _outputs = _op_def_library._apply_op_helper(
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/op_def_library.py", line 748, in _apply_op_helper
    op = g._create_op_internal(op_type_name, inputs, dtypes=None,
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/func_graph.py", line 599, in _create_op_internal
    return super(FuncGraph, self)._create_op_internal(  # pylint: disable=protected-access
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 3561, in _create_op_internal
    ret = Operation(
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 2041, in __init__
    self._c_op = _create_c_op(self._graph, node_def, inputs,
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 1883, in _create_c_op
    raise ValueError(str(e))
ValueError: Cannot reshape a tensor with 6084 elements to shape [1,3,12,13,36] (16848 elements) for '{{node tf.reshape/Reshape}} = Reshape[T=DT_FLOAT, Tshape=DT_INT64](Placeholder, tf.reshape/Reshape/shape)' with input shapes: [1,13,13,36], [5] and with input tensors computed as partial shapes: input[1] = [1,3,12,13,36].
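For what it's worth, the shape constant being fed to the Reshape here is layer 406 with values [1, 3, 12, 13, 36] (16848 elements), while only 6084 elements actually flow in. If the intended NCHW reshape was [1, 36, 13, 13] -> [1, 3, 12, 13, 13], then an NHWC-side target with the right element count would be something like [1, 13, 13, 3, 12], which a weight replacement file can force. A hedged sketch only; the correct values depend on the model and on the transpose that follows:

{
    "format_version": 1,
    "layers": [
        {
            "layer_id": "406",
            "replace_mode": "direct",
            "values": [1, 13, 13, 3, 12]
        }
    ]
}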

single output in u2net onnx model from pytorch

@PINTO0309 amazing work! I am performing a U2NET conversion from PyTorch to ONNX with a single output, i.e. d0, instead of all 7 (d0, d1, d2, d3, d4, d5, d6).

I followed the blog you shared, and another blog post which builds on it. But after conversion, instead of a single sigmoid output for d0, it gives me 7 sigmoid outputs for all 7 output variables, even though I explicitly specified only the d0 sigmoid in the output names of the command used to convert to ONNX.
Initially I tried this command:

python3 ${INTEL_OPENVINO_DIR}/deployment_tools/tools/model_downloader/pytorch_to_onnx.py \
  --import-module model.u2net \
  --model-name U2NETP \
  --input-shape 1,3,320,320 \
  --weights saved_models/u2netp/u2netp.pth \
  --output-file u2netp_320x320.onnx \
  --input-names "x" \
  --output-names "F.sigmoid(d0)"

https://qiita.com/PINTO/items/ed06e03eb5c007c2e102#6-6-2-generate-onnx-using-pytorch_to_onnxpy-a-backend-module-of-openvinos-model_downloader

Then I tried this command from another blog post.

python3 /opt/intel/openvino_2021/deployment_tools/tools/model_downloader/pytorch_to_onnx.py \
  --import-module model.u2net \
  --model-name U2NETP \
  --input-shape 1,3,${SIZE},${SIZE} \
  --weights saved_models/u2netp/u2netp.pth \
  --output-file u2netp_${SIZE}x${SIZE}.onnx \
  --input-names "x" \
  --output-names "a/F.sigmoid(d0)"

https://dannadori.medium.com/convert-pytorch-model-to-tensorflowjs-fb3bc8e90589

But after a successful conversion to ONNX, I opened the model in NETRON (a network visualization package) to check the outputs, and instead of 1 output variable the model still has all 7.
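If it helps: torch.onnx.export traces whatever forward() returns, so naming a single output does not by itself prune the other six. One workaround is to wrap the model so that forward() returns only d0 before exporting. A rough sketch (untested; assumes U2NETP's forward returns (d0, ..., d6) with sigmoid already applied, as in the reference u2net.py):

import torch
from model.u2net import U2NETP  # import path as used in the commands above

class D0Only(torch.nn.Module):
    # Wrapper whose forward() returns only the first (d0) output.
    def __init__(self, net):
        super().__init__()
        self.net = net

    def forward(self, x):
        return self.net(x)[0]

net = U2NETP()
net.load_state_dict(torch.load("saved_models/u2netp/u2netp.pth", map_location="cpu"))
net.eval()

dummy = torch.randn(1, 3, 320, 320)
torch.onnx.export(D0Only(net), dummy, "u2netp_320x320_d0.onnx",
                  opset_version=11, input_names=["x"], output_names=["d0"])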

ValueError: Dimensions must be equal. NCHW v NHWC inconsistency

1. OS you are using e.g. Ubuntu 20.04, Windows10, etc

Ubuntu 18.04

2. OS Architecture e.g. x86_64, armv7l, aarch64, etc

x86 64

3. Version of OpenVINO e.g. 2021.2.185, etc

2021.2.0-1877-176bdf51370

4. Version of TensorFlow e.g. v2.4.1, tf-nightly==2.5.0.dev20210128, etc

2.4.1

8. Version of ONNX e.g. v1.8.0, etc

1.7.0

9. Download URL for ONNX model

my onnx model is located here

For the OpenVINO conversion, I did

python /opt/intel/openvino_2021/deployment_tools/model_optimizer/mo.py --input_model resnet18.onnx --input_shape [1,3,192,640] --output_dir openvino/192x640/FP32 --data_type FP32

and then for TFLite, I did

openvino2tensorflow \
  --model_path openvino/192x640/FP32/resnet18.xml \
  --model_output_path resnet18 \
  --output_saved_model True \
  --output_edgetpu=True

13. Issue Details

I get the following error. I think there is some inconsistency in the NCHW to NHWC conversion.

TensorFlow/Keras model building process starts ======================================
Traceback (most recent call last):
  File "/home/e2r/anaconda3/envs/e2r/lib/python3.7/site-packages/tensorflow/python/ops/gen_math_ops.py", line 332, in add
    x, y, name=name, ctx=_ctx)
  File "/home/e2r/anaconda3/envs/e2r/lib/python3.7/site-packages/tensorflow/python/ops/gen_math_ops.py", line 366, in add_eager_fallback
    _attr_T, _inputs_T = _execute.args_to_matching_eager([x, y], ctx, [_dtypes.bfloat16, _dtypes.half, _dtypes.float32, _dtypes.float64, _dtypes.uint8, _dtypes.int8, _dtypes.int16, _dtypes.int32, _dtypes.int64, _dtypes.complex64, _dtypes.complex128, _dtypes.string, ])
  File "/home/e2r/anaconda3/envs/e2r/lib/python3.7/site-packages/tensorflow/python/eager/execute.py", line 265, in args_to_matching_eager
    tensor = ops.convert_to_tensor(t, ctx=ctx)
  File "/home/e2r/anaconda3/envs/e2r/lib/python3.7/site-packages/tensorflow/python/profiler/trace.py", line 163, in wrapped
    return func(*args, **kwargs)
  File "/home/e2r/anaconda3/envs/e2r/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 1540, in convert_to_tensor
    ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
  File "/home/e2r/anaconda3/envs/e2r/lib/python3.7/site-packages/tensorflow/python/framework/constant_op.py", line 339, in _constant_tensor_conversion_function
    return constant(v, dtype=dtype, name=name)
  File "/home/e2r/anaconda3/envs/e2r/lib/python3.7/site-packages/tensorflow/python/framework/constant_op.py", line 265, in constant
    allow_broadcast=True)
  File "/home/e2r/anaconda3/envs/e2r/lib/python3.7/site-packages/tensorflow/python/framework/constant_op.py", line 276, in _constant_impl
    return _constant_eager_impl(ctx, value, dtype, shape, verify_shape)
  File "/home/e2r/anaconda3/envs/e2r/lib/python3.7/site-packages/tensorflow/python/framework/constant_op.py", line 301, in _constant_eager_impl
    t = convert_to_eager_tensor(value, ctx, dtype)
  File "/home/e2r/anaconda3/envs/e2r/lib/python3.7/site-packages/tensorflow/python/framework/constant_op.py", line 98, in convert_to_eager_tensor
    return ops.EagerTensor(value, ctx.device_name, dtype)
  File "/home/e2r/anaconda3/envs/e2r/lib/python3.7/site-packages/tensorflow/python/keras/engine/keras_tensor.py", line 274, in __array__
    'Cannot convert a symbolic Keras input/output to a numpy array. '
TypeError: Cannot convert a symbolic Keras input/output to a numpy array. This error may indicate that you're trying to pass a symbolic value to a NumPy call, which is not supported. Or, you may be trying to pass Keras symbolic inputs/outputs to a TF API that does not register dispatching, preventing Keras from automatically converting the API call to a lambda layer in the Functional Model.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/e2r/anaconda3/envs/e2r/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 1853, in _create_c_op
    c_op = pywrap_tf_session.TF_FinishOperation(op_desc)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Dimensions must be equal, but are 1920 and 3 for '{{node tf.math.add_55/Add}} = Add[T=DT_FLOAT](Placeholder, tf.math.add_55/Add/y)' with input shapes: [1,1920,85,3], [1,3,1920,85].

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/e2r/Desktop/e2r/easy2ride_pipeline/src/openvino2tensorflow/scripts/openvino2tensorflow", line 478, in convert
    tf_layers_dict[layer_id] = tf.math.add(tf_layers_dict[edge_id0], tf_layers_dict[edge_id1])
  File "/home/e2r/anaconda3/envs/e2r/lib/python3.7/site-packages/tensorflow/python/ops/gen_math_ops.py", line 337, in add
    add, (), dict(x=x, y=y, name=name)
  File "/home/e2r/anaconda3/envs/e2r/lib/python3.7/site-packages/tensorflow/python/util/dispatch.py", line 122, in dispatch
    result = dispatcher.handle(op, args, kwargs)
  File "/home/e2r/anaconda3/envs/e2r/lib/python3.7/site-packages/tensorflow/python/keras/layers/core.py", line 1450, in handle
    return TFOpLambda(op)(*args, **kwargs)
  File "/home/e2r/anaconda3/envs/e2r/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py", line 952, in __call__
    input_list)
  File "/home/e2r/anaconda3/envs/e2r/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py", line 1091, in _functional_construction_call
    inputs, input_masks, args, kwargs)
  File "/home/e2r/anaconda3/envs/e2r/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py", line 822, in _keras_tensor_symbolic_call
    return self._infer_output_signature(inputs, args, kwargs, input_masks)
  File "/home/e2r/anaconda3/envs/e2r/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py", line 863, in _infer_output_signature
    outputs = call_fn(inputs, *args, **kwargs)
  File "/home/e2r/anaconda3/envs/e2r/lib/python3.7/site-packages/tensorflow/python/keras/layers/core.py", line 1327, in _call_wrapper
    return self._call_wrapper(*args, **kwargs)
  File "/home/e2r/anaconda3/envs/e2r/lib/python3.7/site-packages/tensorflow/python/keras/layers/core.py", line 1359, in _call_wrapper
    result = self.function(*args, **kwargs)
  File "/home/e2r/anaconda3/envs/e2r/lib/python3.7/site-packages/tensorflow/python/ops/gen_math_ops.py", line 345, in add
    "Add", x=x, y=y, name=name)
  File "/home/e2r/anaconda3/envs/e2r/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py", line 750, in _apply_op_helper
    attrs=attr_protos, op_def=op_def)
  File "/home/e2r/anaconda3/envs/e2r/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py", line 592, in _create_op_internal
    compute_device)
  File "/home/e2r/anaconda3/envs/e2r/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 3536, in _create_op_internal
    op_def=op_def)
  File "/home/e2r/anaconda3/envs/e2r/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 2016, in __init__
    control_input_ops, op_def)
  File "/home/e2r/anaconda3/envs/e2r/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 1856, in _create_c_op
    raise ValueError(str(e))
ValueError: Dimensions must be equal, but are 1920 and 3 for '{{node tf.math.add_55/Add}} = Add[T=DT_FLOAT](Placeholder, tf.math.add_55/Add/y)' with input shapes: [1,1920,85,3], [1,3,1920,85].

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/e2r/anaconda3/envs/e2r/lib/python3.7/site-packages/tensorflow/python/ops/gen_math_ops.py", line 332, in add
    x, y, name=name, ctx=_ctx)
  File "/home/e2r/anaconda3/envs/e2r/lib/python3.7/site-packages/tensorflow/python/ops/gen_math_ops.py", line 366, in add_eager_fallback
    _attr_T, _inputs_T = _execute.args_to_matching_eager([x, y], ctx, [_dtypes.bfloat16, _dtypes.half, _dtypes.float32, _dtypes.float64, _dtypes.uint8, _dtypes.int8, _dtypes.int16, _dtypes.int32, _dtypes.int64, _dtypes.complex64, _dtypes.complex128, _dtypes.string, ])
  File "/home/e2r/anaconda3/envs/e2r/lib/python3.7/site-packages/tensorflow/python/eager/execute.py", line 265, in args_to_matching_eager
    tensor = ops.convert_to_tensor(t, ctx=ctx)
  File "/home/e2r/anaconda3/envs/e2r/lib/python3.7/site-packages/tensorflow/python/profiler/trace.py", line 163, in wrapped
    return func(*args, **kwargs)
  File "/home/e2r/anaconda3/envs/e2r/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 1540, in convert_to_tensor
    ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
  File "/home/e2r/anaconda3/envs/e2r/lib/python3.7/site-packages/tensorflow/python/framework/constant_op.py", line 339, in _constant_tensor_conversion_function
    return constant(v, dtype=dtype, name=name)
  File "/home/e2r/anaconda3/envs/e2r/lib/python3.7/site-packages/tensorflow/python/framework/constant_op.py", line 265, in constant
    allow_broadcast=True)
  File "/home/e2r/anaconda3/envs/e2r/lib/python3.7/site-packages/tensorflow/python/framework/constant_op.py", line 276, in _constant_impl
    return _constant_eager_impl(ctx, value, dtype, shape, verify_shape)
  File "/home/e2r/anaconda3/envs/e2r/lib/python3.7/site-packages/tensorflow/python/framework/constant_op.py", line 301, in _constant_eager_impl
    t = convert_to_eager_tensor(value, ctx, dtype)
  File "/home/e2r/anaconda3/envs/e2r/lib/python3.7/site-packages/tensorflow/python/framework/constant_op.py", line 98, in convert_to_eager_tensor
    return ops.EagerTensor(value, ctx.device_name, dtype)
  File "/home/e2r/anaconda3/envs/e2r/lib/python3.7/site-packages/tensorflow/python/keras/engine/keras_tensor.py", line 274, in __array__
    'Cannot convert a symbolic Keras input/output to a numpy array. '
TypeError: Cannot convert a symbolic Keras input/output to a numpy array. This error may indicate that you're trying to pass a symbolic value to a NumPy call, which is not supported. Or, you may be trying to pass Keras symbolic inputs/outputs to a TF API that does not register dispatching, preventing Keras from automatically converting the API call to a lambda layer in the Functional Model.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/e2r/anaconda3/envs/e2r/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 1853, in _create_c_op
    c_op = pywrap_tf_session.TF_FinishOperation(op_desc)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Dimensions must be equal, but are 1920 and 3 for '{{node tf.math.add_56/Add}} = Add[T=DT_FLOAT](Placeholder, tf.math.add_56/Add/y)' with input shapes: [1,1920,85,3], [1,3,1920,85].

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/e2r/anaconda3/envs/e2r/bin/openvino2tensorflow", line 7, in <module>
    exec(compile(f.read(), __file__, 'exec'))
  File "/home/e2r/Desktop/e2r/easy2ride_pipeline/src/openvino2tensorflow/scripts/openvino2tensorflow", line 2366, in <module>
    main()
  File "/home/e2r/Desktop/e2r/easy2ride_pipeline/src/openvino2tensorflow/scripts/openvino2tensorflow", line 2362, in main
    yolact, weight_replacement_config, debug, debug_layer_number)
  File "/home/e2r/Desktop/e2r/easy2ride_pipeline/src/openvino2tensorflow/scripts/openvino2tensorflow", line 483, in convert
    tf_layers_dict[layer_id] = tf.math.add(tf_layers_dict[edge_id0], tf_layers_dict[edge_id1])
  File "/home/e2r/anaconda3/envs/e2r/lib/python3.7/site-packages/tensorflow/python/ops/gen_math_ops.py", line 337, in add
    add, (), dict(x=x, y=y, name=name)
  File "/home/e2r/anaconda3/envs/e2r/lib/python3.7/site-packages/tensorflow/python/util/dispatch.py", line 122, in dispatch
    result = dispatcher.handle(op, args, kwargs)
  File "/home/e2r/anaconda3/envs/e2r/lib/python3.7/site-packages/tensorflow/python/keras/layers/core.py", line 1450, in handle
    return TFOpLambda(op)(*args, **kwargs)
  File "/home/e2r/anaconda3/envs/e2r/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py", line 952, in __call__
    input_list)
  File "/home/e2r/anaconda3/envs/e2r/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py", line 1091, in _functional_construction_call
    inputs, input_masks, args, kwargs)
  File "/home/e2r/anaconda3/envs/e2r/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py", line 822, in _keras_tensor_symbolic_call
    return self._infer_output_signature(inputs, args, kwargs, input_masks)
  File "/home/e2r/anaconda3/envs/e2r/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py", line 863, in _infer_output_signature
    outputs = call_fn(inputs, *args, **kwargs)
  File "/home/e2r/anaconda3/envs/e2r/lib/python3.7/site-packages/tensorflow/python/keras/layers/core.py", line 1327, in _call_wrapper
    return self._call_wrapper(*args, **kwargs)
  File "/home/e2r/anaconda3/envs/e2r/lib/python3.7/site-packages/tensorflow/python/keras/layers/core.py", line 1359, in _call_wrapper
    result = self.function(*args, **kwargs)
  File "/home/e2r/anaconda3/envs/e2r/lib/python3.7/site-packages/tensorflow/python/ops/gen_math_ops.py", line 345, in add
    "Add", x=x, y=y, name=name)
  File "/home/e2r/anaconda3/envs/e2r/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py", line 750, in _apply_op_helper
    attrs=attr_protos, op_def=op_def)
  File "/home/e2r/anaconda3/envs/e2r/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py", line 592, in _create_op_internal
    compute_device)
  File "/home/e2r/anaconda3/envs/e2r/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 3536, in _create_op_internal
    op_def=op_def)
  File "/home/e2r/anaconda3/envs/e2r/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 2016, in __init__
    control_input_ops, op_def)
  File "/home/e2r/anaconda3/envs/e2r/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 1856, in _create_c_op
    raise ValueError(str(e))
ValueError: Dimensions must be equal, but are 1920 and 3 for '{{node tf.math.add_56/Add}} = Add[T=DT_FLOAT](Placeholder, tf.math.add_56/Add/y)' with input shapes: [1,1920,85,3], [1,3,1920,85].

Yolo V5 conversion problem: How can I properly make weight replacement file

Hello. First, I want to thank you for this awesome repository.

I am trying to convert a yolov5 model from ONNX (coming from the .pt format) to .tflite (int8 quantized). The problem can be narrowed down to the fact that our model has a 5D Reshape layer, which, as mentioned in the readme.md, is currently reported to be problematic.
Your suggested solution is "writing a weights replacement file". Unfortunately, this part lacks detailed documentation. Going straight to my question: I want to know how I can write a replacement file for the layer(s) causing problems.
It is clear that the replacement JSON needs values, and I am not sure how I can get those values to put inside my own replacement file.
I have the .pt model, the ONNX model, and the OpenVINO model. How can I get the replacement fields from any of those?
Thanks in advance, Regards.
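For anyone else stuck here, one way to find the candidate layer_id and its current values is to read them straight out of the IR pair: every Const's <data> element in the .xml records its offset and size into the .bin. A sketch (file names are placeholders) that lists all int64 Const layers, which are the shape/perm inputs you would typically override:

import xml.etree.ElementTree as ET
import numpy as np

xml_path, bin_path = "model.xml", "model.bin"  # placeholders

root = ET.parse(xml_path).getroot()
with open(bin_path, "rb") as f:
    blob = f.read()

for layer in root.iter("layer"):
    data = layer.find("data")
    if layer.get("type") == "Const" and data is not None \
            and data.get("element_type") == "i64":
        off, size = int(data.get("offset")), int(data.get("size"))
        vals = np.frombuffer(blob[off:off + size], dtype=np.int64)
        print(layer.get("id"), layer.get("name"), vals)

The printed layer id and the corrected values then go into the "layer_id" / "values" fields of the replacement JSON.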

openvino convert list index out of range

OS Ubuntu 20.04

OS Architecture x86_64

Version of OpenVINO 2021.3.0

Version of TensorFlow v2.4.1

Version of ONNX v1.8.1

Download URL for ONNX, Optimized ONNX and openvino: https://drive.google.com/file/d/1wHqUoqW42jq2r4GIdfAh1wam7NS18eZD/view?usp=sharing

Issue Details

I am following your blog post (https://qiita.com/PINTO/items/ed06e03eb5c007c2e102#6-8-onnx---openvino-ir-conversion) and trying to convert OSnet to tflite. I saw someone else was successful in doing so, but I get the following error when trying to convert the OpenVINO model to a TensorFlow saved_model.

openvino2tensorflow --model_path=openvino/osnet_opt.xml  --output_saved_model True

TensorFlow/Keras model building process starts ======================================
Traceback (most recent call last):
  File "/home/ferrejanssen/anaconda3/envs/fastreid/bin/openvino2tensorflow", line 2437, in <module>
    main()
  File "/home/ferrejanssen/anaconda3/envs/fastreid/bin/openvino2tensorflow", line 2433, in main
    yolact, weight_replacement_config, debug, debug_layer_number)
  File "/home/ferrejanssen/anaconda3/envs/fastreid/bin/openvino2tensorflow", line 449, in convert
    if temp.shape[0] == 1 and temp.shape[2] == 1 and temp.shape[3] == 1:
  File "/home/ferrejanssen/anaconda3/envs/fastreid/lib/python3.7/site-packages/tensorflow/python/framework/tensor_shape.py", line 889, in __getitem__
    return self._dims[key].value
IndexError: list index out of range

Converting a custom PyTorch UNet for Fully Quantized TFLite for Edge TPU

First of all, thank you very much for your work. I've found your blog post on PyTorch conversion very informative.

My goal was to use your package to get the model from NCHW into NHWC without having Transpose layers everywhere in my quantized tflite model, so that it can run on an Edge TPU efficiently.


I largely followed your tutorial and was able to do PyTorch -> ONNX -> OpenVINO.

However, an error occurs when using openvino2tensorflow (I used your Docker image, so I think dependencies aren't an issue here)

I received this error:

ERROR: Dimension 1 in both shapes must be equal, but are 5 and 4. Shapes are [1,5,64] and [1,4,64]. for '{{node tf.concat/concat}} = ConcatV2[N=2, T=DT_FLOAT, Tidx=DT_INT32](Placeholder, Placeholder_1, tf.concat/concat/axis)' with input shapes: [1,5,64,48], [1,4,64,48], [] and with computed input tensors: input[2] = <-1>.
ERROR: model_path  : openvino/unet/fp32/unet_deblur_opt.xml
ERROR: weights_path: openvino/unet/fp32/unet_deblur_opt.bin
ERROR: layer_id    : 52
ERROR: input_layer0 layer_id=32: KerasTensor(type_spec=TensorSpec(shape=(1, 5, 64, 48), dtype=tf.float32, name=None), name='tf.nn.relu_5/Relu:0', description="created by layer 'tf.nn.relu_5'")
ERROR: input_layer1 layer_id=51: KerasTensor(type_spec=TensorSpec(shape=(1, 4, 64, 48), dtype=tf.float32, name=None), name='tf.identity/Identity:0', description="created by layer 'tf.identity'")

(side note: these are the most informative error messages I've seen in my journey of trying out various conversion packages)

Based on those messages, I used Netron to inspect the xml file produced by OpenVINO. I was pretty baffled by it, because from the graph itself it doesn't look like the dimensions are mismatched.
This is layer 32, simply a ReLU operation (shown at the very top):

And this is layer 51, a Pad operation:

Even looking at layer 52, the Concat operation where the dimension mismatch supposedly happens, the shapes of the two inputs are shown as [1, 48, 5, 64] and [1, 48, 5, 64]; there is no mismatch.

Is it possible that somehow the Pad operation isn't running properly? I did find it a little strange that the Pad layer becomes a tf.identity according to the error message: ERROR: input_layer1 layer_id=51: KerasTensor(type_spec=TensorSpec(shape=(1, 4, 64, 48), dtype=tf.float32, name=None), name='tf.identity/Identity:0', description="created by layer 'tf.identity'")


Please let me know if there is anything else you want me to elaborate on; I'm happy to provide more details.

One minor note is that I didn't fully follow your blog post for the PyTorch -> ONNX step. Instead of using the backend module of OpenVINO's model downloader, I just did torch.onnx.export on my own, where the hyperparameter settings I used were

export_params=True
do_constant_folding=True
opset_version=11

I used onnxsim to further optimize the ONNX model as well.
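Concretely, the export and simplification steps above were along these lines (a sketch; model, the dummy input shape, and the file names are placeholders):

import torch
import onnx
from onnxsim import simplify

# model = <your UNet instance>, already in eval() mode
torch.onnx.export(model, torch.randn(1, 3, 256, 192), "unet_deblur.onnx",
                  export_params=True, do_constant_folding=True, opset_version=11)

m, ok = simplify(onnx.load("unet_deblur.onnx"))
assert ok, "onnx-simplifier check failed"
onnx.save(m, "unet_deblur_opt.onnx")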

[Yolact] Cannot convert ONNX model with NMS to tflite

I introduced NMS to yolact's ONNX model in my own unique way.
However, after converting to OpenVINO, I am unable to convert to tflite.

OS etc.

windows10
cuda10.2

Version

python=3.6.8
tensorflow=2.5.0
onnx=1.9.0
openvino2tensorflow=1.15.3
pytorch=1.8.1+cu102

How to add an NMS

  1. Make the code modification as shown in this URL.
  2. Change the output of the Yolact class in the modified yolact.py to look like the following.
return self.detect(pred_outs)
  3. The loc and priors in detection.py are gone, and decode_boxes is defined instead.
# loc_data   = predictions['loc']
conf_data  = predictions['conf']
mask_data  = predictions['mask']
# prior_data = predictions['priors']
decoded_boxes = predictions['boxes'] # [1,19248,4]
  4. The continuation of number 3 was as follows.
proto_data = predictions['proto'] if 'proto' in predictions else None
inst_data  = predictions['inst']  if 'inst'  in predictions else None

out = []

with timer.env('Detect'):
    batch_size = 1
    num_priors = 19248

    conf_preds = conf_data.view(batch_size, num_priors, self.num_classes).transpose(2, 1).contiguous()

    for batch_idx in range(batch_size):
        result = self.detect(batch_idx, conf_preds, decoded_boxes[0], mask_data, inst_data)

        if result is not None and proto_data is not None:
            result['proto'] = proto_data[batch_idx]

        out.append(result)

return out
  5. Next, we move on to the NMS process. Here, I thought of adding NMS using torchvision.ops.nms (see the note after this list).
def pytorch_nms(self, boxes, decode_boxes, masks, conf_scores, scores, iou_threshold:float=0.5, top_k:int=200):
    scores, classes = scores.max(dim=0)
    
    _, idx = scores.sort(0, descending=True)
    idx = idx[:top_k]
    
    id = (conf_scores > self.conf_thresh)
    conf_scores = conf_scores[id]
    decode_boxes = decode_boxes[id, :]
    
    keep = torchvision.ops.nms(boxes=decode_boxes, scores=conf_scores, iou_threshold=iou_threshold).to(torch.int64)
    
    return boxes[keep], masks[keep], classes[keep], scores[keep]
  6. Convert to ONNX.
torch.onnx.export(net, dummy, "yolact_detect_pytorchnms10.onnx", opset_version=11, input_names=['input'], output_names=['boxes', 'masks', 'classes', 'scores', 'proto'])
# If opset_version<=10, torchvision.ops.nms could not be converted.
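Note: the failure below appears to come from the boolean-mask indexing in step 5 (conf_scores > self.conf_thresh used directly as an index), which exports as a Gather whose indices are bool, and which openvino2tensorflow then maps onto tf.gather_nd. A hedged workaround on the PyTorch side is to index with integer positions instead (a drop-in change inside pytorch_nms):

# Replace the boolean-mask indexing with integer indices so the exported
# Gather receives int64 indices instead of bool:
keep_idx = torch.nonzero(conf_scores > self.conf_thresh, as_tuple=False).squeeze(1)
conf_scores = conf_scores[keep_idx]
decode_boxes = decode_boxes[keep_idx, :]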

Error in openvino2tensorflow

command

python openvino2tensorflow.py --model_path yolact_detect_pytorchnms11.xml --model_output_path yolact --output_saved_model --output_no_quant_float32_tflite --output_integer_quant_tflite --output_full_integer_quant_tflite

result

TensorFlow/Keras model building process starts ======================================
ERROR: Value passed to parameter 'indices' has DataType bool not in list of allowed values: int32, int64
ERROR: model_path  : yolact_detect_pytorchnms11.xml
ERROR: weights_path: yolact_detect_pytorchnms11.bin
ERROR: layer_id    : 646
ERROR: input_layer0 layer_id=642: KerasTensor(type_spec=TensorSpec(shape=(19248, 4), dtype=tf.float32, name=None), name='tf.compat.v1.squeeze_2/Squeeze:0', description="created by layer 'tf.compat.v1.squeeze_2'")
ERROR: input_layer1 layer_id=645: KerasTensor(type_spec=TensorSpec(shape=(None, 1), dtype=tf.bool, name=None), name='tf.compat.v1.transpose_12/transpose:0', description="created by layer 'tf.compat.v1.transpose_12'")
ERROR: The trace log is below.
Traceback (most recent call last):
  File "C:\Users\20-0365\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\ops\array_ops.py", line 5350, in gather_nd
    return params.gather_nd(indices, name=name)
AttributeError: 'KerasTensor' object has no attribute 'gather_nd'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\20-0365\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\util\dispatch.py", line 206, in wrapper
    return target(*args, **kwargs)
  File "C:\Users\20-0365\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\ops\array_ops.py", line 5352, in gather_nd
    return gen_array_ops.gather_nd(params, indices, name=name)
  File "C:\Users\20-0365\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\ops\gen_array_ops.py", line 3719, in gather_nd
    params, indices, name=name, ctx=_ctx)
  File "C:\Users\20-0365\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\ops\gen_array_ops.py", line 3739, in gather_nd_eager_fallback
    _attr_Tparams, (params,) = _execute.args_to_matching_eager([params], ctx, [])
  File "C:\Users\20-0365\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\eager\execute.py", line 274, in args_to_matching_eager
    t, dtype, preferred_dtype=default_dtype, ctx=ctx)
  File "C:\Users\20-0365\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\profiler\trace.py", line 163, in wrapped
    return func(*args, **kwargs)
  File "C:\Users\20-0365\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\framework\ops.py", line 1566, in convert_to_tensor
    ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
  File "C:\Users\20-0365\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\framework\constant_op.py", line 339, in _constant_tensor_conversion_function
    return constant(v, dtype=dtype, name=name)
  File "C:\Users\20-0365\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\framework\constant_op.py", line 265, in constant
    allow_broadcast=True)
  File "C:\Users\20-0365\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\framework\constant_op.py", line 276, in _constant_impl
    return _constant_eager_impl(ctx, value, dtype, shape, verify_shape)
  File "C:\Users\20-0365\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\framework\constant_op.py", line 301, in _constant_eager_impl
    t = convert_to_eager_tensor(value, ctx, dtype)
  File "C:\Users\20-0365\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\framework\constant_op.py", line 98, in convert_to_eager_tensor
    return ops.EagerTensor(value, ctx.device_name, dtype)
  File "C:\Users\20-0365\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\keras\engine\keras_tensor.py", line 255, in __array__
    'Cannot convert a symbolic Keras input/output to a numpy array. '
TypeError: Cannot convert a symbolic Keras input/output to a numpy array. This error may indicate that you're trying to pass a symbolic value to a NumPy call, which is not supported. Or, you may be trying to pass Keras symbolic inputs/outputs to a TF API that does not register dispatching, preventing Keras from automatically converting the API call to a lambda layer in the Functional Model.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\20-0365\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\ops\array_ops.py", line 5350, in gather_nd
    return params.gather_nd(indices, name=name)
  File "C:\Users\20-0365\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\framework\ops.py", line 401, in __getattr__
    self.__getattribute__(name)
AttributeError: 'Tensor' object has no attribute 'gather_nd'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\20-0365\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\util\dispatch.py", line 206, in wrapper
    return target(*args, **kwargs)
  File "C:\Users\20-0365\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\ops\array_ops.py", line 5360, in gather_nd_v2
    return gather_nd(params, indices, name=name, batch_dims=batch_dims)
  File "C:\Users\20-0365\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\util\dispatch.py", line 210, in wrapper
    result = dispatch(wrapper, args, kwargs)
  File "C:\Users\20-0365\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\util\dispatch.py", line 126, in dispatch
    result = dispatcher.handle(op, args, kwargs)
  File "C:\Users\20-0365\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\keras\layers\core.py", line 1486, in handle
    return TFOpLambda(op)(*args, **kwargs)
  File "C:\Users\20-0365\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 970, in __call__
    input_list)
  File "C:\Users\20-0365\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 1108, in _functional_construction_call
    inputs, input_masks, args, kwargs)
  File "C:\Users\20-0365\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 840, in _keras_tensor_symbolic_call
    return self._infer_output_signature(inputs, args, kwargs, input_masks)
  File "C:\Users\20-0365\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 880, in _infer_output_signature
    outputs = call_fn(inputs, *args, **kwargs)
  File "C:\Users\20-0365\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\keras\layers\core.py", line 1363, in _call_wrapper
    return self._call_wrapper(*args, **kwargs)
  File "C:\Users\20-0365\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\keras\layers\core.py", line 1395, in _call_wrapper
    result = self.function(*args, **kwargs)
  File "C:\Users\20-0365\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\util\dispatch.py", line 206, in wrapper
    return target(*args, **kwargs)
  File "C:\Users\20-0365\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\ops\array_ops.py", line 5352, in gather_nd
    return gen_array_ops.gather_nd(params, indices, name=name)
  File "C:\Users\20-0365\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\ops\gen_array_ops.py", line 3724, in gather_nd
    "GatherNd", params=params, indices=indices, name=name)
  File "C:\Users\20-0365\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 630, in _apply_op_helper
    param_name=input_name)
  File "C:\Users\20-0365\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 63, in _SatisfiesTypeConstraint
    ", ".join(dtypes.as_dtype(x).name for x in allowed_list)))
TypeError: Value passed to parameter 'indices' has DataType bool not in list of allowed values: int32, int64

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\20-0365\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\ops\array_ops.py", line 5350, in gather_nd
    return params.gather_nd(indices, name=name)
  File "C:\Users\20-0365\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\framework\ops.py", line 401, in __getattr__
    self.__getattribute__(name)
AttributeError: 'Tensor' object has no attribute 'gather_nd'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "openvino2tensorflow.py", line 1362, in convert
    tf_layers_dict[layer_id] = tf.gather_nd(params, indices, batch_dims=batch_dims)
  File "C:\Users\20-0365\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\util\dispatch.py", line 210, in wrapper
    result = dispatch(wrapper, args, kwargs)
  File "C:\Users\20-0365\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\util\dispatch.py", line 126, in dispatch
    result = dispatcher.handle(op, args, kwargs)
  File "C:\Users\20-0365\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\keras\layers\core.py", line 1486, in handle
    return TFOpLambda(op)(*args, **kwargs)
  File "C:\Users\20-0365\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 970, in __call__
    input_list)
  File "C:\Users\20-0365\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 1108, in _functional_construction_call
    inputs, input_masks, args, kwargs)
  File "C:\Users\20-0365\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 840, in _keras_tensor_symbolic_call
    return self._infer_output_signature(inputs, args, kwargs, input_masks)
  File "C:\Users\20-0365\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 880, in _infer_output_signature
    outputs = call_fn(inputs, *args, **kwargs)
  File "C:\Users\20-0365\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\keras\layers\core.py", line 1363, in _call_wrapper
    return self._call_wrapper(*args, **kwargs)
  File "C:\Users\20-0365\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\keras\layers\core.py", line 1395, in _call_wrapper
    result = self.function(*args, **kwargs)
  File "C:\Users\20-0365\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\util\dispatch.py", line 206, in wrapper
    return target(*args, **kwargs)
  File "C:\Users\20-0365\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\ops\array_ops.py", line 5360, in gather_nd_v2
    return gather_nd(params, indices, name=name, batch_dims=batch_dims)
  File "C:\Users\20-0365\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\util\dispatch.py", line 206, in wrapper
    return target(*args, **kwargs)
  File "C:\Users\20-0365\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\ops\array_ops.py", line 5352, in gather_nd
    return gen_array_ops.gather_nd(params, indices, name=name)
  File "C:\Users\20-0365\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\ops\gen_array_ops.py", line 3724, in gather_nd
    "GatherNd", params=params, indices=indices, name=name)
  File "C:\Users\20-0365\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 630, in _apply_op_helper
    param_name=input_name)
  File "C:\Users\20-0365\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 63, in _SatisfiesTypeConstraint
    ", ".join(dtypes.as_dtype(x).name for x in allowed_list)))
TypeError: Value passed to parameter 'indices' has DataType bool not in list of allowed values: int32, int64
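For reference, TensorFlow's GatherNd op only accepts int32/int64 indices, so a boolean tensor wired into a GatherND layer raises exactly this TypeError. Below is a minimal sketch of the two usual interpretations, using illustrative names rather than the converter's actual variables; in practice the offending layer's inputs can also be patched through the converter's weight replacement config.

import tensorflow as tf

params = tf.constant([[1.0, 2.0], [3.0, 4.0]])

# Case 1: the boolean tensor is really a mask -> use boolean_mask instead.
mask = tf.constant([True, False])
masked = tf.boolean_mask(params, mask)            # [[1.0, 2.0]]

# Case 2: the bools are genuine 0/1 indices with the wrong dtype -> cast first.
bool_indices = tf.constant([[True], [False]])
gathered = tf.gather_nd(params, tf.cast(bool_indices, tf.int32))
# -> [[3.0, 4.0], [1.0, 2.0]]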

thanks

ValueError: invalid literal for int() with base 10: 'false'

Hello @PINTO0309, thank you for sharing. When I started the conversion with openvino2tensorflow, the following error occurred and I have no idea what is causing it.
TensorFlow/Keras model building process starts ======================================
Traceback (most recent call last):
  File "/usr/local/bin/openvino2tensorflow", line 2433, in <module>
    main()
  File "/usr/local/bin/openvino2tensorflow", line 2429, in main
    yolact, weight_replacement_config, debug, debug_layer_number)
  File "/usr/local/bin/openvino2tensorflow", line 822, in convert
    antialias = False if int(data.attrib['antialias']) == 0 else True
ValueError: invalid literal for int() with base 10: 'false'
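Some IR files write boolean attributes as the strings 'true'/'false' rather than '0'/'1', which is why the int() parse fails here. A minimal sketch of a tolerant parse, assuming the attribute arrives as a string; parse_bool_attrib is an illustrative helper, not the script's actual code:

def parse_bool_attrib(value: str) -> bool:
    """Parse an IR boolean attribute written as '0'/'1' or 'true'/'false'."""
    value = value.strip().lower()
    if value in ('true', 'false'):
        return value == 'true'
    return int(value) != 0

# Stand-in for data.attrib from the parsed IR XML:
attrib = {'antialias': 'false'}
antialias = parse_bool_attrib(attrib.get('antialias', '0'))  # -> False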

OpenVINO MobileNetV2 model conversion

Hi,

Could you please help with the following sample? https://docs.openvinotoolkit.org/2021.2/omz_models_intel_person_vehicle_bike_detection_2002_description_person_vehicle_bike_detection_2002.html

Is it possible to convert?

python3 openvino2tensorflow.py --model_path=tmp/FP32/person-vehicle-bike-detection-2002.xml --output_saved_model True --output_pb True
Output:
/usr/local/lib/python3.6/dist-packages/tensorflow/python/autograph/impl/api.py:22: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
The PriorBoxClustered layer is not yet implemented.
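Since the conversion stops at the unimplemented layer, models containing PriorBoxClustered cannot be converted as-is. For context, the op generates clustered anchor boxes for every feature-map cell; below is a rough NumPy sketch of its documented behavior (clipping and the variance output omitted, all names illustrative), not the converter's code.

import numpy as np

def prior_box_clustered(fm_h, fm_w, img_h, img_w, widths, heights,
                        step=1.0, offset=0.5):
    """Normalized [xmin, ymin, xmax, ymax] priors for every feature-map cell."""
    priors = []
    for y in range(fm_h):
        for x in range(fm_w):
            cx = (x + offset) * step
            cy = (y + offset) * step
            for w, h in zip(widths, heights):
                priors.append([(cx - w / 2) / img_w, (cy - h / 2) / img_h,
                               (cx + w / 2) / img_w, (cy + h / 2) / img_h])
    return np.array(priors, dtype=np.float32)

# e.g. a 2x2 feature map on a 256x256 image with two clustered box shapes
boxes = prior_box_clustered(2, 2, 256, 256, widths=[32, 64], heights=[48, 96],
                            step=128, offset=0.5)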

ONNX to OpenVINO: can't find the mo.py file

Hi @khursani8 @PINTO0309,

I was following the blog.

I used the Docker setup provided in the repo:

$ docker pull pinto0309/openvino2tensorflow
or
$ docker build -t pinto0309/openvino2tensorflow:latest .

# If you don't need to access the GUI of the HostPC and the USB camera.
$ docker run -it --rm \
  -v `pwd`:/home/user/workdir \
  pinto0309/openvino2tensorflow:latest

When I try to convert the ONNX model to OpenVINO, I cannot find the OpenVINO installation directory to use for the {INTEL_OPENVINO_DIR} path:

$ python3 ${INTEL_OPENVINO_DIR}/deployment_tools/model_optimizer/mo.py \
  --input_model u2netp_320x320_opt.onnx \
  --input_shape [1,3,320,320] \
  --output_dir openvino/320x320/FP32 \
  --data_type FP32

Does the Docker image come with OpenVINO installed? If so, what path should be used?
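Not an authoritative answer, but for the 2020/2021 releases mo.py normally lives under the deployment_tools/model_optimizer directory of the install root. A quick check from Python (the default /opt/intel/openvino* prefix is an assumption; the exact layout inside this Docker image may differ):

import glob

# Search the default OpenVINO install prefix for the Model Optimizer entry point.
for path in glob.glob('/opt/intel/openvino*/deployment_tools/model_optimizer/mo.py'):
    print(path)

If a path turns up, {INTEL_OPENVINO_DIR} should point at the openvino_* root two levels above model_optimizer; sourcing the toolkit's setupvars.sh (under that root's bin/ directory) also sets the variable.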

Is MATLAB supported?

I would like to convert an ONNX model created with MATLAB 2020 to TensorFlow Lite format.

Does this tool support files that were converted with OpenVINO from an ONNX model generated by MATLAB?
