
Introduction


Open Neural Network Exchange (ONNX) is an open ecosystem that empowers AI developers to choose the right tools as their project evolves. ONNX provides an open source format for AI models, both deep learning and traditional ML. It defines an extensible computation graph model, as well as definitions of built-in operators and standard data types. Currently we focus on the capabilities needed for inferencing (scoring).

ONNX is widely supported and can be found in many frameworks, tools, and hardware. Enabling interoperability between different frameworks and streamlining the path from research to production helps increase the speed of innovation in the AI community. We invite the community to join us and further evolve ONNX.

Use ONNX

Learn about the ONNX spec

Programming utilities for working with ONNX Graphs
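
For a concrete taste of those utilities, here is a minimal sketch (using the onnx Python package) that builds a one-node graph with onnx.helper and validates it with the checker; the tensor names and shapes are arbitrary:

import onnx
from onnx import TensorProto, helper

# A single-node graph computing Y = X + X
node = helper.make_node("Add", inputs=["X", "X"], outputs=["Y"])
graph = helper.make_graph(
    [node],
    "tiny-add",
    inputs=[helper.make_tensor_value_info("X", TensorProto.FLOAT, [1, 2])],
    outputs=[helper.make_tensor_value_info("Y", TensorProto.FLOAT, [1, 2])],
)
model = helper.make_model(graph)
onnx.checker.check_model(model)  # raises if the model violates the ONNX spec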

Contribute

ONNX is a community project and the open governance model is described here. We encourage you to join the effort and contribute feedback, ideas, and code. You can participate in the Special Interest Groups and Working Groups to shape the future of ONNX.

Check out our contribution guide to get started.

If you think some operator should be added to the ONNX specification, please read this document.

Community meetings

The schedules of the regular meetings of the Steering Committee, the working groups, and the SIGs can be found here.

Community Meetups are held at least once a year. Content from previous community meetups is available online.

Discuss

We encourage you to open issues or use Slack (if you have not joined yet, please use this link to join the group) for more real-time discussion.

Follow Us

Stay up to date with the latest ONNX news. [Facebook] [Twitter]

Roadmap

A roadmap process takes place every year. More details can be found here.

Installation

Official Python packages

ONNX release packages are published on PyPI.

pip install onnx  # or pip install onnx[reference] for optional reference implementation dependencies

ONNX weekly packages are also published on PyPI to enable experimentation and early testing.
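
As a sketch of what the optional reference extra is for: recent onnx releases ship a pure-Python reference evaluator that can run a model without a separate runtime (the model path and input name below are placeholders):

import numpy as np
import onnx
from onnx.reference import ReferenceEvaluator

sess = ReferenceEvaluator("model.onnx")  # placeholder path to any ONNX model
x = np.random.randn(1, 3, 224, 224).astype(np.float32)
print(sess.run(None, {"input": x}))  # the input name depends on the model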

vcpkg packages

onnx is on the maintenance list of vcpkg, so you can easily use vcpkg to build and install it.

git clone https://github.com/microsoft/vcpkg.git
cd vcpkg
./bootstrap-vcpkg.bat # For PowerShell
./bootstrap-vcpkg.sh # For bash
./vcpkg install onnx

Conda packages

A binary build of ONNX is available from Conda, in conda-forge:

conda install -c conda-forge onnx

Build ONNX from Source

Before building from source, uninstall any existing versions of ONNX: pip uninstall onnx.

A C++ compiler supporting C++17 or higher is required to build ONNX from source. Users can still specify their own CMAKE_CXX_STANDARD when building ONNX.

If you don't have protobuf installed, ONNX will internally download and build protobuf as part of its own build.

Alternatively, you can manually install a specific version of the protobuf C/C++ libraries and tools before proceeding. Then, depending on how you installed protobuf, set the environment variable CMAKE_ARGS to "-DONNX_USE_PROTOBUF_SHARED_LIBS=ON" or "-DONNX_USE_PROTOBUF_SHARED_LIBS=OFF". For example, you may need to run the following command:

Linux:

export CMAKE_ARGS="-DONNX_USE_PROTOBUF_SHARED_LIBS=ON"

Windows:

set CMAKE_ARGS="-DONNX_USE_PROTOBUF_SHARED_LIBS=ON"

Whether to use ON or OFF depends on the kind of protobuf library you have: shared libraries are files ending with *.dll/*.so/*.dylib, while static libraries are files ending with *.a/*.lib. The default is OFF, so you don't need to run the commands above if you prefer to use a static protobuf library.

Windows

If you are building ONNX from source, it is recommended that you also build Protobuf locally as a static library. The version distributed with conda-forge is a DLL, but ONNX expects it to be a static library. Building protobuf locally also lets you control the version of protobuf. The tested and recommended version is 3.21.12.

The instructions in this README assume you are using Visual Studio. It is recommended that you run all the commands from a shell started from "x64 Native Tools Command Prompt for VS 2019" and keep the build system generator for cmake (e.g., cmake -G "Visual Studio 16 2019") consistent while building protobuf as well as ONNX.

You can get protobuf by running the following commands:

git clone https://github.com/protocolbuffers/protobuf.git
cd protobuf
git checkout v21.12
cd cmake
cmake -G "Visual Studio 16 2019" -A x64 -DCMAKE_INSTALL_PREFIX=<protobuf_install_dir> -Dprotobuf_MSVC_STATIC_RUNTIME=OFF -Dprotobuf_BUILD_SHARED_LIBS=OFF -Dprotobuf_BUILD_TESTS=OFF -Dprotobuf_BUILD_EXAMPLES=OFF .
msbuild protobuf.sln /m /p:Configuration=Release
msbuild INSTALL.vcxproj /p:Configuration=Release

Protobuf will then be built as a static library and installed to <protobuf_install_dir>. Please add the bin directory (which contains protoc.exe) to your PATH.

set CMAKE_PREFIX_PATH=<protobuf_install_dir>;%CMAKE_PREFIX_PATH%

Please note: if your protobuf_install_dir contains spaces, do not add quotation marks around it.

Alternative: if you don't want to change your PATH, you can set ONNX_PROTOC_EXECUTABLE instead.

set CMAKE_ARGS=-DONNX_PROTOC_EXECUTABLE=<full_path_to_protoc.exe>

Then you can build ONNX as:

git clone https://github.com/onnx/onnx.git
cd onnx
git submodule update --init --recursive
# prefer lite proto
set CMAKE_ARGS=-DONNX_USE_LITE_PROTO=ON
pip install -e . -v

Linux

First, you need to install protobuf. The minimum Protobuf compiler (protoc) version required by ONNX is 3.6.1. Please note that older protoc versions might not work with CMAKE_ARGS=-DONNX_USE_LITE_PROTO=ON.

Ubuntu 20.04 (and newer) users may choose to install protobuf via

apt-get install python3-pip python3-dev libprotobuf-dev protobuf-compiler

In this case, it is required to add -DONNX_USE_PROTOBUF_SHARED_LIBS=ON to CMAKE_ARGS in the ONNX build step.

A more general way is to build and install it from source. See the instructions below for more details.

Installing Protobuf from source

Debian/Ubuntu:

  git clone https://github.com/protocolbuffers/protobuf.git
  cd protobuf
  git checkout v21.12
  git submodule update --init --recursive
  mkdir build_source && cd build_source
  cmake ../cmake -Dprotobuf_BUILD_SHARED_LIBS=OFF -DCMAKE_INSTALL_PREFIX=/usr -DCMAKE_INSTALL_SYSCONFDIR=/etc -DCMAKE_POSITION_INDEPENDENT_CODE=ON -Dprotobuf_BUILD_TESTS=OFF -DCMAKE_BUILD_TYPE=Release
  make -j$(nproc)
  make install

CentOS/RHEL/Fedora:

  git clone https://github.com/protocolbuffers/protobuf.git
  cd protobuf
  git checkout v21.12
  git submodule update --init --recursive
  mkdir build_source && cd build_source
  cmake ../cmake  -DCMAKE_INSTALL_LIBDIR=lib64 -Dprotobuf_BUILD_SHARED_LIBS=OFF -DCMAKE_INSTALL_PREFIX=/usr -DCMAKE_INSTALL_SYSCONFDIR=/etc -DCMAKE_POSITION_INDEPENDENT_CODE=ON -Dprotobuf_BUILD_TESTS=OFF -DCMAKE_BUILD_TYPE=Release
  make -j$(nproc)
  make install

Here "-DCMAKE_POSITION_INDEPENDENT_CODE=ON" is crucial. By default static libraries are built without "-fPIC" flag, they are not position independent code. But shared libraries must be position independent code. Python C/C++ extensions(like ONNX) are shared libraries. So if a static library was not built with "-fPIC", it can't be linked to such a shared library.

Once the build succeeds, update PATH to include the protobuf paths.

Then you can build ONNX as:

git clone https://github.com/onnx/onnx.git
cd onnx
git submodule update --init --recursive
# Optional: prefer lite proto
export CMAKE_ARGS=-DONNX_USE_LITE_PROTO=ON
pip install -e . -v

Mac

export NUM_CORES=`sysctl -n hw.ncpu`
brew update
brew install autoconf && brew install automake
wget https://github.com/protocolbuffers/protobuf/releases/download/v21.12/protobuf-cpp-3.21.12.tar.gz
tar -xvf protobuf-cpp-3.21.12.tar.gz
cd protobuf-3.21.12
mkdir build_source && cd build_source
cmake ../cmake -Dprotobuf_BUILD_SHARED_LIBS=OFF -DCMAKE_POSITION_INDEPENDENT_CODE=ON -Dprotobuf_BUILD_TESTS=OFF -DCMAKE_BUILD_TYPE=Release
make -j${NUM_CORES}
make install

Once the build succeeds, update PATH to include the protobuf paths.

Then you can build ONNX as:

git clone --recursive https://github.com/onnx/onnx.git
cd onnx
# Optional: prefer lite proto
export CMAKE_ARGS=-DONNX_USE_LITE_PROTO=ON
pip install -e . -v

Verify Installation

After installation, run

python -c "import onnx"

to verify it works.
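
To go one step further than the bare import, a small sketch that prints the installed version and validates a model file (the path is a placeholder):

import onnx

print(onnx.__version__)          # confirm which build is active
model = onnx.load("model.onnx")  # placeholder path to any ONNX model
onnx.checker.check_model(model)  # raises if the model violates the ONNX spec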

Common Build Options

For the full list, refer to CMakeLists.txt.

Environment variables

  • USE_MSVC_STATIC_RUNTIME should be 1 or 0, not ON or OFF. When set to 1, onnx links statically to the runtime library. Default: USE_MSVC_STATIC_RUNTIME=0

  • DEBUG should be 0 or 1. When set to 1, onnx is built in debug mode. For debug versions of the dependencies, you need to open the CMakeLists file and append the letter d at the end of the package name lines. For example, NAMES protobuf-lite would become NAMES protobuf-lited. Default: DEBUG=0

CMake variables

  • ONNX_USE_PROTOBUF_SHARED_LIBS should be ON or OFF. It determines how onnx links to the protobuf libraries. Default: ONNX_USE_PROTOBUF_SHARED_LIBS=OFF (with USE_MSVC_STATIC_RUNTIME=0).

    • When set to ON - onnx will dynamically link to protobuf shared libs, PROTOBUF_USE_DLLS will be defined as described here, Protobuf_USE_STATIC_LIBS will be set to OFF and USE_MSVC_STATIC_RUNTIME must be 0.
    • When set to OFF - onnx will link statically to protobuf, and Protobuf_USE_STATIC_LIBS will be set to ON (to force the use of the static libraries) and USE_MSVC_STATIC_RUNTIME can be 0 or 1.
  • ONNX_USE_LITE_PROTO should be ON or OFF. When set to ON, onnx uses lite protobuf instead of full protobuf. Default: ONNX_USE_LITE_PROTO=OFF

  • ONNX_WERROR should be ON or OFF. When set to ON, warnings are treated as errors. Default: ONNX_WERROR=OFF in local builds, ON in CI and release pipelines.

Common Errors

  • Note: the import onnx command does not work from the source checkout directory; in this case you'll see ModuleNotFoundError: No module named 'onnx.onnx_cpp2py_export'. Change into another directory to fix this error.

  • If you run into any issues while building Protobuf as a static library, please ensure that shared Protobuf libraries, like libprotobuf, are not installed on your device or in the conda environment. If these shared libraries exist, either remove them to build Protobuf from source as a static library, or skip the Protobuf build from source to use the shared version directly.

  • If you run into any issues while building ONNX from source, and your error message reads, Could not find pythonXX.lib, ensure that you have consistent Python versions for common commands, such as python and pip. Clean all existing build files and rebuild ONNX again.

Testing

ONNX uses pytest as its test driver. To run the tests, you will first need to install pytest:

pip install pytest nbval

After installing pytest, use the following command to run tests.

pytest

Development

Check out the contributor guide for instructions.

License

Apache License v2.0

Code of Conduct

ONNX Open Source Code of Conduct


Issues

pytorch shufflenet export to onnx failed

It shows: "RuntimeError: ONNX export failed: Couldn't export operator aten::max_pool2d_with_indices"
but when I scan the torch/onnx/symbolic.py file, it already has the max_pool2d_with_indices function:

@parse_args('v', 'is', 'is', 'is', 'is', 'i')
def max_pool2d_with_indices(g, input, kernel_size, stride, padding, dilation, ceil_mode):
    if ceil_mode:
        return _unimplemented("max_pool2d_with_indices", "ceil_mode")
    if set(_pair(dilation)) != {1}:
        return _unimplemented("max_pool2d_with_indices", "dilation")
    if not stride:
        stride = kernel_size
    r = g.op("MaxPool", input,
             kernel_shape_i=_pair(kernel_size),
             pads_i=_pair(padding) * 2,
             strides_i=_pair(stride))
    return r, None

Can anyone help?

/home/parker/work/book_pytorch_demo/shufflenet_v2.py:57: TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
x1 = x[:, :(x.shape[1]//2), :, :]
/home/parker/work/book_pytorch_demo/shufflenet_v2.py:58: TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
x2 = x[:, (x.shape[1]//2):, :, :]
/home/parker/anaconda2/envs/demo/lib/python3.6/site-packages/torch/onnx/symbolic.py:130: UserWarning: ONNX export failed on max_pool2d_with_indices because ceil_mode not supported
warnings.warn("ONNX export failed on " + op + " because " + msg + " not supported")
/home/parker/anaconda2/envs/demo/lib/python3.6/site-packages/torch/onnx/utils.py:500: UserWarning: ONNX export failed on ATen operator reshape because torch.onnx.symbolic.reshape does not exist
.format(op_name, op_name))
Traceback (most recent call last):
File "shufflenet.py", line 24, in
export_params=True) # store the trained parameter weights inside the model file
File "/home/parker/anaconda2/envs/demo/lib/python3.6/site-packages/torch/onnx/init.py", line 22, in _export
return utils._export(*args, **kwargs)
File "/home/parker/anaconda2/envs/demo/lib/python3.6/site-packages/torch/onnx/utils.py", line 286, in _export
proto, export_map = graph.export(params, _onnx_opset_version, defer_weight_export, operator_export_type)
RuntimeError: ONNX export failed: Couldn't export operator aten::max_pool2d_with_indices

'list out of range' problem in onnx-tf import

Hello,
I have a problem running the onnx-tensorflow import.

Running:

import onnx
from onnx_tf.backend import prepare
model = onnx.load('assets/super_resolution.onnx')
tf_rep = prepare(model)

shows:

---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
<ipython-input-1-67e775846ec5> in <module>()
      2 from onnx_tf.backend import prepare
      3 model = onnx.load('assets/super_resolution.onnx')
----> 4 tf_rep = prepare(model)

~/workspace/tutorials/venv/lib/python3.6/site-packages/onnx_tf/backend.py in prepare(cls, model, device, **kwargs)
    346 
    347     predict_net = (cls.onnx_graph_to_tensorflow_net(
--> 348         model.graph, opset=model.opset_import[0].version))
    349 
    350     return TensorflowRep(predict_net)

IndexError: list index out of range

Is it a known issue?
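
For reference, the IndexError above comes from model.opset_import[0] on a model whose opset_import list is empty; a quick way to inspect this on the loaded model:

import onnx

model = onnx.load('assets/super_resolution.onnx')
print(model.ir_version)
# Models exported before opset versioning can carry an empty opset_import,
# which is exactly what makes opset_import[0] raise IndexError.
print(list(model.opset_import))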

Potentially outdated tutorial: OnnxTensorflowImport.ipynb

In step 3, cell 2, either my model loading is failing silently, or the documentation is outdated. When I run this code, my tf_rep is a different object, and has different attributes. Below is an example script followed by the terminal output.

Script:

import onnx
from onnx_tf.backend import prepare

onnx_path = 'tests/testmodel'

model = onnx.load(onnx_path + '.onnx')
tf_rep = prepare(model)
print(type(tf_rep))
print(dir(tf_rep))

Output:

$ python test_script.py
/home/johnsigmon/programming/ml-sandbox/.env/lib/python3.5/site-packages/onnx-tensorflow/onnx_tf/common/handler_helper.py:71: UserWarning: Fail to get since_version of Expand in domain `` with max_inclusive_version=7. Set to 1.
  handler.ONNX_OP, handler.DOMAIN, version))
/home/johnsigmon/programming/ml-sandbox/.env/lib/python3.5/site-packages/onnx-tensorflow/onnx_tf/common/handler_helper.py:74: UserWarning: Unknown op ConstantLike in domain `ai.onnx`.
  handler.ONNX_OP, handler.DOMAIN or "ai.onnx"))
2018-09-30 12:05:58.367732: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
<class 'onnx_tf.backend_rep.TensorflowRep'>
['__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', '_graph', '_inputs', '_outputs', '_tensor_dict', 'export_graph', 'graph', 'inputs', 'outputs', 'run', 'tensor_dict']

Cannot download squeezenet model in Caffe2

Before trying to export a Caffe2 model to ONNX using this link, we need to download the original model, but I couldn't do it.

$ python -m caffe2.python.models.download squeezenet
Downloading from https://s3.amazonaws.com/caffe2/models/squeezenet/predict_net.pb
Abort: Could not download model. [HTTP Error] 404: Not Found.

When I checked the URL directly, the following message was shown:

<Error>
<Code>NoSuchKey</Code>
<Message>The specified key does not exist.</Message>
<Key>models/squeezenet/predict_net.pb</Key>
<RequestId>D8865372FE30D4BA</RequestId>
<HostId>
6llei6m+tX6CqTPlhpjsYVHIcPj4UED5+ZNP3VMiOWQZHEmGjnG/OvxBMeuxpH26/5Bs4O7I91o=
</HostId>
</Error>

Do we need any other instructions?

onnx_tf.backend.prepare(model) in Tensorflow to ONNX tutorial error: "InvalidArgumentError: Dimensions must be equal"

Python version: 3.5.2
onnx==1.2.1
onnx-tf==1.1.2
tensorflow-gpu==1.8.0
Using tutorial as of this commit.

Following the instructions in the tutorial, I've used this script to train. Worked smoothly.
I froze the model using:

python3 /path/to/site-packages/tensorflow/python/tools/freeze_graph.py \
    --input_graph=/home/ividal/dev/onnx/tutorials/tutorials/graph.proto \
    --input_checkpoint=/home/ividal/dev/onnx/tutorials/tutorials/ckpt/model.ckpt \
    --output_graph=/tmp/frozen_graph.pb \
    --output_node_names=fc2/add \
    --input_binary=True

This produced the expected /tmp/frozen_graph.pb .
The export code in the tutorial provides the expected mnist.onnx file.

model = onnx.load('mnist.onnx') works, but:

tf_rep = prepare(model) yields:

---------------------------------------------------------------------------
InvalidArgumentError                      Traceback (most recent call last)
~/.venvs/onnx/lib/python3.5/site-packages/tensorflow/python/framework/ops.py in _create_c_op(graph, node_def, inputs, control_inputs)
   1566   try:
-> 1567     c_op = c_api.TF_FinishOperation(op_desc)
   1568   except errors.InvalidArgumentError as e:

InvalidArgumentError: Dimensions must be equal, but are 16 and 64 for 'Add_1' (op: 'Add') with input shapes: [?,64,?,16], [1,1,1,64].

From the error message, I gather the expected channels might be switched (?). However, I did not modify the tutorial code, so it shouldn't be that. Any ideas...?

Thanks!

PytorchCaffe2SuperResolution: Caffe2 result doesn't match PyTorch result

I tested this demo on my CPU machine, modifying model_zoo.load_url(model_url, map_location='cpu') to load the model on CPU, but the result of np.testing.assert_almost_equal(torch_out.data.cpu().numpy(), c2_out, decimal=3) shows:
(mismatch 99.89082872732426%)
x: array([-0.047, 0.17 , 0.84 , ..., 0.204, 0.15 , 0.142], dtype=float32)
y: array([-0.047, 1.371, 1.373, ..., 0.577, 0.355, 0.142], dtype=float32)
I don't know what happened or how to fix it. Can anyone help?

Update onnx_graph_to_caffe2_net call

The tutorial CorrectnessVerificationAndPerformanceComparison.ipynb has an old call to Caffe2Backend.onnx_graph_to_caffe2_net which references onnx_model.graph instead of just the model onnx_model, resulting in the error:

    init_net, predict_net = Caffe2Backend.onnx_graph_to_caffe2_net(onnx_model.graph, device="CPU")
  File "/opt/DL/pytorch/lib/python2.7/site-packages/caffe2/python/onnx/backend.py", line 923, in onnx_graph_to_caffe2_net
    return cls._onnx_model_to_caffe2_net(model, device=device, opset_version=opset_version, include_initializers=True)
  File "/opt/DL/pytorch/lib/python2.7/site-packages/caffe2/python/onnx/backend.py", line 881, in _onnx_model_to_caffe2_net
    onnx_model = onnx.utils.polish_model(onnx_model)
  File "/home/builder/anaconda2/lib/python2.7/site-packages/onnx/utils.py", line 18, in polish_model
    onnx.checker.check_model(model)
  File "/home/builder/anaconda2/lib/python2.7/site-packages/onnx/checker.py", line 83, in check_model
    C.check_model(model.SerializeToString())
onnx.onnx_cpp2py_export.checker.ValidationError: The model does not have an ir_version set properly.

However, after the fix, we are receiving an error during the benchmark_caffe2_model call.

    caffe2_time = benchmark_caffe2_model(init_net, predict_net)
  File "/opt/DL/pytorch/lib/python2.7/site-packages/caffe2/python/onnx/helper.py", line 92, in benchmark_caffe2_model
    ws.CreateNet(predict_net,True)
  File "/opt/DL/pytorch/lib/python2.7/site-packages/caffe2/python/onnx/workspace.py", line 63, in f
    return getattr(workspace, attr)(*args, **kwargs)
  File "/opt/DL/pytorch/lib/python2.7/site-packages/caffe2/python/workspace.py", line 156, in CreateNet
    StringifyProto(net), overwrite,
  File "/opt/DL/pytorch/lib/python2.7/site-packages/caffe2/python/workspace.py", line 182, in CallWithExceptionIntercept
    return func(*args, **kwargs)
RuntimeError: [enforce fail at operator.cc:46] blob != nullptr. op Conv: Encountered a non-existing input blob: 0

Would I be able to get some assistance in debugging this issue?
Our relevant code levels are pytorch=1.0.0, onnx=1.3.0

If you need more information please let me know!

Error with tf-onnx export in the last step, Inference using Backend

After converting the model to ONNX format using onnx-tensorflow, I get the correct output format:
input: "Placeholder"
input: "reshape/Reshape/shape"
output: "reshape/Reshape"
op_type: "Reshape"
However, when I run the last step, Inference using Backend,
I get the error information:
tensorflow.python.framework.errors_impl.InvalidArgumentError: Dimensions must be equal, but are 16 and 64 for 'Add_1' (op: 'Add') with input shapes: [?,64,?,16], [1,1,1,64].
and the details are as follows:
Traceback (most recent call last):
File "test_mnist_using_onnx.py", line 6, in <module>
tf_rep = prepare(model)
File "/root/anaconda3/lib/python3.6/site-packages/onnx_tf/backend.py", line 348, in prepare model.graph, opset=model.opset_import[0].version))
File "/root/anaconda3/lib/python3.6/site-packages/onnx_tf/backend.py", line 324, in onnx_graph_to_tensorflow_net node, tensor_dict, opset=opset)
File "/root/anaconda3/lib/python3.6/site-packages/onnx_tf/backend.py", line 407, in _onnx_node_to_tensorflow_op return method_to_call(node, input_dict)
File "/root/anaconda3/lib/python3.6/site-packages/onnx_tf/backends/backend_v6.py", line 27, in handle_add return TensorflowBackendV1.handle_add(node, input_dict)
File "/root/anaconda3/lib/python3.6/site-packages/onnx_tf/backends/backend_v1.py", line 40, in handle_add return [cls._bin_op(node, input_dict, tf.add)]
File "/root/anaconda3/lib/python3.6/site-packages/onnx_tf/backend.py", line 239, in _bin_op return op_func(x, y)
File "/root/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/gen_math_ops.py", line 297, in add "Add", x=x, y=y, name=name)
File "/root/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper op_def=op_def)
File "/root/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3392, in create_op op_def=op_def)
File "/root/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1734, in __init__ control_input_ops)
File "/root/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1570, in _create_c_op raise ValueError(str(e))
ValueError: Dimensions must be equal, but are 16 and 64 for 'Add_1' (op: 'Add') with input shapes: [?,64,?,16], [1,1,1,64].

Error loading the caffe2 model into onnx.

I followed the steps mentioned in the tutorial to convert a Detectron pretrained Caffe2 model into an ONNX model. Here's the stack trace of my error:

WARNING:root:This caffe2 python run does not have GPU support. Will run in CPU only mode.
WARNING:root:Debug message: No module named 'caffe2.python.caffe2_pybind11_state_gpu'
WARNING:caffe2.python.workspace:Original python traceback for operator `102` in network `detectron_e2e_faster_rcnn_R-50-C4_1x_35857197` in exception above (most recent call last):
Traceback (most recent call last):
  File "main.py", line 24, in <module>
    value_info,
  File "/home/ayush99/anaconda3/lib/python3.6/site-packages/caffe2/python/onnx/frontend.py", line 332, in caffe2_net_to_onnx_model
    model = make_model(cls.caffe2_net_to_onnx_graph(*args, **kwargs),
  File "/home/ayush99/anaconda3/lib/python3.6/site-packages/caffe2/python/onnx/frontend.py", line 221, in caffe2_net_to_onnx_graph
    inputs)
  File "/home/ayush99/anaconda3/lib/python3.6/site-packages/caffe2/python/onnx/helper.py", line 62, in c2_native_run_net
    ws.RunNetOnce(predict_net)
  File "/home/ayush99/anaconda3/lib/python3.6/site-packages/caffe2/python/onnx/workspace.py", line 63, in f
    return getattr(workspace, attr)(*args, **kwargs)
  File "/home/ayush99/anaconda3/lib/python3.6/site-packages/caffe2/python/workspace.py", line 199, in RunNetOnce
    StringifyProto(net),
  File "/home/ayush99/anaconda3/lib/python3.6/site-packages/caffe2/python/workspace.py", line 178, in CallWithExceptionIntercept
    return func(*args, **kwargs)
RuntimeError: [enforce fail at generate_proposals_op.cc:205] im_info_tensor.dims() == (vector<TIndex>{num_images, 3}). 0 vs 1 3 Error from operator: 
input: "rpn_cls_probs_1" input: "rpn_bbox_pred_1" input: "im_info_0" input: "anchor_0" output: "rpn_rois_1" output: "rpn_roi_probs_1" name: "" type: "GenerateProposals" arg { name: "nms_thres" f: 0.7 } arg { name: "min_size" f: 0 } arg { name: "spatial_scale" f: 0.0625 } arg { name: "correct_transform_coords" i: 1 } arg { name: "post_nms_topN" i: 1000 } arg { name: "pre_nms_topN" i: 6000 }

Any idea what could be going wrong here?

Command failed: import onnx_caffe2.frontend

When preparing to export a Caffe2 model to ONNX, I executed import onnx_caffe2.frontend in Python following the reference, and the following error appeared:

$ python
Python 2.7.11 |Anaconda custom (64-bit)| (default, Jun 15 2016, 15:21:30)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://anaconda.org
>>> import onnx_caffe2.frontend
[libprotobuf FATAL google/protobuf/stubs/common.cc:61] This program requires version 3.4.0 of the Protocol Buffer runtime library, but the installed version is 2.6.1.  Please update your library.  If you compiled the program yourself, make sure that your headers are from the same version of Protocol Buffers as your link-time library.  (Version verification failed in "google/protobuf/any.pb.cc".)
terminate called after throwing an instance of 'google::protobuf::FatalException'
  what():  This program requires version 3.4.0 of the Protocol Buffer runtime library, but the installed version is 2.6.1.  Please update your library.  If you compiled the program yourself, make sure that your headers are from the same version of Protocol Buffers as your link-time library.  (Version verification failed in "google/protobuf/any.pb.cc".)
Aborted (core dumped)

According to the message above, I've updated the Python protobuf library with the command pip install protobuf==3.4.0, but the situation is the same...

What should I do?

Segmentation fault: 11 when running onnx_to_coreml.py

Hi,
I am trying the ONNXLive tutorial.
When I try to convert the ONNX models to CoreML models by running onnx_to_coreml.py script,
I get a segmentation fault : 11 error.

Environment I used:
macOS High Sierra + Python 2.7

I also tried Anaconda Python 3.6 but got the same error.

Following is the script I am trying to run:

import sys
from onnx import onnx_pb
from onnx_coreml import convert

model_in = sys.argv[1]
model_out = sys.argv[2]

model_file = open(model_in, 'rb')
model_proto = onnx_pb.ModelProto()
model_proto.ParseFromString(model_file.read())
coreml_model = convert(model_proto, image_input_names=['0'], image_output_names=['186'])
coreml_model.save(model_out)

Please suggest a solution.
Thank you so much in advance.

Error exporting squeezenet in caffe2

I followed the tutorial for exporting a caffe2 model (squeezenet) to ONNX, but got the error below:

Traceback (most recent call last):
File "caffe2toOnnx.py", line 24, in
value_info,
File "/usr/local/lib/python2.7/dist-packages/caffe2/python/onnx/frontend.py", line 589, in caffe2_net_to_onnx_model
model = make_model(cls.caffe2_net_to_onnx_graph(*args, **kwargs),
File "/usr/local/lib/python2.7/dist-packages/caffe2/python/onnx/frontend.py", line 469, in caffe2_net_to_onnx_graph
op, shapes=shapes))
File "/usr/local/lib/python2.7/dist-packages/caffe2/python/onnx/frontend.py", line 384, in caffe2_op_to_onnx_node
nodes = translator(op_def, shapes)
File "/usr/local/lib/python2.7/dist-packages/caffe2/python/onnx/frontend.py", line 221, in _create_conv_pool_op
assert not node.op_type.startswith("Global")
AssertionError

In addition, if I try to export googlenet instead, I get the following error:

Traceback (most recent call last):
File "caffe2toOnnx.py", line 24, in
value_info,
File "/usr/local/lib/python2.7/dist-packages/caffe2/python/onnx/frontend.py", line 591, in caffe2_net_to_onnx_model
producer_name='onnx-caffe2', # producer name
File "/usr/local/lib/python2.7/dist-packages/onnx/helper.py", line 66, in make_model
setattr(model, k, v)
AttributeError: Assignment not allowed (no field "opset_imports" in protocol message object).

Any thoughts on what the problem may be?

Can a PyTorch model be saved, loaded, frozen into TF

I saw that there is a converter for exporting PyTorch models to ONNX format. There also seems to be a converter for TF to import that format. But, in the given example for importing ONNX format into TF, I was confused about what methods are available for the tf_rep object assigned in tf_rep = prepare(model). In particular, I wanted to be able to freeze the "model" object using the graph_def approach, similar to the following code. How should I change this code to use the model object?

with tf.Session(graph=tf.Graph()) as sess:
        # We import the meta graph in the current default Graph 
        saver = tf.train.import_meta_graph(input_checkpoint + '.meta', clear_devices=clear_devices)

        # We restore the weights
        saver.restore(sess, input_checkpoint)

        # We use a built-in TF helper to export variables to constants
        output_graph_def = tf.graph_util.convert_variables_to_constants(
            sess, # The session is used to retrieve the weights
            tf.get_default_graph().as_graph_def(), # The graph_def is used to retrieve the nodes 
            output_node_names.split(",") # The output node names are used to select the useful nodes
        ) 

        # Finally we serialize and dump the output graph to the filesystem
        with tf.gfile.GFile(output_graph, "wb") as f:
            f.write(output_graph_def.SerializeToString())
        print("%d ops in the final graph." % len(output_graph_def.node))``` \


Update tutorial assets

See cross-referenced issue here: MicrosoftDocs/windows-dev-docs#414

SqueezeNet input/output names don't adhere to ONNX spec ("All names MUST adhere to C identifier syntax rules") and have issues loading.

Can you please update the assets, or point to the ONNX model zoo instead of having a separate copy?

Thanks!

Exporting models from CNTK to ONNX works only for ResNet20(CIFAR10)

Hello,
I tried the tutorial Exporting models from CNTK to ONNX on the pretrained ResNet models, and it works only for ResNet20 trained on CIFAR10: not for ResNet110 trained on CIFAR10, nor for any model trained on ImageNet (including the models converted from Caffe2).

Am I doing something wrong, or does the method actually not work with different models? Here is the code and the output:

convert.py:

import cntk as C
model_path = "ResNet110_CIFAR10_CNTK.model"
z = C.Function.load(model_path)
z.save("model2.onnx", format=C.ModelFormat.ONNX)
Traceback (most recent call last):
  File "convert.py", line 5, in <module>
    z.save("model2.onnx", format=C.ModelFormat.ONNX)
  File "/opt/conda/envs/cntk/lib/python3.4/site-packages/cntk/internal/swig_helper.py", line 69, in wrapper
    result = f(*args, **kwds)
  File "/opt/conda/envs/cntk/lib/python3.4/site-packages/cntk/ops/functions.py", line 1504, in save
    return super(Function, self).save(filename, format.value)
  File "/opt/conda/envs/cntk/lib/python3.4/site-packages/cntk/cntk_py.py", line 2021, in save
    return _cntk_py.Function_save(self, *args)
RuntimeError: Node 'Combine: Output('ce', [], []), Output('errs', [], []), Output('top5Errs', [], []), Output('z', [#, ], [1000]) -> Output('ce', [], []), Output('errs', [], []), Output('top5Errs', [], []), Output('z', [#, ], [1000])': Unsupported node.

[CALL STACK]
[0x7f255d676899]                                                       + 0x857899
[0x7f255d892d61]    CNTK::CNTKToONNXHelper::  CreateNode  (std::shared_ptr<CNTK::Function> const&,  ONNXIR::Graph*,  std::unordered_map<std::shared_ptr<CNTK::Function>,ONNXIR::Node*,std::hash<std::shared_ptr<CNTK::Function>>,std::equal_to<std::shared_ptr<CNTK::Function>>,std::allocator<std::pair<std::shared_ptr<CNTK::Function> const,ONNXIR::Node*>>>&,  std::unordered_map<CNTK::Variable,ONNXIR::Node*,std::hash<CNTK::Variable>,std::equal_to<CNTK::Variable>,std::allocator<std::pair<CNTK::Variable const,ONNXIR::Node*>>>&,  std::unordered_map<CNTK::Variable,CNTK::Variable,std::hash<CNTK::Variable>,std::equal_to<CNTK::Variable>,std::allocator<std::pair<CNTK::Variable const,CNTK::Variable>>> const&) + 0x1461
[0x7f255d8930fb]    CNTK::CNTKToONNXHelper::  Copy  (std::shared_ptr<CNTK::Function> const&,  ONNXIR::Graph*) + 0x17b
[0x7f255d8933d5]    CNTK::CNTKToONNX::  CreateModel  (std::shared_ptr<CNTK::Function> const&) + 0x135
[0x7f255d8aa686]    CNTK::ONNXFormat::  Save  (std::shared_ptr<CNTK::Function> const&,  std::__cxx11::basic_string<wchar_t,std::char_traits<wchar_t>,std::allocator<wchar_t>> const&) + 0x36
[0x7f255d67b64d]    CNTK::Function::  Save  (std::__cxx11::basic_string<wchar_t,std::char_traits<wchar_t>,std::allocator<wchar_t>> const&,  CNTK::ModelFormat) + 0x3d
[0x7f255e2ffefb]                                                       + 0x1a3efb
[0x7f2580fb8d8e]    PyEval_EvalFrameEx                                 + 0x5ffe
[0x7f2580fbb371]    PyEval_EvalCodeEx                                  + 0x8b1
[0x7f2580fb9d2a]    PyEval_EvalFrameEx                                 + 0x6f9a
[0x7f2580fbb371]    PyEval_EvalCodeEx                                  + 0x8b1
[0x7f2580f19161]                                                       + 0x91161
[0x7f2580ee91f9]    PyObject_Call                                      + 0x59
[0x7f2580fb8766]    PyEval_EvalFrameEx                                 + 0x59d6
[0x7f2580fbb371]    PyEval_EvalCodeEx                                  + 0x8b1
[0x7f2580fb9d2a]    PyEval_EvalFrameEx                                 + 0x6f9a
[0x7f2580fbb371]    PyEval_EvalCodeEx                                  + 0x8b1
[0x7f2580fbb43b]    PyEval_EvalCode                                    + 0x3b
[0x7f2580fdbdb0]    PyRun_FileExFlags                                  + 0x130
[0x7f2580fdee44]    PyRun_SimpleFileExFlags                            + 0x104
[0x7f2580ff9276]    Py_Main                                            + 0xba6
[0x400acd]          main                                               + 0x15d
[0x7f25801b1830]    __libc_start_main                                  + 0xf0
[0x4008a9]

Error when following OnnxTensorflowImport.ipynb

environment
ubuntu 16.04
tensorflow-gpu 1.8.0
onnx 1.2.2
onnx-tf 1.1.2

import onnx
from onnx_tf.backend import prepare
import  numpy  as  np
from  PIL  import  Image
import os

os.environ["CUDA_VISIBLE_DEVICES"] = "5"

model = onnx.load('super_resolution.onnx')
tf_rep = prepare(model)

The log:

Traceback (most recent call last):
  File "RDAR_onnx_tf.py", line 13, in <module>
    tf_rep = prepare(model)
  File "/root/github/onnx-tensorflow/onnx_tf/backend.py", line 76, in prepare
    return cls.onnx_model_to_tensorflow_rep(model, strict)
  File "/root/github/onnx-tensorflow/onnx_tf/backend.py", line 87, in onnx_model_to_tensorflow_rep
    return cls._onnx_graph_to_tensorflow_rep(model.graph, model.opset_import, strict)
  File "/root/github/onnx-tensorflow/onnx_tf/backend.py", line 141, in _onnx_graph_to_tensorflow_rep
    onnx_node, tensor_dict, handlers, opset=opset, strict=strict)
  File "/root/github/onnx-tensorflow/onnx_tf/backend.py", line 236, in _onnx_node_to_tensorflow_op
    return handler.handle(node, tensor_dict=tensor_dict, strict=strict)
  File "/root/github/onnx-tensorflow/onnx_tf/handlers/handler.py", line 59, in handle
    return ver_handle(node, **kwargs)
  File "/root/github/onnx-tensorflow/onnx_tf/handlers/backend/add.py", line 23, in version_7
    return [cls.make_tensor_from_onnx_node(node, **kwargs)]
  File "/root/github/onnx-tensorflow/onnx_tf/handlers/backend_handler.py", line 111, in make_tensor_from_onnx_node
    return cls._run_tf_func(tf_func, inputs, attrs)
  File "/root/github/onnx-tensorflow/onnx_tf/handlers/backend_handler.py", line 180, in _run_tf_func
    **dict([(p, attrs[p]) for p in params if p in attrs]))
  File "/root/anaconda3/envs/onnx_tf18_py36/lib/python3.6/site-packages/tensorflow/python/ops/gen_math_ops.py", line 297, in add
    "Add", x=x, y=y, name=name)
  File "/root/anaconda3/envs/onnx_tf18_py36/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)
  File "/root/anaconda3/envs/onnx_tf18_py36/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3392, in create_op
    op_def=op_def)
  File "/root/anaconda3/envs/onnx_tf18_py36/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1734, in __init__
    control_input_ops)
  File "/root/anaconda3/envs/onnx_tf18_py36/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1570, in _create_c_op
    raise ValueError(str(e))
ValueError: Dimensions must be equal, but are 224 and 64 for 'Add' (op: 'Add') with input shapes: [1,64,224,224], [64].

The ONNX model was downloaded from this link:
https://s3.amazonaws.com/onnx-mxnet/examples/super_resolution.onnx

tutorials/tutorials/assets/tf-train-mnist.py not working

getting the following error when running this script

⟩ python tf-train-mnist.py
WARNING:tensorflow:From tf-train-mnist.py:125: read_data_sets (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use alternatives such as official/mnist/dataset.py from tensorflow/models.
WARNING:tensorflow:From /usr/local/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/datasets/mnist.py:260: maybe_download (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version.
Instructions for updating:
Please write your own downloading logic.
WARNING:tensorflow:From /usr/local/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/datasets/base.py:252: wrapped_fn (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version.
Instructions for updating:
Please use urllib or similar directly.
Successfully downloaded train-images-idx3-ubyte.gz 9912422 bytes.
WARNING:tensorflow:From /usr/local/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/datasets/mnist.py:262: extract_images (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use tf.data to implement this functionality.
Extracting /tmp/tensorflow/mnist/input_data/train-images-idx3-ubyte.gz
Successfully downloaded train-labels-idx1-ubyte.gz 28881 bytes.
WARNING:tensorflow:From /usr/local/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/datasets/mnist.py:267: extract_labels (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use tf.data to implement this functionality.
Extracting /tmp/tensorflow/mnist/input_data/train-labels-idx1-ubyte.gz
Successfully downloaded t10k-images-idx3-ubyte.gz 1648877 bytes.
Extracting /tmp/tensorflow/mnist/input_data/t10k-images-idx3-ubyte.gz
Successfully downloaded t10k-labels-idx1-ubyte.gz 4542 bytes.
Extracting /tmp/tensorflow/mnist/input_data/t10k-labels-idx1-ubyte.gz
WARNING:tensorflow:From /usr/local/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/datasets/mnist.py:290: __init__ (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use alternatives such as official/mnist/dataset.py from tensorflow/models.
Saving graph to: /var/folders/xg/lb58443132sb9khl0f8k7m700000gn/T/tmpaitUPD
2018-06-27 11:53:51.868247: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2018-06-27 11:53:52.129638: E tensorflow/core/common_runtime/executor.cc:660] Executor failed to create kernel. Invalid argument: Default MaxPoolingOp only supports NHWC on device type CPU
	 [[Node: pool1/MaxPool = MaxPool[T=DT_FLOAT, data_format="NCHW", ksize=[1, 1, 2, 2], padding="SAME", strides=[1, 1, 2, 2], _device="/job:localhost/replica:0/task:0/device:CPU:0"](conv1/Relu)]]
Traceback (most recent call last):
  File "tf-train-mnist.py", line 183, in <module>
    tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 126, in run
    _sys.exit(main(argv))
  File "tf-train-mnist.py", line 167, in main
    x: batch[0], y_: batch[1]})
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 710, in eval
    return _eval_using_default_session(self, feed_dict, self.graph, session)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 5180, in _eval_using_default_session
    return session.run(tensors, feed_dict)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 900, in run
    run_metadata_ptr)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1135, in _run
    feed_dict_tensor, options, run_metadata)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1316, in _do_run
    run_metadata)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1335, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Default MaxPoolingOp only supports NHWC on device type CPU
	 [[Node: pool1/MaxPool = MaxPool[T=DT_FLOAT, data_format="NCHW", ksize=[1, 1, 2, 2], padding="SAME", strides=[1, 1, 2, 2], _device="/job:localhost/replica:0/task:0/device:CPU:0"](conv1/Relu)]]

Caused by op u'pool1/MaxPool', defined at:
  File "tf-train-mnist.py", line 183, in <module>
    tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 126, in run
    _sys.exit(main(argv))
  File "tf-train-mnist.py", line 131, in main
    y_conv = deepnn(x)
  File "tf-train-mnist.py", line 69, in deepnn
    h_pool1 = max_pool_2x2(h_conv1)
  File "tf-train-mnist.py", line 108, in max_pool_2x2
    strides=[1, 1, 2, 2], padding='SAME', data_format="NCHW")
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/ops/nn_ops.py", line 2142, in max_pool
    name=name)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/ops/gen_nn_ops.py", line 4604, in max_pool
    data_format=data_format, name=name)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 3392, in create_op
    op_def=op_def)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1718, in __init__
    self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access

InvalidArgumentError (see above for traceback): Default MaxPoolingOp only supports NHWC on device type CPU
	 [[Node: pool1/MaxPool = MaxPool[T=DT_FLOAT, data_format="NCHW", ksize=[1, 1, 2, 2], padding="SAME", strides=[1, 1, 2, 2], _device="/job:localhost/replica:0/task:0/device:CPU:0"](conv1/Relu)]]

Black screen on iPhone 5s on iOS 11.4.1

I followed the tutorial but the screen turns black and shows no style transfer via camera. I'm using iOS 11.4.1 on iPhone 5s.
Please help and thanks in advance!

The CoreML tutorial can't run with squeezenet.onnx

In the OnnxCoremlImport.ipynb,
when changing the model name to assets/squeezenet.onnx, the convert code raises
IndexError: list index (2) out of range.
print(onnx.helper.printable_graph(model.graph)) shows that the conv layers' padding size is 4 while the pooling padding size is 2,
and the onnx_coreml code checks that all paddings must be of size 4.
So this is the problem.
How can I solve it?

Caffe2 to ONNX tutorial assertion fails - re.match('GivenTensor.*Fill', op.type)

I followed the instructions on Caffe2OnnxExport.ipynb
on Ubuntu 16.04 and ran the following command line:
convert-caffe2-to-onnx /home/eran/Downloads/squeezenet_caffe2/predict_net.pb --caffe2-init-net /home/eran/Downloads/squeezenet_caffe2/exec_net.pb --value-info '{"data": [1, [1, 3, 224, 224]]}' -o sqeezenet.onnx --caffe2-net-name sqeezenet

Got the following error:
Traceback (most recent call last):
  File "/usr/local/bin/convert-caffe2-to-onnx", line 11, in <module>
    load_entry_point('onnx-caffe2==1.0.0', 'console_scripts', 'convert-caffe2-to-onnx')()
  File "/usr/local/lib/python2.7/dist-packages/click/core.py", line 722, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/click/core.py", line 697, in main
    rv = self.invoke(ctx)
  File "/usr/local/lib/python2.7/dist-packages/click/core.py", line 895, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/lib/python2.7/dist-packages/click/core.py", line 535, in invoke
    return callback(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/onnx_caffe2/bin/conversion.py", line 61, in caffe2_to_onnx
    value_info=value_info)
  File "/usr/local/lib/python2.7/dist-packages/onnx_caffe2/frontend.py", line 517, in caffe2_net_to_onnx_model
    model = make_model(cls.caffe2_net_to_onnx_graph(*args, **kwargs))
  File "/usr/local/lib/python2.7/dist-packages/onnx_caffe2/frontend.py", line 357, in caffe2_net_to_onnx_graph
    cls._ssa_rewrite(predict_net, init_net, value_info)
  File "/usr/local/lib/python2.7/dist-packages/onnx_caffe2/frontend.py", line 492, in _ssa_rewrite
    assert re.match('GivenTensor.*Fill', op.type)
AssertionError

@jamesr66a
@bddppq

.pth to .onnx

I have a PyTorch model xxxx.pth, but it contains only the parameters, not the structure of the model.
Can it be translated to a .onnx file? How?
I used this code to load the model:

model=torch.load(model_pth_path)
print type(model)
dummy_input = Variable(torch.randn(1, *input_shape))
output = torch_onnx.export(model,dummy_input,model_onnx_path,verbose=True)
print("Export of torch_model.onnx complete!")

but it reports an error:

Traceback (most recent call last):
File "convert_to_onnx.py", line 31, in
output = torch_onnx.export(model,dummy_input,model_onnx_path,verbose=True)
File "/usr/local/lib/python2.7/dist-packages/torch/onnx/init.py", line 26, in export
return utils.export(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/torch/onnx/utils.py", line 94, in export
operator_export_type=operator_export_type)
File "/usr/local/lib/python2.7/dist-packages/torch/onnx/utils.py", line 226, in _export
example_outputs, propagate)
File "/usr/local/lib/python2.7/dist-packages/torch/onnx/utils.py", line 177, in _model_to_graph
graph, torch_out = _trace_and_get_graph_from_model(model, args, training)
File "/usr/local/lib/python2.7/dist-packages/torch/onnx/utils.py", line 136, in _trace_and_get_graph_from_model
orig_state_dict_keys = _unique_state_dict(model).keys()
File "/usr/local/lib/python2.7/dist-packages/torch/jit/init.py", line 81, in _unique_state_dict
state_dict = module.state_dict(keep_vars=keep_vars)
AttributeError: 'OrderedDict' object has no attribute 'state_dict'
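
For reference, the traceback says torch.load returned an OrderedDict (a bare state dict) rather than a module, so the exporter has nothing to trace. A minimal sketch of the usual fix, assuming the original model class is importable (MyModel is a hypothetical stand-in; the other names reuse the snippet above):

import torch
from torch.autograd import Variable

model = MyModel()                        # hypothetical: the original nn.Module subclass
state_dict = torch.load(model_pth_path)  # the .pth file holds only the weights
model.load_state_dict(state_dict)        # load the weights into the module
model.eval()

dummy_input = Variable(torch.randn(1, *input_shape))
torch.onnx.export(model, dummy_input, model_onnx_path, verbose=True)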

Saving a PyTorch model

I have a custom model that's not very complicated:
it's just a combination of Conv2D, MaxPool2D, BatchNorm2D and Dropout2D, plus one ConvTranspose2D.

I'm trying to save this model to file using the torch.onnx.export() function, but I get this error:

*** RuntimeError: tuple appears in op that does not forward tuples (VisitNode at ../torch/csrc/jit/passes/lower_tuples.cpp:72)

Any ideas are appreciated.

Thanks,
S

X/Y Tensor shapes

Is there a way we can print the X/Y tensor shapes on the graph? I can see that the shapes for a few constants/biases/weights are visible in the dropdown on the right side; however, for the operator blocks it would be nice to have the shapes of inputs and outputs printed.
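
Not in the viewer itself, but for reference, ONNX's shape inference can compute shapes for intermediate tensors; a minimal sketch (the model path is a placeholder):

import onnx

model = onnx.load("model.onnx")                      # placeholder path
inferred = onnx.shape_inference.infer_shapes(model)  # propagate shapes through the graph
# Inferred shapes for intermediate tensors land in graph.value_info
for vi in inferred.graph.value_info:
    dims = [d.dim_value or d.dim_param for d in vi.type.tensor_type.shape.dim]
    print(vi.name, dims)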

Function save doesn't work when Exporting models from CNTK to ONNX

I'm using CNTK 2.5.1 (Python and C#) and neither of them accepts ModelFormat in Function.save.

I'm getting the error:

Traceback (most recent call last):
File "cntkexport.py", line 7, in
z.save("frcn_svm.onnx", format=C.ModelFormat.ONNX)
AttributeError: module 'cntk' has no attribute 'ModelFormat'

RuntimeError: ONNX export failed: Couldn't export operator aten::max_pool2d

env: ubuntu16 and the latest version of onnx
code: PytorchOnnxExport.py
dummy_input = Variable(torch.randn(1, 3, 224, 224))
model = torchvision.models.squeezenet1_1(pretrained=True)
torch.onnx.export(model, dummy_input, "squeezenet.onnx", export_params=True)

Error:
Traceback (most recent call last):
File "PytorchOnnxExport.py", line 20, in
torch.onnx.export(model, dummy_input, "squeezenet.onnx", export_params=True)
File "/usr/local/lib/python2.7/dist-packages/torch/onnx/init.py", line 12, in export
return utils.export(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/torch/onnx/utils.py", line 83, in export
_export(model, args, f, export_params, verbose, training, input_names, output_names)
File "/usr/local/lib/python2.7/dist-packages/torch/onnx/utils.py", line 150, in _export
proto = trace.export(list(_unique_state_dict(model).values()), _onnx_opset_version)
RuntimeError: ONNX export failed: Couldn't export operator aten::max_pool2d

Why? Is this operator not supported?

ValidationError: Input index 3 must be set to consumed for operator BatchNormalization

I have downloaded the dpn.onnx model (dpn stands for Dual Path Network), which was converted from a pytorch model trained on a server (my local computer doesn't have a GPU; I downloaded it from the server because the server was not allowed to connect to the Internet, so it had no tensorflow installed, contrary to my local computer). However, I found that when I run:
'''''''''''''''''''''''
import onnx
from onnx_tf.backend import prepare
model = onnx.load('dpn.onnx')
tf_rep = prepare(model)
'''''''''''''''''''''''
an issue happened: the ValidationError quoted in the title (the original report shows it only as a screenshot).
Can anybody tell me why? Maybe because of the different machine?

WARNING:root:This caffe2 python run does not have GPU support. Will run in CPU only mode.

When I run the following example in 'OnnxCaffe2Import.ipynb', the following warning and error appear. I installed the CPU version of caffe2 in an anaconda environment. What should I do? Thanks!!

import onnx
import caffe2.python.onnx.backend
# Prepare the inputs, here we use numpy to generate some random inputs for demo purpose
import numpy as np
img = np.random.randn(1, 3, 224, 224).astype(np.float32)
# Load the ONNX model
model = onnx.load('assets/squeezenet.onnx')
# Run the ONNX model with Caffe2
outputs = caffe2.python.onnx.backend.run_model(model, [img])

WARNING:root:This caffe2 python run does not have GPU support. Will run in CPU only mode.
WARNING:root:Debug message: libcurand.so.8.0: cannot open shared object file: No such file or directory
Segmentation fault (core dumped)

encounter error when run OnnxCoremlImport.ipynb

I ran the code step by step as in OnnxCoremlImport.ipynb; however, it errors when executing
cml = onnx_coreml.convert(model). The error shows list index out of range at line 347 in onnx_coreml/_transformers.py. Does onnx_coreml not support PixelShuffle?

PyTorch to Caffe2 and Mobile using ONNX Tutorial Segmentation Fault

Hello,

I am trying to run the Transferring a model from PyTorch to Caffe2 and Mobile using ONNX tutorial. However, I get a segmentation fault error.

I think the segmentation error is triggered by the second part of the tutorial, where the ONNX model is loaded into Caffe2. While searching for what might cause this error, I noticed that this tutorial has 2 different versions: one in the Onnx Github Tutorial and the other on the PyTorch website. The onnx-to-caffe2 imports for the two tutorials were different for the second part, along with the implementation. One uses import caffe2.python.onnx.backend and the other import onnx_caffe2.backend. I tried both tutorials but I get the same segmentation fault in both. Also, when I comment out the onnx-to-caffe2 part and the related import lines, the first part, PyTorch to ONNX, seems to work without any error.

Below, I describe how I created my virtual env and the results from gdb after getting the Segmentation Fault:

Dependencies:
cuda/8.0
cudnn/v6.0
opencv/3.4.1
nccl/2.0.5
caffe2/2018-03-02

#Virtual env creation:
--mem 6G virtualenv --system-site-packages ~/workspace/p_c2_onnx/pc2_onnx 
 source ~/workspace/p_c2_onnx/pc2_onnx/bin/activate

#install and check caffe2
pip install -r /usr/local/opt/caffe2-2018-03-02/requirements.txt 
srun --mem 2G --gres=gpu:1 python -c 'from caffe2.python import core'

#install pytorch and  torchvision
pip install http://download.pytorch.org/whl/cu80/torch-0.3.1-cp27-cp27mu-linux_x86_64.whl 
pip install torchvision 

#install onnx and check
--mem 6G pip install onnx
--mem 6G --gres=gpu:1 python -c "import onnx"

# (The below is used only for the tutorial version on the PyTorch website)
# Install onnx-caffe2
pip install onnx-caffe2

To run the code I used the following command (with 6 GB RAM and 1 GPU with 6 GB memory):
srun --mem 6GB --gres=gpu:1,gmem:6G python SuperResolution.py

Also, I upgraded the numpy and pyyaml versions with:

pip install numpy --upgrade
pip install pyyaml --upgrade

Otherwise, I get:
error: RuntimeError: module compiled against API version 0xa but this version of numpy is 0x9

After upgrading the numpy version, when I run the code I get the Segmentation Fault:

Exported model has been executed on Caffe2 backend, and the result looks good!
srun: error: c6: task 0: Segmentation fault

I tried to run the code in gdb as well to see where exactly the fault occurs:

(gdb) run SuperResolution_onnxgit.py 
Starting program: /imatge/gcamli/workspace/p_c2_onnx/onnx_2/bin/python SuperResolution_onnxgit.py
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
[New Thread 0x7ffff4033700 (LWP 32594)]
[New Thread 0x7ffff3832700 (LWP 32595)]
[... about 30 more threads created; all of the above subsequently exited ...]
[New Thread 0x7fffab015700 (LWP 32668)]
[New Thread 0x7fffad816700 (LWP 32671)]
[New Thread 0x7fffb0017700 (LWP 32672)]
[New Thread 0x7fffb2818700 (LWP 32682)]
Exported model has been executed on Caffe2 backend, and the result looks good!

Program received signal SIGSEGV, Segmentation fault.
0x00000000004a6fb0 in ?? ()
(gdb) where
#0  0x00000000004a6fb0 in ?? ()
#1  0x000000000041a703 in ?? ()
#2  0x00000000004a6047 in ?? ()
#3  0x0000000000515d00 in PyGC_Collect ()
#4  0x0000000000513c1f in Py_Finalize ()
#5  0x0000000000498099 in Py_Main ()
#6  0x00007ffff6f12b45 in __libc_start_main (main=0x497c60 <main>, 
    argc=2, argv=0x7fffffffc258, init=<optimized out>, 
    fini=<optimized out>, rtld_fini=<optimized out>, 
    stack_end=0x7fffffffc248) at libc-start.c:287
#7  0x0000000000497b8b in _start ()
(gdb) 

In both tutorials, the segmentation fault occurs in the same place. I don't know how to fix this error; I didn't change anything in the code, and I am not sure why I am getting it.

I would really appreciate it if you could help me fix it.

Note: I also opened an issue on the PyTorch repo to cross-reference this one, since I get the same error with the PyTorch tutorial as well.

RuntimeError: ONNX export failed: Couldn't export operator aten::adaptive_avg_pool2d

Hi, I've tried to export a PyTorch model to ONNX as follows:

import torch
from torch.autograd import Variable

# model is the network to export, defined earlier in the script.
input_shape = (3, 224, 224)
model_onnx_path = "torch_model.onnx"
dummy_input = Variable(torch.randn(1, *input_shape).cuda())
output = torch.onnx.export(model, dummy_input, model_onnx_path, verbose=False)

And I got this error:
UserWarning: ONNX export failed on ATen operator adaptive_avg_pool2d because torch.onnx.symbolic.adaptive_avg_pool2d does not exist
RuntimeError: ONNX export failed: Couldn't export operator aten::adaptive_avg_pool2d

How can I fix it? Thank you.
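
At the time of this report, torch.onnx had no symbolic for adaptive_avg_pool2d, so any model ending in an adaptive pool failed at export. A common workaround, sketched here under the assumption of a fixed 224x224 input (torchvision's resnet18 stands in for the unnamed model in the report), is to swap the adaptive pool for a plain AvgPool2d, which does have a symbolic:

import torch
import torch.nn as nn
import torchvision

# With a fixed input size the final feature map is 7x7, so pooling it down
# to 1x1 adaptively is equivalent to a plain 7x7 average pool.
model = torchvision.models.resnet18(pretrained=True)
model.avgpool = nn.AvgPool2d(kernel_size=7)  # replaces AdaptiveAvgPool2d((1, 1))
model.eval()

dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy_input, "torch_model.onnx", verbose=False)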

unrecognized arguments: --export_onnx ./saved_models/mosaic.onnx

Hi, @ezyang , @kant , @msakai , @zdevito , @Yangqing

I hit this error when converting a .pth model to .onnx:

(mun) kerb:fast_neural_style mun$
(mun) kerb:fast_neural_style mun$ python ./neural_style/neural_style.py eval --content-image dummy.jpg --output-image dummy-out.jpg --model ./saved_models/mosaic.pth --cuda 0 --export_onnx ./saved_models/mosaic.onnx
usage: neural_style.py [-h] {train,eval} ...
neural_style.py: error: unrecognized arguments: --export_onnx ./saved_models/mosaic.onnx
(mun) kerb:fast_neural_style mun$

What am I doing wrong?
Env: Mac mini, macOS Mojave, Python 2.7

Thanks.

from @bemoregt.
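
The message comes straight from argparse: the eval subcommand of the local copy of neural_style.py evidently does not declare an --export_onnx option, so if the checkout predates that flag, updating it may help. A minimal sketch of the mechanism, with hypothetical options:

import argparse

# A subparser rejects any flag it never declared, producing exactly the
# "unrecognized arguments" error quoted above.
parser = argparse.ArgumentParser(prog='neural_style.py')
subparsers = parser.add_subparsers()
eval_parser = subparsers.add_parser('eval')
eval_parser.add_argument('--content-image')  # declared, so it is accepted

parser.parse_args(['eval', '--export_onnx', 'mosaic.onnx'])  # exits with the error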

Running ONNX models from Caffe2 doesn't work for any example models

I attempted to execute the steps shown in the Caffe2 tutorial entitled Run an ONNX model with Caffe2: https://render.githubusercontent.com/view/ipynb?commit=c04c7d721da9b1fa1dd0fe5d5551086c61877586&enc_url=68747470733a2f2f7261772e67697468756275736572636f6e74656e742e636f6d2f6f6e6e782f7475746f7269616c732f633034633764373231646139623166613164643066653564353535313038366336313837373538362f7475746f7269616c732f4f6e6e78436166666532496d706f72742e6970796e62&nwo=onnx%2Ftutorials&path=tutorials%2FOnnxCaffe2Import.ipynb&repository_id=110873948&repository_type=Repository#Run-an-ONNX-model-with-Caffe2

With the squeezenet model it throws the error:

warning: onnx optimizer is unable to parse input model. (The IR version of the ONNX model may be too old.)

I also tried every model from the example models repo (https://github.com/onnx/models), and none of them worked. Most of them were trained in CNTK, and there appears to be a range of issues with running CNTK-trained models from Caffe2.

Caffe2 version is 0.8.dev.2018.04.16.
OS is Debian Stretch.

I understand that this is more of a Caffe2 issue to file; I am also filing it here to let you, the ONNX developers, know as well.
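
When a backend rejects a model because its IR or opset version is out of range, onnx's version converter can sometimes rewrite it against a newer opset first. A hedged sketch, assuming an onnx release that ships onnx.version_converter (the paths and target opset are illustrative):

import onnx
from onnx import version_converter

# Load the old model, rewrite it against a newer opset, and save the
# upgraded copy for the backend to consume.
model = onnx.load('assets/squeezenet.onnx')
upgraded = version_converter.convert_version(model, 8)
onnx.save(upgraded, 'assets/squeezenet-opset8.onnx')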

Process finished with exit code 139 (interrupted by signal 11: SIGSEGV)

Looking at the style-transfer model, we see that the input is named '0' and the output is named '186'. These are just numeric IDs assigned by PyTorch; we need to mark them as images. How do I mark them as images when the names are numbers? I wrote it like this:
coreml_model = convert(model_proto, image_input_names=['0'], image_output_names=['156'])
But it fails with the error in the title.
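
One detail worth checking (a hedged observation based only on the snippet above): the prose reports the output name as '186' while the call passes '156'. A sketch that reads the names directly off the loaded proto instead of hard-coding them:

from onnx_coreml import convert

# model_proto is the loaded ONNX model from earlier in the script. Older
# exports also list weight initializers under graph.input, so filter them
# out before treating every remaining input as an image.
initializers = {init.name for init in model_proto.graph.initializer}
input_names = [i.name for i in model_proto.graph.input if i.name not in initializers]
output_names = [o.name for o in model_proto.graph.output]

coreml_model = convert(model_proto,
                       image_input_names=input_names,
                       image_output_names=output_names)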

Segmentation fault while trying to run "torch.onnx.export"

I was running into a segmentation fault while trying to export an ONNX model. The components we use are Python 3.5.4, torch 0.4.1, and Ubuntu 14.04.

>>> from torch.autograd import Variable
>>> import torch.onnx
>>> import torchvision
>>> 
>>> # Standard ImageNet input - 3 channels, 224x224,
... # values don't matter as we care about network structure.
... # But they can also be real inputs.
... dummy_input = Variable(torch.randn(1, 3, 224, 224))
>>> # Obtain your model, it can be also constructed in your script explicitly
... model = torchvision.models.alexnet(pretrained=True)
>>> # Invoke export
... 
>>> torch.onnx.export(model, dummy_input, "alexnet.onnx")
Segmentation fault (core dumped)
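
Because the crash kills the interpreter before any Python traceback prints, enabling faulthandler first at least reports the frame where it dies. A small diagnostic sketch (the same export as above, with model.eval() added on the assumption that export should run in inference mode):

import faulthandler
faulthandler.enable()  # dump a Python traceback on SIGSEGV instead of dying silently

import torch
import torchvision

dummy_input = torch.randn(1, 3, 224, 224)
model = torchvision.models.alexnet(pretrained=True)
model.eval()

torch.onnx.export(model, dummy_input, "alexnet.onnx")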

Is ResNet50_ImageNet_CNTK model supported for onnx format?

Is ResNet50_ImageNet_CNTK model supported for onnx format?

I couldn't export it:
"RuntimeError: Node 'Combine: Output('ce', [], []), Output('errs', [], []), Output('top5Errs', [], []), Output('z', [#, ], [1000]) -> Output('ce', [], []), Output('errs', [], []), Output('top5Errs', [], []), Output('z', [#, ], [1000])': Unsupported node."

If not, when do you plan to support it?

Thanks.
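
The error names a Combine node bundling the training outputs (ce, errs, top5Errs) with the scoring output z, and ONNX has no counterpart for such a node. A hedged sketch of one possible workaround, assuming the CNTK model file is available and that 'z' is the scoring output as the error text suggests: select just that output before saving in ONNX format.

import cntk as C

# Load the full training graph, keep only the scoring output 'z', and
# export that sub-graph; the unsupported Combine node is left behind.
model = C.Function.load('ResNet50_ImageNet_CNTK.model')
z = C.as_composite(model.find_by_name('z'))
z.save('ResNet50_ImageNet.onnx', format=C.ModelFormat.ONNX)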
