
Comments (32)

ibaiGorordo avatar ibaiGorordo commented on June 8, 2024 5

I was able to fix the implementation issues and convert the model to ONNX: https://github.com/ibaiGorordo/ONNX-CREStereo-Depth-Estimation. From there it should be easier to convert to other platforms. Here is a video with the output in ONNX: https://youtu.be/ciX7ILgpJtw

crestereo

@sunmooncode regarding the low speed, if you use a low resolution (320x240) and only do one pass without flow_init, you can get decent speed with good quality.
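For anyone who wants to try that configuration, here is a minimal onnxruntime sketch of the single-pass, low-resolution setup. The file name is hypothetical, and the left/right/output tensor names follow the export snippet PINTO0309 posts later in this thread; check your own exported model (e.g. with Netron) before relying on them.

import cv2
import numpy as np
import onnxruntime as ort

H, W = 240, 320  # low resolution, single pass without flow_init

# Hypothetical file name; use whatever init-style model you exported.
session = ort.InferenceSession("crestereo_init_iter10_240x320.onnx",
                               providers=["CPUExecutionProvider"])

def preprocess(path):
    # Resize to the model resolution and convert HWC uint8 -> NCHW float32.
    # Whether the graph expects [0, 255] or pre-normalized input depends on
    # whether the 2*(x/255)-1 scaling was kept inside the exported model.
    img = cv2.resize(cv2.imread(path), (W, H))
    return img.transpose(2, 0, 1)[None].astype(np.float32)

left, right = preprocess("left.png"), preprocess("right.png")
flow = session.run(None, {"left": left, "right": right})[0]
disparity = flow[0, 0]  # horizontal flow component = disparity
print(disparity.shape)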


PINTO0309 avatar PINTO0309 commented on June 8, 2024 4

@sunmooncode
I have slightly modified the inference code in the repository provided by ibaiGorordo so that the logic is ONNX-exportable.
https://github.com/ibaiGorordo/CREStereo-Pytorch

It is important to note that it takes 30 minutes to an hour to output a single onnx file.

    # Assumes the CREStereo-Pytorch layout where the model class is exposed as
    # nets.Model; the checkpoint path is hypothetical, adjust it to your setup.
    import torch
    from nets import Model

    model_path = "crestereo_eth3d.pth"

    device = 'cpu'
    model = Model(max_disp=256, mixed_precision=False, test_mode=True)
    model.load_state_dict(torch.load(model_path), strict=True)
    model.to(device)
    model.eval()

    import onnx
    from onnxsim import simplify
    RESOLUTION = [
        [240//2,320//2],
        [320//2,480//2],
        [360//2,640//2],
        [480//2,640//2],
        [720//2,1280//2],
        #[240,320],
        #[320,480],
        #[360,640],
        #[480,640],
        #[720,1280],
    ]
    ITER = 20
    MODE = 'init'  # 'init' exports the model without flow_init; 'next' exports the variant that also takes flow_init
    MODEL = f'crestereo_{MODE}_iter{ITER}'

    for H, W in RESOLUTION:

        if MODE == 'init':
            onnx_file = f"{MODEL}_{H}x{W}.onnx"
            x1 = torch.randn(1, 3, H, W).cpu()
            x2 = torch.randn(1, 3, H, W).cpu()
            torch.onnx.export(
                model,
                args=(x1,x2),
                f=onnx_file,
                opset_version=12,
                input_names = ['left','right'],
                output_names=['output'],
            )
            model_onnx1 = onnx.load(onnx_file)
            model_onnx1 = onnx.shape_inference.infer_shapes(model_onnx1)
            onnx.save(model_onnx1, onnx_file)

            model_onnx2 = onnx.load(onnx_file)
            model_simp, check = simplify(model_onnx2)
            onnx.save(model_simp, onnx_file)

        elif MODE == 'next':
            onnx_file = f"{MODEL}_{H}x{W}.onnx"
            x1 = torch.randn(1, 3, H, W).cpu()
            x2 = torch.randn(1, 3, H, W).cpu()
            x3 = torch.randn(1, 2, H//2, W//2).cpu()
            torch.onnx.export(
                model,
                args=(x1,x2,x3),
                f=onnx_file,
                opset_version=12,
                input_names = ['left','right','flow_init'],
                output_names=['output'],
            )
            model_onnx1 = onnx.load(onnx_file)
            model_onnx1 = onnx.shape_inference.infer_shapes(model_onnx1)
            onnx.save(model_onnx1, onnx_file)

            model_onnx2 = onnx.load(onnx_file)
            model_simp, check = simplify(model_onnx2)
            onnx.save(model_simp, onnx_file)

    import sys
    sys.exit(0)

Next, merge the exported graphs with this script:
https://github.com/PINTO0309/PINTO_model_zoo/blob/main/284_CREStereo/onnx_merge.py

Alternatively, merging the onnx files into a single graph with this tool is also feasible:
https://github.com/PINTO0309/snc4onnx
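For illustration only, the same kind of merge can be sketched with the standard onnx.compose API. The file names and the io_map pairing (the init model's output feeding the next model's flow_init input) are assumptions, and the real onnx_merge.py / snc4onnx handle the details properly.

import onnx
from onnx import compose

# Hypothetical file names: init exported at half resolution, next at full resolution.
init_model = onnx.load("crestereo_init_iter20_240x320.onnx")
next_model = onnx.load("crestereo_next_iter20_480x640.onnx")

# Prefix the first graph so tensor/node names do not collide when merging.
init_model = compose.add_prefix(init_model, prefix="init_")

# Stitch the init output into the next model's flow_init input.
combined = compose.merge_models(
    init_model,
    next_model,
    io_map=[("init_output", "flow_init")],
)
onnx.save(combined, "crestereo_combined_iter20_480x640.onnx")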


PINTO0309 avatar PINTO0309 commented on June 8, 2024 3

I have committed a large number of ONNX models for various resolution and ITER combinations. I imagine the ITER10 version is roughly twice as fast as ITER20.

https://github.com/PINTO0309/PINTO_model_zoo/tree/main/284_CREStereo

  • init - ITER 2,5,10,20
    This model generates flow_init.

  • next - ITER 2,5,10,20
    This model takes three inputs: the flow_init generated by the init model plus the LEFT and RIGHT images (see the two-pass sketch after this list).

  • combined - ITER 2,5,10,20
    The two inference stages, init and next, merged into a single onnx.
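To make the init/next pairing concrete, here is a hedged onnxruntime sketch of the two-pass flow (mirroring the original test.py: a half-resolution pass without flow_init, then a full-resolution pass that consumes it). The file names are hypothetical and the tensor names follow the export code above; the exact resolutions must match how your models were exported.

import cv2
import numpy as np
import onnxruntime as ort

H, W = 480, 640  # resolution of the "next" model; "init" runs at half of this

def load(path, h, w):
    # Resize to the pass resolution and convert to NCHW float32.
    img = cv2.resize(cv2.imread(path), (w, h))
    return img.transpose(2, 0, 1)[None].astype(np.float32)

init_sess = ort.InferenceSession("crestereo_init_iter10_240x320.onnx", providers=["CPUExecutionProvider"])
next_sess = ort.InferenceSession("crestereo_next_iter10_480x640.onnx", providers=["CPUExecutionProvider"])

# Pass 1: half-resolution images, no flow_init.
flow_init = init_sess.run(None, {"left": load("left.png", H // 2, W // 2),
                                 "right": load("right.png", H // 2, W // 2)})[0]

# Pass 2: full-resolution images plus the coarse flow from pass 1.
flow = next_sess.run(None, {"left": load("left.png", H, W),
                            "right": load("right.png", H, W),
                            "flow_init": flow_init})[0]

disparity = flow[0, 0]  # x-component of the flow is the disparity estimate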

image


ibaiGorordo avatar ibaiGorordo commented on June 8, 2024 1

@sunmooncode So far I have been able to convert it to a traced module using the following code:

import numpy as np
import megengine.functional as F
from megengine import jit
import megengine.traced_module as tm
import megengine as mge
from nets import Model

print(mge.__version__)

data1 = mge.tensor(np.random.random([1, 3, 480, 640]).astype(np.float32))
data2 = mge.tensor(np.random.random([1, 3, 480, 640]).astype(np.float32))

# Load the official MegEngine checkpoint and build the model in test mode
pretrained_dict = mge.load("crestereo_eth3d.mge")
model = Model(max_disp=256, mixed_precision=False, test_mode=True)
model.load_state_dict(pretrained_dict["state_dict"], strict=True)
model.freeze_bn()

output = model(data1,data2)
print(output)

traced_model = tm.trace_module(model, data1, data2)
traced_model.eval()

mge.save(traced_model, "traced_model.tm")

Running it will give you an error, but you can fix it by commenting out this line:

assert right_crop.shape == left_feature.shape

My idea is to see if the tracedmodule_to_onnx function in mgeconvert will work. However, when I try to convert the model, I get the following error:

tm_to_onnx.py", line 51, in tracedmodule_to_onnx
    traced_module = mge.load(traced_module)
  File "/usr/local/lib/python3.7/dist-packages/megengine/serialization.py", line 107, in load
    return load(fin, map_location=map_location, pickle_module=pickle_module)
  File "/usr/local/lib/python3.7/dist-packages/megengine/serialization.py", line 112, in load
    return pickle_module.load(f)
ValueError: unsupported pickle protocol: 5

For some reason, loading the model does not work on that machine, but on another machine I have, I am able to do traced_module = mge.load(traced_module) without any issue. However, on that machine I am not able to install mgeconvert. So I will keep trying tomorrow.


sunmooncode avatar sunmooncode commented on June 8, 2024 1

@ibaiGorordo thanks for your help!
When I use mgeconvert's convert tracedmodule_to_onnx -i tracedmodule.tm -o out.onnx I get the following error:

Traceback (most recent call last):
  File "/home/zt/.local/bin/convert", line 525, in <module>
    main()
  File "/home/zt/.local/bin/convert", line 518, in main
    args.func(args)
  File "/home/zt/.local/bin/convert", line 280, in convert_func
    opset=args.opset,
  File "/home/zt/.local/lib/python3.6/site-packages/mgeconvert/converters/tm_to_onnx.py", line 61, in tracedmodule_to_onnx
    irgraph = tm_resolver.resolve()
  File "/home/zt/.local/lib/python3.6/site-packages/mgeconvert/frontend/tm_to_ir/tm_frontend.py", line 71, in resolve
    self.get_all_oprs()
  File "/home/zt/.local/lib/python3.6/site-packages/mgeconvert/frontend/tm_to_ir/tm_frontend.py", line 103, in get_all_oprs
    assert op_gen_cls, "METHOD {} is not supported.".format(expr.method)
AssertionError: METHOD __rmul__ is not supported.

The reason is that many of CREStereo's ops do not have corresponding operator support in mgeconvert.

In addition:
When I use the script to evaluate the real-time performance of CREStereo on my GTX 1050 Ti, the result is less than 1 FPS. Under the same conditions, RAFT-Stereo reaches nearly 3 FPS, and the realtime model more than 6 FPS. So CREStereo is too big for me.


PINTO0309 avatar PINTO0309 commented on June 8, 2024 1

I get a segmentation fault even if I build MegEngine and mgeconvert for my environment.
Installing MgeConvert / CREStereo - Zenn - my article

By the way, I have already confirmed that CREStereo works in the CPU environment on which it was built. I have given up on exporting to ONNX because MegEngine will segfault no matter what workaround I try.
image

${HOME}/.local/bin/convert mge_to_onnx \
-i crestereo_eth3d.mge \
-o crestereo_eth3d.onnx \
--opset 11

/home/user/.local/lib/python3.8/site-packages/megengine/core/tensor/megbrain_graph.py:508: ResourceWarning: unclosed file <_io.BufferedReader name='crestereo_eth3d.mge'>
  buf = open(fpath, "rb").read()
ResourceWarning: Enable tracemalloc to get the object allocation traceback
Traceback (most recent call last):
  File "/home/user/.local/bin/convert", line 525, in <module>
    main()
  File "/home/user/.local/bin/convert", line 518, in main
    args.func(args)
  File "/home/user/.local/bin/convert", line 283, in convert_func
    converter_map[target](
  File "/home/user/.local/lib/python3.8/site-packages/mgeconvert/converters/mge_to_onnx.py", line 50, in mge_to_onnx
    irgraph = MGE_FrontEnd(mge_fpath, outspec=outspec).resolve()
  File "/home/user/.local/lib/python3.8/site-packages/mgeconvert/frontend/mge_to_ir/mge_frontend.py", line 21, in __init__
    _, outputs = load_comp_graph_from_file(model_path)
  File "/home/user/.local/lib/python3.8/site-packages/mgeconvert/frontend/mge_to_ir/mge_utils.py", line 106, in load_comp_graph_from_file
    ret = G.load_graph(path)
  File "/home/user/.local/lib/python3.8/site-packages/megengine/core/tensor/megbrain_graph.py", line 511, in load_graph
    cg, metadata = _imperative_rt.load_graph(buf, output_vars_map, output_vars_list)
RuntimeError: access invalid Maybe value

backtrace:
/home/user/.local/lib/python3.8/site-packages/megengine/core/lib/libmegengine_shared.so(_ZN3mgb13MegBrainErrorC1ERKSs+0x4a) [0x7f3b39dfe1fa]
/home/user/.local/lib/python3.8/site-packages/megengine/core/lib/libmegengine_shared.so(_ZN3mgb17metahelper_detail27on_maybe_invalid_val_accessEv+0x34) [0x7f3b39f060f4]
/home/user/.local/lib/python3.8/site-packages/megengine/core/_imperative_rt.cpython-38-x86_64-linux-gnu.so(+0x14c605) [0x7f3b94873605]
/home/user/.local/lib/python3.8/site-packages/megengine/core/_imperative_rt.cpython-38-x86_64-linux-gnu.so(+0x14c823) [0x7f3b94873823]
/home/user/.local/lib/python3.8/site-packages/megengine/core/_imperative_rt.cpython-38-x86_64-linux-gnu.so(+0x11d62e) [0x7f3b9484462e]
/usr/bin/python3(PyCFunction_Call+0x59) [0x5f5e79]
/usr/bin/python3(_PyObject_MakeTpCall+0x296) [0x5f6a46]
/usr/bin/python3(_PyEval_EvalFrameDefault+0x5d3f) [0x570a1f]
/usr/bin/python3(_PyFunction_Vectorcall+0x1b6) [0x5f6226]
/usr/bin/python3(_PyEval_EvalFrameDefault+0x5706) [0x5703e6]


PINTO0309 avatar PINTO0309 commented on June 8, 2024 1

@ibaiGorordo

$ pip install pickle5

Then patch /usr/local/lib/python3.7/dist-packages/megengine/serialization.py at line 112:

import pickle5

return pickle5.load(f)
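A less invasive variant of the same fix, suggested by the traceback above (which shows that megengine.load forwards a pickle_module argument), would be to pass pickle5 at the call site instead of patching site-packages; untested sketch:

import pickle5
import megengine as mge

# Let megengine use pickle5 for deserialization instead of editing
# serialization.py; assumes the .tm loader goes through the same code path.
traced_module = mge.load("traced_model.tm", pickle_module=pickle5)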


ibaiGorordo avatar ibaiGorordo commented on June 8, 2024 1

Commenting out these two lines fixes the __rmul__ error:

image1 = 2 * (image1 / 255.0) - 1.0
image2 = 2 * (image2 / 255.0) - 1.0
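If that scaling is removed from the model's forward, the same normalization presumably has to be applied to the inputs before calling the model; a minimal sketch:

import numpy as np
import megengine as mge

# Normalize 8-bit images to [-1, 1] outside the model, mirroring the two
# commented-out lines above.
left_u8 = np.random.randint(0, 256, (1, 3, 480, 640)).astype(np.float32)
data1 = mge.tensor(2.0 * (left_u8 / 255.0) - 1.0)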

However, next I get the following error:
AssertionError: Module <class 'megengine.module.normalization.InstanceNorm'> is not supported.


sunmooncode avatar sunmooncode commented on June 8, 2024 1

That is because mgeconvert does not support this operation. I have seen that many ops in the network are not supported, so it probably cannot be converted.


ibaiGorordo avatar ibaiGorordo commented on June 8, 2024 1

Since it seems to be hard to convert, I have tried to implement the model in PyTorch. The model seems to run normally, but since I don't have the weights I cannot fully test it.

My hope is that we can somehow translate the weights from this model over to it.

https://github.com/ibaiGorordo/CREStereo-Pytorch
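A rough sketch of the kind of weight translation meant here, assuming the MegEngine checkpoint's state_dict holds numpy arrays and the parameter names line up with the PyTorch port (in practice some keys may need renaming or reshaping):

import numpy as np
import megengine as mge
import torch
from nets import Model  # the PyTorch re-implementation (assumed module path)

# Copy the MegEngine checkpoint into the PyTorch model.
mge_state = mge.load("crestereo_eth3d.mge")["state_dict"]
torch_state = {k: torch.from_numpy(np.array(v)) for k, v in mge_state.items()}

model = Model(max_disp=256, mixed_precision=False, test_mode=True)
missing, unexpected = model.load_state_dict(torch_state, strict=False)
print("missing:", missing, "unexpected:", unexpected)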


PINTO0309 avatar PINTO0309 commented on June 8, 2024 1

@sunmooncode
No, I am not. Try it first. ibaiGorordo solves all problems.
image
image


sunmooncode avatar sunmooncode commented on June 8, 2024

I tried to convert the mge model with the following code and got an error: Segmentation fault (core dumped)

data1 = mge.Tensor(np.random.random([1, 3, 384, 512]).astype(np.float32))
data2 = mge.Tensor(np.random.random([1, 3, 384, 512]).astype(np.float32))

pretrained_dict = mge.load("model/crestereo_eth3d.mge",map_location="cpu")
model = CREStereo(max_disp=256, mixed_precision=False, test_mode=True)

model.load_state_dict(pretrained_dict["state_dict"], strict=True)

model.eval()
@jit.trace(symbolic=True, capture_as_const=True)
def infer_func(a,b, *, model):
    pred = model(a,b)
    return pred
infer_func(data1,data2,model=model)
infer_func.dump("./test.mge", arg_names=["left_image","right_image"],output_name=["disp"])


ibaiGorordo avatar ibaiGorordo commented on June 8, 2024

Hi @PINTO0309 thanks for the tip. @sunmooncode I got the same error after fixing my issue.

Also, I have created a Google Colab notebook to reproduce the error:
https://colab.research.google.com/drive/1IMibaByKwiAIam8UAvyI-2U_Z9O7rBWS?usp=sharing

Because of the CUDA version, it crashes if I load the model in a GPU runtime, so run it without a GPU.


ibaiGorordo avatar ibaiGorordo commented on June 8, 2024

Yeah... I tried converting the model to .mge (using @trace(symbolic=False, capture_as_const=True)), but using mge_to_onnx outputs the following error: AssertionError: OP PowC is not supported (I have updated the Google Colab notebook).

I do not understand the framework well enough, but it seems hard to fix these issues.


sunmooncode avatar sunmooncode commented on June 8, 2024

@ibaiGorordo Good job!
You could train a model for a small number of steps and run that model through the conversion to see whether it converts. Secondly, a PyTorch-to-ONNX export cannot contain dynamic structures; the structure needs to be made concrete (see the note below).
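On that dynamic-structure point: torch.onnx.export traces the model with concrete example inputs, so a Python-level if (for example, on whether flow_init is None) is frozen into whichever branch was taken during tracing instead of becoming an ONNX If node; this is why separate init/next graphs are exported later in the thread. A toy illustration:

import torch

class Toy(torch.nn.Module):
    def forward(self, x, flow_init=None):
        # This branch is resolved once, at trace/export time.
        if flow_init is None:
            flow_init = torch.zeros_like(x)
        return x + flow_init

# Exporting with only x provided bakes the "no flow_init" branch into the graph.
torch.onnx.export(Toy(), (torch.randn(1, 2, 4, 4),), "toy_init.onnx", opset_version=12)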


ibaiGorordo avatar ibaiGorordo commented on June 8, 2024

@sunmooncode I was able to convert the weights directly; however, it seems that some parameter in my PyTorch implementation is probably not correct. But overall the conversion seems to work.

Haven't tried converting it to other frameworks yet.
output


sunmooncode avatar sunmooncode commented on June 8, 2024

@PINTO0309 @ibaiGorordo Thanks for your help!
I would like to know how you converted these models. Could you provide the conversion scripts? It would be very helpful to me.


sunmooncode avatar sunmooncode commented on June 8, 2024

@PINTO0309 My problem is that ONNX has a hard time handling logical operators, yet there are a lot of "if" constructs in the model. Does this affect the conversion of the model?


Tord-Zhang avatar Tord-Zhang commented on June 8, 2024

(quoting PINTO0309's earlier comment and the mge_to_onnx traceback in full)

@PINTO0309 Hi, is this image from the Holopix50k dataset? I cannot get such good results with the same image. How did you do the pre-rectification?


PINTO0309 avatar PINTO0309 commented on June 8, 2024

I think you can use any image you like.


Tord-Zhang avatar Tord-Zhang commented on June 8, 2024

I think you can use any image you like.

test

this is the result I get with test.py


Tord-Zhang avatar Tord-Zhang commented on June 8, 2024

@PINTO0309 Hi, the result I get with test.py is much worse than the image you provided. Is there anything I need to do before feeding the image to test.py?


Tord-Zhang avatar Tord-Zhang commented on June 8, 2024

(re-posting the test.py result above)

@ibaiGorordo any suggestions?


sunmooncode avatar sunmooncode commented on June 8, 2024

@Tord-Zhang Which model are you using?


Tord-Zhang avatar Tord-Zhang commented on June 8, 2024

@Tord-Zhang Which model are you using?

The model provided by the author. The problem is not with the model; I used the raw Holopix50k data, which has not been strictly epipolar-rectified. The two images provided by the author have been rectified, which is why my results are worse. However, the author has not responded with details about the epipolar rectification.


sunmooncode avatar sunmooncode commented on June 8, 2024

@Tord-Zhang Which model are you using?

The model provided by the author. The problem is not with the model; I used the raw Holopix50k data, which has not been strictly epipolar-rectified. The two images provided by the author have been rectified, which is why my results are worse. However, the author has not responded with details about the epipolar rectification.

For epipolar rectification you can use OpenCV or MATLAB. Have you tried data you rectified yourself, or Middlebury, to see whether the results come out correctly? (A minimal OpenCV rectification sketch follows.)
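For reference, a rough OpenCV rectification sketch; the calibration values below are placeholders and must be replaced by a real calibration of your stereo pair:

import cv2
import numpy as np

left = cv2.imread("left.png")
right = cv2.imread("right.png")
h, w = left.shape[:2]

# Placeholder calibration; replace with your real intrinsics/extrinsics.
K1 = K2 = np.array([[700.0, 0.0, w / 2], [0.0, 700.0, h / 2], [0.0, 0.0, 1.0]])
d1 = d2 = np.zeros(5)
R = np.eye(3)                    # rotation between the two cameras
T = np.array([-0.12, 0.0, 0.0])  # baseline (example value, metres)

R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, d1, K2, d2, (w, h), R, T)
map1x, map1y = cv2.initUndistortRectifyMap(K1, d1, R1, P1, (w, h), cv2.CV_32FC1)
map2x, map2y = cv2.initUndistortRectifyMap(K2, d2, R2, P2, (w, h), cv2.CV_32FC1)

left_rect = cv2.remap(left, map1x, map1y, cv2.INTER_LINEAR)
right_rect = cv2.remap(right, map2x, map2y, cv2.INTER_LINEAR)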


liujiaxing7 avatar liujiaxing7 commented on June 8, 2024

@PINTO0309 Hello, have you tried deploying on other platforms? The combined onnx reports an error when converting to MNN: Can't convert Einsum for input size=3
Convert Onnx's Op init_ onnx::Mul_ 2590 , type = Einsum, failed, may be some node is not const


PINTO0309 avatar PINTO0309 commented on June 8, 2024
  1. CREStereo's OAK-D optimization validation: https://zenn.dev/pinto0309/scraps/475e4f2a641d22
  2. You could insert a Squeeze just before the Einsum to remove the batch dimension, run the Einsum, and then restore the batch dimension with an Unsqueeze afterwards.

If 3D doesn't work, just make it 2D. Try it yourself.
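A quick numpy check of that idea, just to show that a batch-1 3-D Einsum is equivalent to squeezing to 2-D, running the 2-D Einsum, and unsqueezing the result (the actual graph surgery on the ONNX file is a separate step):

import numpy as np

a = np.random.rand(1, 64, 80).astype(np.float32)
b = np.random.rand(1, 80, 96).astype(np.float32)

out_3d = np.einsum("bij,bjk->bik", a, b)                     # original 3-D Einsum
out_2d = np.einsum("ij,jk->ik", a.squeeze(0), b.squeeze(0))  # 2-D after Squeeze
assert np.allclose(out_3d, out_2d[None])                     # Unsqueeze restores the batch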


hgchen avatar hgchen commented on June 8, 2024

#3 (comment)
Hi @PINTO0309 Thanks a lot for the great work! I have a question here. Your CREStereo model zoo contains two types of ONNX files. One is ONNX, and the other is TensorRT (also in .onnx format). Do you mind sharing how the TensorRT ONNX files were generated? Thanks!


PINTO0309 avatar PINTO0309 commented on June 8, 2024

This topic is quite old and I no longer have the resources from that time at hand. However, comparing the two files will immediately reveal the difference; you will quickly see what errors you get when you run one on TensorRT.

$ ssc4onnx -if crestereo_combined_iter2_240x320.onnx

┏━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━┓┏━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━┓
┃ OP Type                ┃ OPs        ┃┃ OP Type                ┃ OPs        ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━┩┡━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━┩
│ Add                    │ 268        ││ Add                    │ 268        │
│ AveragePool            │ 8          ││ AveragePool            │ 8          │
│ Cast                   │ 120        ││ Cast                   │ 120        │
│ Concat                 │ 68         ││ Concat                 │ 68         │
│ Conv                   │ 126        ││ Conv                   │ 126        │
│ Div                    │ 64         ││ Div                    │ 64         │
│ Einsum                 │ 12         ││ Einsum                 │ 16         │
│ Elu                    │ 8          ││ Elu                    │ 8          │
│ Expand                 │ 56         ││ Expand                 │ 56         │
│ Floor                  │ 24         ││ Floor                  │ 24         │
│ Gather                 │ 32         ││ Gather                 │ 32         │
│ GatherElements         │ 48         ││ GatherElements         │ 48         │
│ Greater                │ 48         ││ Greater                │ 48         │
│ InstanceNormalization  │ 38         ││ InstanceNormalization  │ 38         │
│ Less                   │ 48         ││ Less                   │ 48         │
│ MatMul                 │ 24         ││ MatMul                 │ 24         │
│ Mul                    │ 415        ││ Mul                    │ 415        │
│ Neg                    │ 2          ││ Neg                    │ 2          │
│ Pad                    │ 32         ││ Pad                    │ 32         │
│ Pow                    │ 8          ││ Pow                    │ 8          │
│ Reciprocal             │ 4          ││ Reciprocal             │ 4          │
│ ReduceMean             │ 168        ││ ReduceMean             │ 168        │
│ ReduceSum              │ 8          ││ ReduceSum              │ 8          │
│ Relu                   │ 82         ││ Relu                   │ 82         │
│ Reshape                │ 114        ││ Reshape                │ 114        │
│ Resize                 │ 3          ││ Resize                 │ 3          │
│ Sigmoid                │ 26         ││ Sigmoid                │ 26         │
│ Slice                  │ 288        ││ Slice                  │ 288        │
│ Softmax                │ 4          ││ Softmax                │ 4          │
│ Split                  │ 36         ││ Split                  │ 36         │
│ Sqrt                   │ 8          ││ Sqrt                   │ 8          │
│ Sub                    │ 146        ││ Sub                    │ 146        │
│ Tanh                   │ 14         ││ Tanh                   │ 14         │
│ Transpose              │ 30         ││ Transpose              │ 30         │
│ Unsqueeze              │ 116        ││ Unsqueeze              │ 116        │
│ Where                  │ 96         ││ Where                  │ 96         │
│ ---------------------- │ ---------- ││ ---------------------- │ ---------- │
│ Total number of OPs    │ 2592       ││ Total number of OPs    │ 2596       │
│ ====================== │ ========== ││ ====================== │ ========== │
│ Model Size             │ 21.3MiB    ││ Model Size             │ 21.3MiB    │
└────────────────────────┴────────────┘└────────────────────────┴────────────┘

image


hgchen avatar hgchen commented on June 8, 2024

(quoting the ssc4onnx comparison above)

Got it. Thanks a lot for the prompt reply!


onmoontree avatar onmoontree commented on June 8, 2024

@sunmooncode
Hello, I am currently working out how to export an .onnx file for the crestereo model. After using the method in https://github.com/ibaiGorordo/CREStereo-Pytorch/blob/main/convert_to_onnx.py, the model's inference accuracy dropped significantly. May I ask whether you also used this code to generate the .onnx file, and if so, whether your inference accuracy also dropped?

