nbigaouette / onnxruntime-rs

Rust wrapper for Microsoft's ONNX Runtime (version 1.8)

License: Apache License 2.0

Rust 99.96% Python 0.03% C 0.01%
onnxruntime onnx rust onnx-runtime onnxruntime-sys inference rust-wrapper

onnxruntime-rs's Introduction

ONNX Runtime

This is an attempt at a Rust wrapper for Microsoft's ONNX Runtime (version 1.8).

This project consists of two crates: onnxruntime-sys, the low-level (unsafe) bindings, and onnxruntime, the high-level, safe wrapper.

Changelog

The build.rs script supports downloading pre-built versions of the Microsoft ONNX Runtime, which provides the following targets:

CPU:

  • Linux x86_64
  • macOS x86_64
  • macOS aarch64 (no pre-built binaries, no CI testing, see #74)
  • Windows i686
  • Windows x86_64

GPU:

  • Linux x86_64
  • Windows x86_64

WARNING:

  • This is an experiment and work in progress; it is not complete/working/safe. Help welcome!
  • Basic inference works, see onnxruntime/examples/sample.rs or onnxruntime/tests/integration_tests.rs
  • ONNX Runtime has many options to control the inference process but those options are not yet exposed.
  • This was developed and tested on macOS Catalina. Other platforms should work but have not been tested.

Setup

Three different strategies to obtain the ONNX Runtime are supported by the build.rs script:

  1. Download a pre-built binary from upstream;
  2. Point to a local version already installed;
  3. Compile from source (not yet implemented).

To select which strategy to use, set the ORT_STRATEGY environment variable to:

  1. download: This is the default if ORT_STRATEGY is not set;
  2. system: To use a locally installed version (use the ORT_LIB_LOCATION environment variable to point to the install path);
  3. compile: To compile the library from source (not yet implemented).

The download strategy supports downloading a CUDA-enabled version of the ONNX Runtime. To use this, set the environment variable ORT_USE_CUDA=1 (CUDA builds are only available for Linux and Windows).
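For example (the install path below is a placeholder, not a required location):

# Default (ORT_STRATEGY=download): fetch a pre-built CPU binary
❯ cargo build
# Use a locally installed copy of the runtime
❯ ORT_STRATEGY=system ORT_LIB_LOCATION=/full/path/to/onnxruntime cargo build
# Download a CUDA-enabled pre-built binary (Linux or Windows only)
❯ ORT_USE_CUDA=1 cargo build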

Until the build script allows compilation of the runtime, see the compilation notes for some details on the process.

Note on 'ORT_STRATEGY=system'

When using ORT_STRATEGY=system, executing a built crate binary (for example the tests) might fail, at least on macOS, if the library is not installed in a system path. An error similar to the following happens:

dyld: Library not loaded: @rpath/libonnxruntime.1.7.1.dylib
  Referenced from: onnxruntime-rs.git/target/debug/deps/onnxruntime_sys-22eb0e3e89a0278c
  Reason: image not found

To fix, one can either:

  • Set the LD_LIBRARY_PATH environment variable to point to the path where the library can be found (see the example after this list).

  • Adapt the .cargo/config file to contain a linker flag to provide the full path:

    [target.aarch64-apple-darwin]
    rustflags = ["-C", "link-args=-Wl,-rpath,/full/path/to/onnxruntime/lib"]

See rust-lang/cargo #5077 for more information.
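For the first option, a minimal sketch (the library path is a placeholder for the actual install location):

❯ LD_LIBRARY_PATH=/full/path/to/onnxruntime/lib cargo test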

Example

The C++ example that uses the C API (C_Api_Sample.cpp) was ported to both the low level crate (onnxruntime-sys) and the high level one (onnxruntime).

onnxruntime-sys

To run this example (onnxruntime-sys/examples/c_api_sample.rs):

# Download the model (SqueezeNet 1.0, ONNX version: 1.3, Opset version: 8)
❯ curl -LO "https://github.com/onnx/models/raw/master/vision/classification/squeezenet/model/squeezenet1.0-8.onnx"
❯ cargo run --example c_api_sample
[...]
    Finished dev [unoptimized + debuginfo] target(s) in 1.88s
     Running `target/debug/examples/c_api_sample`
Using Onnxruntime C API
2020-08-09 09:37:41.554922 [I:onnxruntime:, inference_session.cc:174 ConstructorCommon] Creating and using per session threadpools since use_per_session_threads_ is true
2020-08-09 09:37:41.556650 [I:onnxruntime:, inference_session.cc:830 Initialize] Initializing session.
2020-08-09 09:37:41.556665 [I:onnxruntime:, inference_session.cc:848 Initialize] Adding default CPU execution provider.
2020-08-09 09:37:41.556678 [I:onnxruntime:test, bfc_arena.cc:15 BFCArena] Creating BFCArena for Cpu
2020-08-09 09:37:41.556687 [V:onnxruntime:test, bfc_arena.cc:32 BFCArena] Creating 21 bins of max chunk size 256 to 268435456
2020-08-09 09:37:41.558313 [I:onnxruntime:, reshape_fusion.cc:37 ApplyImpl] Total fused reshape node count: 0
2020-08-09 09:37:41.559327 [I:onnxruntime:, reshape_fusion.cc:37 ApplyImpl] Total fused reshape node count: 0
2020-08-09 09:37:41.559476 [I:onnxruntime:, reshape_fusion.cc:37 ApplyImpl] Total fused reshape node count: 0
2020-08-09 09:37:41.559607 [V:onnxruntime:, inference_session.cc:671 TransformGraph] Node placements
2020-08-09 09:37:41.559615 [V:onnxruntime:, inference_session.cc:673 TransformGraph] All nodes have been placed on [CPUExecutionProvider].
2020-08-09 09:37:41.559639 [I:onnxruntime:, session_state.cc:25 SetGraph] SaveMLValueNameIndexMapping
2020-08-09 09:37:41.559787 [I:onnxruntime:, session_state.cc:70 SetGraph] Done saving OrtValue mappings.
2020-08-09 09:37:41.560252 [I:onnxruntime:, session_state_initializer.cc:178 SaveInitializedTensors] Saving initialized tensors.
2020-08-09 09:37:41.563467 [I:onnxruntime:, session_state_initializer.cc:223 SaveInitializedTensors] Done saving initialized tensors
2020-08-09 09:37:41.563979 [I:onnxruntime:, inference_session.cc:919 Initialize] Session successfully initialized.
Number of inputs = 1
Input 0 : name=data_0
Input 0 : type=1
Input 0 : num_dims=4
Input 0 : dim 0=1
Input 0 : dim 1=3
Input 0 : dim 2=224
Input 0 : dim 3=224
2020-08-09 09:37:41.573127 [I:onnxruntime:, sequential_executor.cc:145 Execute] Begin execution
2020-08-09 09:37:41.573183 [I:onnxruntime:test, bfc_arena.cc:259 AllocateRawInternal] Extending BFCArena for Cpu. bin_num:13 rounded_bytes:3154176
2020-08-09 09:37:41.573197 [I:onnxruntime:test, bfc_arena.cc:143 Extend] Extended allocation by 4194304 bytes.
2020-08-09 09:37:41.573203 [I:onnxruntime:test, bfc_arena.cc:147 Extend] Total allocated bytes: 9137152
2020-08-09 09:37:41.573212 [I:onnxruntime:test, bfc_arena.cc:150 Extend] Allocated memory at 0x7fb7d6cb7000 to 0x7fb7d70b7000
2020-08-09 09:37:41.573248 [I:onnxruntime:test, bfc_arena.cc:259 AllocateRawInternal] Extending BFCArena for Cpu. bin_num:8 rounded_bytes:65536
2020-08-09 09:37:41.573256 [I:onnxruntime:test, bfc_arena.cc:143 Extend] Extended allocation by 4194304 bytes.
2020-08-09 09:37:41.573262 [I:onnxruntime:test, bfc_arena.cc:147 Extend] Total allocated bytes: 13331456
2020-08-09 09:37:41.573268 [I:onnxruntime:test, bfc_arena.cc:150 Extend] Allocated memory at 0x7fb7d70b7000 to 0x7fb7d74b7000
Score for class [0] =  0.000045440644
Score for class [1] =  0.0038458651
Score for class [2] =  0.00012494653
Score for class [3] =  0.0011804523
Score for class [4] =  0.0013169361
Done!

onnxruntime

To run this example (onnxruntime/examples/sample.rs):

# Download the model (SqueezeNet 1.0, ONNX version: 1.3, Opset version: 8)
❯ curl -LO "https://github.com/onnx/models/raw/master/vision/classification/squeezenet/model/squeezenet1.0-8.onnx"
❯ cargo run --example sample
[...]
    Finished dev [unoptimized + debuginfo] target(s) in 13.62s
     Running `target/debug/examples/sample`
Uninitialized environment found, initializing it with name "test".
2020-08-09 09:34:37.395577 [I:onnxruntime:, inference_session.cc:174 ConstructorCommon] Creating and using per session threadpools since use_per_session_threads_ is true
2020-08-09 09:34:37.399253 [I:onnxruntime:, inference_session.cc:830 Initialize] Initializing session.
2020-08-09 09:34:37.399284 [I:onnxruntime:, inference_session.cc:848 Initialize] Adding default CPU execution provider.
2020-08-09 09:34:37.399313 [I:onnxruntime:test, bfc_arena.cc:15 BFCArena] Creating BFCArena for Cpu
2020-08-09 09:34:37.399335 [V:onnxruntime:test, bfc_arena.cc:32 BFCArena] Creating 21 bins of max chunk size 256 to 268435456
2020-08-09 09:34:37.410516 [I:onnxruntime:, reshape_fusion.cc:37 ApplyImpl] Total fused reshape node count: 0
2020-08-09 09:34:37.417478 [I:onnxruntime:, reshape_fusion.cc:37 ApplyImpl] Total fused reshape node count: 0
2020-08-09 09:34:37.420131 [I:onnxruntime:, reshape_fusion.cc:37 ApplyImpl] Total fused reshape node count: 0
2020-08-09 09:34:37.422623 [V:onnxruntime:, inference_session.cc:671 TransformGraph] Node placements
2020-08-09 09:34:37.428863 [V:onnxruntime:, inference_session.cc:673 TransformGraph] All nodes have been placed on [CPUExecutionProvider].
2020-08-09 09:34:37.428954 [I:onnxruntime:, session_state.cc:25 SetGraph] SaveMLValueNameIndexMapping
2020-08-09 09:34:37.429079 [I:onnxruntime:, session_state.cc:70 SetGraph] Done saving OrtValue mappings.
2020-08-09 09:34:37.429925 [I:onnxruntime:, session_state_initializer.cc:178 SaveInitializedTensors] Saving initialized tensors.
2020-08-09 09:34:37.436300 [I:onnxruntime:, session_state_initializer.cc:223 SaveInitializedTensors] Done saving initialized tensors
2020-08-09 09:34:37.437255 [I:onnxruntime:, inference_session.cc:919 Initialize] Session successfully initialized.
Dropping the session options.
2020-08-09 09:34:37.448956 [I:onnxruntime:, sequential_executor.cc:145 Execute] Begin execution
2020-08-09 09:34:37.449041 [I:onnxruntime:test, bfc_arena.cc:259 AllocateRawInternal] Extending BFCArena for Cpu. bin_num:13 rounded_bytes:3154176
2020-08-09 09:34:37.449072 [I:onnxruntime:test, bfc_arena.cc:143 Extend] Extended allocation by 4194304 bytes.
2020-08-09 09:34:37.449087 [I:onnxruntime:test, bfc_arena.cc:147 Extend] Total allocated bytes: 9137152
2020-08-09 09:34:37.449104 [I:onnxruntime:test, bfc_arena.cc:150 Extend] Allocated memory at 0x7fb3b9585000 to 0x7fb3b9985000
2020-08-09 09:34:37.449176 [I:onnxruntime:test, bfc_arena.cc:259 AllocateRawInternal] Extending BFCArena for Cpu. bin_num:8 rounded_bytes:65536
2020-08-09 09:34:37.449196 [I:onnxruntime:test, bfc_arena.cc:143 Extend] Extended allocation by 4194304 bytes.
2020-08-09 09:34:37.449209 [I:onnxruntime:test, bfc_arena.cc:147 Extend] Total allocated bytes: 13331456
2020-08-09 09:34:37.449222 [I:onnxruntime:test, bfc_arena.cc:150 Extend] Allocated memory at 0x7fb3b9985000 to 0x7fb3b9d85000
Dropping Tensor.
Score for class [0] =  0.000045440578
Score for class [1] =  0.0038458686
Score for class [2] =  0.0001249467
Score for class [3] =  0.0011804511
Score for class [4] =  0.00131694
Dropping TensorFromOrt.
Dropping the session.
Dropping the memory information.
Dropping the environment.

See also the integration tests (onnxruntime/tests/integration_tests.rs), which perform a simple model download and inference and validate the results.

Bindings Generation

Bindings (the basis of onnxruntime-sys) are committed to the git repository. This means bindgen is no longer a dependency on every build (it was made optional), which improves build times.

To generate new bindings (for example if they don't exist for your platform, or after a version bump), build the crate with the generate-bindings feature.

NOTE: Make sure to have the rustfmt rustup component present so that bindings are formatted:

rustup component add rustfmt

Then on each platform build with the proper feature flag:

❯ cd onnxruntime-sys
❯ cargo build --features generate-bindings

Generating Bindings for Linux With Docker

Prepare the container:

❯ docker run -it --rm --name rustbuilder -v "$PWD":/usr/src/myapp -w /usr/src/myapp rust:1.50.0 /bin/bash
❯ apt-get update
❯ apt-get install clang
❯ rustup component add rustfmt

Generate the bindings:

❯ docker exec -it --user "$(id -u)":"$(id -g)" rustbuilder /bin/bash
❯ cd onnxruntime-sys
❯ cargo build --features generate-bindings

Generating Bindings for Windows With Vagrant

You can use nbigaouette/windows_vagrant_rust to provision a Windows VM that can build the project and generate the bindings.

Windows can build both x86 and x86_64 bindings:

❯ rustup target add i686-pc-windows-msvc x86_64-pc-windows-msvc
❯ cd onnxruntime-sys
❯ cargo build --features generate-bindings --target i686-pc-windows-msvc
❯ cargo build --features generate-bindings --target x86_64-pc-windows-msvc

Conduct

The Rust Code of Conduct shall be respected. For escalation or moderation issues please contact Nicolas ([email protected]) instead of the Rust moderation team.

License

This project is licensed under either of

  • Apache License, Version 2.0
  • MIT license

at your option.


onnxruntime-rs's Issues

Should ndarray types be re-exported?

Right now the tensor types come from ndarray. This means ndarray needs to be added to the list of dependencies in Cargo.toml, in addition to onnxruntime.

Since it is part of the API anyway, re-exporting ndarray should simplify usage a bit.
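A minimal sketch of the proposed re-export, a single line in onnxruntime's lib.rs:

// In onnxruntime/src/lib.rs: re-export the dependency so downstream
// crates use the exact same ndarray version as the wrapper itself.
pub use ndarray;

Downstream code could then write use onnxruntime::ndarray::Array2; without declaring ndarray in its own Cargo.toml.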

RUSTSEC-2020-0159: Potential segfault in `localtime_r` invocations

Potential segfault in localtime_r invocations

Details
Package chrono
Version 0.4.19
URL chronotope/chrono#499
Date 2020-11-10

Impact

Unix-like operating systems may segfault due to dereferencing a dangling pointer in specific circumstances. This requires an environment variable to be set in a different thread than the affected functions. This may occur without the user's knowledge, notably in a third-party library.

Workarounds

No workarounds are known.

References

See advisory page for additional details.

run with heterogeneous input types

The interface of session::run uses a Vec for the input vectors. Is it possible to run a model whose inputs have different types?

onnxruntime-extensions support

Following the onnxruntime-extensions build script, I am able to use some community custom operations, such as sentencepiece_tokenizer.

In order to register those operations, I must first call the EnableOrtCustomOps function, present in the main C and C++ APIs.
This call produces an OrtSessionOptions object which is used to create a new session containing the custom operations.

The current Rust crate is missing those two pieces: EnableOrtCustomOps and a SessionBuilder that can be fed an OrtSessionOptions object.

For visibility purposes, the code below shows the implementation of the first call:

fn enable_custom_operations() -> *mut OrtSessionOptions {
    let g_ort = unsafe { OrtGetApiBase().as_ref().unwrap().GetApi.unwrap()(ORT_API_VERSION) };
    assert_ne!(g_ort, std::ptr::null());
    let mut session_options_ptr: *mut OrtSessionOptions = std::ptr::null_mut();
    let status = unsafe {
        g_ort.as_ref().unwrap().CreateSessionOptions.unwrap()(&mut session_options_ptr)
    };
    check_status(g_ort, status).unwrap();
    let status = unsafe {
        g_ort.as_ref().unwrap().EnableOrtCustomOps.unwrap()(session_options_ptr)
    };
    check_status(g_ort, status).unwrap();
    session_options_ptr
}

And the next one the session building process:

let session_options_ptr = enable_custom_operations();
let model_bytes = std::fs::read("model.onnx").unwrap();
let environment = Environment::builder()
    .with_name("test")
    .with_log_level(LoggingLevel::Verbose)
    .build()?;

let mut session = SessionBuilder {
    env: &environment,
    session_options_ptr,
    allocator: AllocatorType::Arena,
    memory_type: MemType::Default,
}
.with_optimization_level(GraphOptimizationLevel::Basic)?
.with_number_threads(1)?
.with_model_from_memory(model_bytes)?;

Investigate static linking

Dynamic linking of libonnxruntime.so is annoying. If binaries are not run from cargo, they cannot find it (rpath issue). This makes running examples through a debugger difficult.

Static linking might help in that regard. Not sure if upstream supports it though.

An idea - a `Runner` that owns the input and output arrays and freezes shapes and names

Bottom line - there's tons of overhead in run() currently:

  • It allocates something like 15 Vec instances and a bunch of strings; there are allocations all over the place (so for small inputs and graphs this is noticeable)
  • For big inputs, you are currently required to copy the data in
  • There's a lot of overhead like building name vecs (this should be done upon model load?) and shapes (if there are no dynamic axes, there's no need to do that repeatedly)
  • There are allocations for the outputs as well

Here's one idea: what if you could do something like this? I think this way you could bring the overhead down to almost zero.

// maybe I've missed something, would like to hear your thoughts, @nbigaouette :)

// note that this is all simplified, as it may require e.g. Pin<> in a few places
struct Runner {
    session: Session,
    inputs: Vec<Array<...>>,
    // owned preallocated outputs as well?
    input_names: Vec<CString>,
    output_names: Vec<CString>,
}

impl Runner {
    fn from_session(session: Session) -> Self { ... }

    pub fn execute(&mut self) { ... }

    pub fn outputs(&self) -> &[Array<...>] { ... }

    pub fn inputs(&mut self) -> &mut [Array<...>] { ... }
}

let mut session: Session = ...;
let input_arrays: Vec<...> = ...;

// this executes most of what `run()` currently does, all the way up to the actual .Run() call
let runner = session.into_runner(input_arrays);

runner.execute()?; // this just calls Run() and converts the status

// if outputs are preallocated, no extra allocations here either
for out in runner.outputs() {
    dbg!(out);
}

// no allocations, no boilerplate, we're just updating the inputs
runner.inputs()[0].fill(42.0);

// no allocations, no boilerplate, just a .Run() call
runner.execute()?;

Android support

I am trying to generate Android bindings: chertov@4be4f7f
The solution is simple, but I don't know how it will work with the different ORT_STRATEGY options.
I would like to use NNAPI acceleration.

Setup CI

Setup GitHub Actions for CI.

Depends on #6.

RUSTSEC-2021-0080: Links in archive can create arbitrary directories

Links in archive can create arbitrary directories

Details
Package tar
Version 0.4.35
URL alexcrichton/tar-rs#238
Date 2021-07-19

When unpacking a tarball that contains a symlink, the tar crate may create directories outside of the directory it's supposed to unpack into.

The function errors when it's trying to create a file, but the folders are
already created at this point.

use std::{io, io::Result};
use tar::{Archive, Builder, EntryType, Header};

fn main() -> Result<()> {
    let mut buf = Vec::new();

    {
        let mut builder = Builder::new(&mut buf);

        // symlink: parent -> ..
        let mut header = Header::new_gnu();
        header.set_path("symlink")?;
        header.set_link_name("..")?;
        header.set_entry_type(EntryType::Symlink);
        header.set_size(0);
        header.set_cksum();
        builder.append(&header, io::empty())?;

        // file: symlink/exploit/foo/bar
        let mut header = Header::new_gnu();
        header.set_path("symlink/exploit/foo/bar")?;
        header.set_size(0);
        header.set_cksum();
        builder.append(&header, io::empty())?;

        builder.finish()?;
    };

    Archive::new(&*buf).unpack("demo")
}

This issue was discovered and reported by Martin Michaelis (@mgjm).

See advisory page for additional details.

Segmentation fault with GetTensorMutableData

Using GetTensorMutableData to read back the resulting tensor segfaults:

❯ cargo run --example c_api_sample
    Finished dev [unoptimized + debuginfo] target(s) in 0.08s
     Running `target/debug/examples/c_api_sample`
g_ort: 0x10a60e5d8
Using Onnxruntime C API
2020-08-04 10:11:10.866050 [I:onnxruntime:, inference_session.cc:174 ConstructorCommon] Creating and using per session threadpools since use_per_session_threads_ is true
2020-08-04 10:11:10.874242 [I:onnxruntime:, inference_session.cc:830 Initialize] Initializing session.
2020-08-04 10:11:10.874281 [I:onnxruntime:, inference_session.cc:848 Initialize] Adding default CPU execution provider.
2020-08-04 10:11:10.874334 [I:onnxruntime:test, bfc_arena.cc:15 BFCArena] Creating BFCArena for Cpu
2020-08-04 10:11:10.874356 [V:onnxruntime:test, bfc_arena.cc:32 BFCArena] Creating 21 bins of max chunk size 256 to 268435456
2020-08-04 10:11:10.881963 [I:onnxruntime:, reshape_fusion.cc:37 ApplyImpl] Total fused reshape node count: 0
2020-08-04 10:11:10.886945 [I:onnxruntime:, reshape_fusion.cc:37 ApplyImpl] Total fused reshape node count: 0
2020-08-04 10:11:10.888130 [I:onnxruntime:, reshape_fusion.cc:37 ApplyImpl] Total fused reshape node count: 0
2020-08-04 10:11:10.889268 [V:onnxruntime:, inference_session.cc:671 TransformGraph] Node placements
2020-08-04 10:11:10.889318 [V:onnxruntime:, inference_session.cc:673 TransformGraph] All nodes have been placed on [CPUExecutionProvider].
2020-08-04 10:11:10.889525 [I:onnxruntime:, session_state.cc:25 SetGraph] SaveMLValueNameIndexMapping
2020-08-04 10:11:10.889922 [I:onnxruntime:, session_state.cc:70 SetGraph] Done saving OrtValue mappings.
2020-08-04 10:11:10.893268 [I:onnxruntime:, session_state_initializer.cc:178 SaveInitializedTensors] Saving initialized tensors.
2020-08-04 10:11:10.896629 [I:onnxruntime:, session_state_initializer.cc:223 SaveInitializedTensors] Done saving initialized tensors
2020-08-04 10:11:10.899166 [I:onnxruntime:, inference_session.cc:919 Initialize] Session successfully initialized.
Number of inputs = 1
Input 0 : name=data_0
Input 0 : type=1
Input 0 : num_dims=4
Input 0 : dim 0=1
Input 0 : dim 1=3
Input 0 : dim 2=224
Input 0 : dim 3=224
input_node_dims: [1, 3, 224, 224]
input_node_names[0]:  "data_0"
output_node_names[0]: "softmaxout_1"
2020-08-04 10:11:10.908541 [I:onnxruntime:, sequential_executor.cc:145 Execute] Begin execution
2020-08-04 10:11:10.908632 [I:onnxruntime:test, bfc_arena.cc:259 AllocateRawInternal] Extending BFCArena for Cpu. bin_num:13 rounded_bytes:3154176
2020-08-04 10:11:10.908676 [I:onnxruntime:test, bfc_arena.cc:143 Extend] Extended allocation by 4194304 bytes.
2020-08-04 10:11:10.908690 [I:onnxruntime:test, bfc_arena.cc:147 Extend] Total allocated bytes: 9137152
2020-08-04 10:11:10.908705 [I:onnxruntime:test, bfc_arena.cc:150 Extend] Allocated memory at 0x7fa051c00000 to 0x7fa052000000
2020-08-04 10:11:10.908789 [I:onnxruntime:test, bfc_arena.cc:259 AllocateRawInternal] Extending BFCArena for Cpu. bin_num:8 rounded_bytes:65536
2020-08-04 10:11:10.908809 [I:onnxruntime:test, bfc_arena.cc:143 Extend] Extended allocation by 4194304 bytes.
2020-08-04 10:11:10.908822 [I:onnxruntime:test, bfc_arena.cc:147 Extend] Total allocated bytes: 13331456
2020-08-04 10:11:10.908834 [I:onnxruntime:test, bfc_arena.cc:150 Extend] Allocated memory at 0x7fa0554b7000 to 0x7fa0558b7000
thread 'main' panicked at 'assertion failed: (unsafe { *floatarr } - 0.000045).abs() < 1e-6', onnxruntime-sys/examples/c_api_sample.rs:413:5

See: https://github.com/nbigaouette/onnxruntime-rs/blob/43abc4abb6c992cb0eb22629cbf7b4c4/onnxruntime-sys/examples/c_api_sample.rs#L408

ONNX Runtime fails to run on Windows

When you try to run a project containing onnxruntime (0.0.12), it fails to launch because the downloaded DLL can't be found.

It exits with the following error:

error: process didn't exit successfully: `target\debug\onnx_test.exe` (exit code: 0xc000007b)

I don't actually know what I'm doing wrong here. The Rust code itself doesn't matter, because it fails before entering the main function.

To run the program you have to manually copy the downloaded DLL from within the build directory. Rust generates random names for the output directory on each clean rebuild, so it's not possible to automate the copying to run the program in Docker. It's kind of unusual from a user's perspective.

I'd gladly receive any guidance on this error if there are any known solutions.

RUSTSEC-2021-0139: ansi_term is Unmaintained

ansi_term is Unmaintained

Details
Status unmaintained
Package ansi_term
Version 0.12.1
URL ogham/rust-ansi-term#72
Date 2021-08-18

The maintainer has advised that this crate is deprecated and will not receive any maintenance.

The crate does not seem to have many dependencies and may or may not be OK to use as-is.

The last release seems to have been three years ago.

Possible Alternative(s)

The list below has not been vetted in any way and may or may not contain alternatives.

See advisory page for additional details.

Replace environment Arc<Mutex> with upstream

In microsoft/onnxruntime#4583 the environment created from the C API is made threadsafe; creating multiple environments from different threads should return the singleton, including the log manager.

The merge commit (f0edd074fb6a957b7c40a5135af61654043ed1c2) is not yet released (1.4.0 is the latest for now).

When a new release is done, using it might simplify the lazy_static used here.

Bindings float precision

It looks like somewhere these bindings are adding error to the float output.

Using the Python and native C++ onnxruntime APIs, the output I receive for a yolov3 ONNX model is:

[[[     4.5295      4.3708      8.5376       11.65  2.2829e-05     0.98627]
  [     11.613      3.9123      22.815      9.5377  2.1607e-05     0.98834]
  [     19.874      4.0934      34.348      8.3545  1.0371e-05     0.98514]
  ...

Using onnxruntime-rs API:

[[[4.4851866, 4.155593, 8.946828, 12.582257, 0.0000130140525, 0.986418],
  [11.476209, 3.759369, 23.092682, 9.8777075, 0.000009529847, 0.9887239],
  [19.693684, 3.971115, 34.107414, 8.499779, 0.0000042610786, 0.9855219],
...

These results are approximately the same, but you can see the errors, which are pretty huge!

I even tried to compile onnxruntime with different backends and enable them by modifying this crate.

I tried both the download and system onnxruntime strategies, with and without GPU, with a custom backend, etc.

Status of this project

Hi there,

I would be interested to know what the status of this project is. Are there any plans to further develop this library? For example by reviewing/accepting PRs?

Add logging

Better logging should be added, either using log or tracing.
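As a sketch of what this could look like with tracing (the targets, fields, and messages here are hypothetical; tracing and tracing-subscriber are assumed as dependencies):

use tracing::{debug, info};

fn main() {
    // The embedding application installs a subscriber once.
    tracing_subscriber::fmt::init();

    // The crate's internals would then emit structured events instead of println!:
    info!(target: "onnxruntime", "environment initialized");
    debug!(target: "onnxruntime", num_inputs = 1, "running session");
}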

Rust flags are needed for compiling and using on linux

This target is specifically for OSX, but if you're trying to build onnxruntime and run it with this lib on Linux, it seems like the flag needs to be there as well.

I can push up a PR for this, but I don't know if you want to add it for Linux or remove the target. Both fixed the issues I had using a compiled onnxruntime with my Rust project, but I'm not familiar enough yet with the impact it might have on the other platforms you're targeting.

onnxruntime-sys build script failure

Hi there, I was trying to compile my project which uses onnxruntime but the following error was given:

error: failed to run custom build command for `onnxruntime-sys v0.0.2`

Caused by:
  process didn't exit successfully: `/.../target/debug/build/onnxruntime-sys-1abee8f24c70e3db/build-script-build` (exit status: 101)
  --- stderr
  thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: NotPresent', /.../.cargo/registry/src/github.com-1ecc6299db9ec823/onnxruntime-sys-0.0.2/build.rs:4:80
  note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

I tried looking in build.rs but couldn't find anything on line 4, though I may be wrong.

Should input/output names be precached?

Why do this every time run() is called? Would it be possible to do it upon load?

let input_names: Vec<String> = self.inputs.iter().map(|input| input.name.clone()).collect();
let input_names_cstring: Vec<CString> = input_names
    .iter()
    .cloned()
    .map(|n| CString::new(n).unwrap())
    .collect();
let input_names_ptr: Vec<*const i8> = input_names_cstring
    .into_iter()
    .map(|n| n.into_raw() as *const i8)
    .collect();
let output_names: Vec<String> = self
    .outputs
    .iter()
    .map(|output| output.name.clone())
    .collect();
let output_names_cstring: Vec<CString> = output_names
    .into_iter()
    .map(|n| CString::new(n).unwrap())
    .collect();
let output_names_ptr: Vec<*const i8> = output_names_cstring
    .iter()
    .map(|n| n.as_ptr() as *const i8)
    .collect();
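A sketch of what precaching could look like, assuming a helper run once at model-load time (the CachedNames type and cache_names function are hypothetical, not the crate's internals):

use std::ffi::CString;

/// Name buffers built once at load time (hypothetical helper type).
struct CachedNames {
    // The CStrings must stay alive for the raw pointers to remain valid.
    cstrings: Vec<CString>,
    ptrs: Vec<*const i8>,
}

fn cache_names(names: &[String]) -> CachedNames {
    let cstrings: Vec<CString> = names
        .iter()
        .map(|n| CString::new(n.as_str()).unwrap())
        .collect();
    // Borrow (rather than into_raw) so Drop still frees the strings later.
    let ptrs: Vec<*const i8> = cstrings
        .iter()
        .map(|n| n.as_ptr() as *const i8)
        .collect();
    CachedNames { cstrings, ptrs }
}

fn main() {
    let cached = cache_names(&["data_0".to_string()]);
    assert_eq!(cached.cstrings.len(), cached.ptrs.len());
}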

Assert Failed when Initializing Environment

I've tested it successfully on my computer, but after I moved it to another machine with WinServer 2022, I got the following error.

DEBUG: onnxruntime::environment: Environment not yet initialized, creating a new one.
thread 'main' panicked at 'assertion failed: `(left != right)`
  left: `0x0`,
 right: `0x0`', C:\Users\Administrator\.cargo\registry\src\github.com-1ecc6299db9ec823\onnxruntime-0.0.14\src\lib.rs:180:5
stack backtrace:
   0: std::panicking::begin_panic_handler
             at /rustc/d5a82bbd26e1ad8b7401f6a718a9c57c96905483/library\std\src\panicking.rs:575
   1: core::panicking::panic_fmt
             at /rustc/d5a82bbd26e1ad8b7401f6a718a9c57c96905483/library\core\src\panicking.rs:64
   2: core::fmt::Arguments::new_v1
             at /rustc/d5a82bbd26e1ad8b7401f6a718a9c57c96905483/library\core\src\fmt\mod.rs:398
   3: core::panicking::assert_failed_inner
             at /rustc/d5a82bbd26e1ad8b7401f6a718a9c57c96905483/library\core\src\panicking.rs:245
   4: core::panicking::assert_failed<ptr_mut$<onnxruntime_sys::OrtApi>,ptr_mut$<onnxruntime_sys::OrtApi> >
             at /rustc/d5a82bbd26e1ad8b7401f6a718a9c57c96905483\library\core\src\panicking.rs:199
   5: onnxruntime::g_ort
             at C:\Users\Administrator\.cargo\registry\src\github.com-1ecc6299db9ec823\onnxruntime-0.0.14\src\lib.rs:180
   6: onnxruntime::environment::Environment::new
             at C:\Users\Administrator\.cargo\registry\src\github.com-1ecc6299db9ec823\onnxruntime-0.0.14\src\environment.rs:105
   7: onnxruntime::environment::EnvBuilder::build
             at C:\Users\Administrator\.cargo\registry\src\github.com-1ecc6299db9ec823\onnxruntime-0.0.14\src\environment.rs:234

which indicates that this assert failed:

fn g_ort() -> sys::OrtApi {
    let mut api_ref = G_ORT_API
        .lock()
        .expect("Failed to acquire lock: another thread panicked?");
    let api_ref_mut: &mut *mut sys::OrtApi = api_ref.get_mut();
    let api_ptr_mut: *mut sys::OrtApi = *api_ref_mut;

    assert_ne!(api_ptr_mut, std::ptr::null_mut());

    unsafe { *api_ptr_mut }
}

I have no experience with the ONNX Runtime lib, so I was wondering whether this is system-related or whether some dependencies are missing. Any help is appreciated.

Let build.rs download or build onnxruntime

The build.rs of onnxruntime-sys should be adapted to:

  1. Download a pre-built library;
  2. Compile the library;
  3. Use a version of the library somewhere on the system.

An environment variable should select which one to do.

This will allow CI runs by using a pre-built library.

thread 'main' panicked at 'called `Option::unwrap()` on a `None` value'

Rust version: 1.66.1 (90743e729 2023-01-10)
OS : win11

The model that I used was converted by tf2onnx, and it loads successfully.

But after I run the session, I get the error message "'main' panicked at 'called Option::unwrap() on a None value'" from this line:

let input0_shape: Vec<usize> = session.inputs[0].dimensions().map(|d| d.unwrap()).collect();

version update to 1.13?

Hi there,

I noticed that the latest version of onnxruntime from Microsoft is 1.13. Are there any plans to update the Rust bindings to use the same version?

Thanks

Training

Hi there,

I know that ort supports training. How does one go about doing that with the Rust version?

Thanks.

Model with dynamic dimensions

Hi, thanks for this library!

I've been trying to run a model with dynamic input dimensions, but it doesn't work due to a NonMatchingDimensions error.
model here.

Here's how I'd use the model from the onnxruntime Python bindings:

import onnxruntime # v1.2.0

session = onnxruntime.InferenceSession("model.onnx")
outputs = session.run(None, {"input_ids": [[1, 2, 3]], "attention_mask": [[1, 1, 1]]})[0]
print(outputs.shape)

Using your Rust bindings:

let env = Environment::builder().with_name("env").build()?;
let session = env
            .new_session_builder()?
            .with_optimization_level(GraphOptimizationLevel::Basic)?
            .with_model_from_file("model.onnx")?;

println!("{:#?}", session.inputs);
println!("{:#?}", session.outputs);

let input_ids = Array2::<f32>::from_shape_vec((1, 3), vec![1f32, 2f32, 3f32])?;
let attention_mask = Array2::<f32>::from_shape_vec((1, 3), vec![1f32, 1f32, 1f32])?;

let outputs: Vec<OrtOwnedTensor<f32, _>> = session.run(vec![input_ids, attention_mask])?;

This prints:

[
    Input {
        name: "input_ids",
        input_type: Int64,
        dimensions: [
            4294967295,
            4294967295,
        ],
    },
    Input {
        name: "attention_mask",
        input_type: Int64,
        dimensions: [
            4294967295,
            4294967295,
        ],
    },
]
[
    Output {
        name: "output",
        output_type: Float,
        dimensions: [
            4294967295,
            4294967295,
            94,
        ],
    },
]
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: NonMatchingDimensions { input: [1, 3], model: [4294967295, 4294967295] }'

Apparently the dynamic dimensions lead to an integer overflow (they are encoded as -1 in ONNX iirc).
I'm also a bit skeptical about the constraint on .run to have the same output type as input type - does that handle models with int64 input and float output correctly?
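A minimal demonstration of the suspected encoding issue (plain Rust, independent of the crate):

fn main() {
    // ONNX encodes a dynamic dimension as -1; reinterpreting that as an
    // unsigned 32-bit value yields the 4294967295 seen above.
    let dynamic_dim: i64 = -1;
    assert_eq!(dynamic_dim as u32, 4_294_967_295);
}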

I appreciate any help!

Does onnxruntime-rs support dynamic demension?

Hello, I am new to Rust. I'm trying to load my model, which has dynamic dimensions; the actual size is defined using batch_size and length. (The original issue includes screenshots of the model here.)

When I use onnxruntime-rs to load the model, the inputs I get are:

Inputs:
  0:
    name = input
    type = Int64
    dimensions = [None, None]
  1:
    name = past_1
    type = Float
    dimensions = [Some(2), None, Some(4), None, Some(96)]
  2:
    name = past_2
    type = Float
    dimensions = [Some(2), None, Some(4), None, Some(96)]
Outputs:
  0:
    name = output
    type = Float
    dimensions = [None, None, Some(40015)]
  1:
    name = out_past_1
    type = Float
    dimensions = [Some(2), None, Some(4), None, Some(96)]
  2:
    name = out_past_2
    type = Float
    dimensions = [Some(2), None, Some(4), None, Some(96)]

I cannot create a test input for my model because Rust panics here:

let input0_shape: Vec<usize> = session.inputs[0].dimensions().map(|d| d.unwrap()).collect();

I also tried to convert an array to the input_shape using let array = Array::linspace(0.0_f32, 1.0, 2*2*1*4*1*96*2*1*4*1*96 as usize).into_shape(([2], [2, 1, 4, 1, 96], [2, 1, 4, 1, 96]));, but I got:

the trait bound `([{integer}; 1], [{integer}; 5], [{integer}; 5]): onnxruntime::ndarray::Dimension` is not satisfied

the trait `onnxruntime::ndarray::Dimension` is not implemented for `([{integer}; 1], [{integer}; 5], [{integer}; 5])`

note: required because of the requirements on the impl of `onnxruntime::ndarray::IntoDimension` for `([{integer}; 1], [{integer}; 5], [{integer}; 5])`rustc(E0277)
main.rs(77, 93): the trait `onnxruntime::ndarray::Dimension` is not implemented for `([{integer}; 1], [{integer}; 5], [{integer}; 5])`

The question is, how can I build a test input for this model?

Codecov upload now fails

New PRs CI runs fail at the upload of coverage results to codecov.io. See for example https://github.com/nbigaouette/onnxruntime-rs/runs/1685546236

Run codecov/[email protected]
  with:
  env:
    CARGO_TERM_COLOR: always
    RUST_LOG: onnxruntime=debug,onnxruntime-sys=debug
    RUST_BACKTRACE: 1
/usr/bin/docker run --name cc495642841ce371274c3e8d6215e2fcfa38be_761d31 --label cc4956 --workdir /github/workspace --rm -e CARGO_TERM_COLOR -e RUST_LOG -e RUST_BACKTRACE -e INPUT_TOKEN -e INPUT_NAME -e INPUT_FILE -e INPUT_FLAGS -e HOME -e GITHUB_JOB -e GITHUB_REF -e GITHUB_SHA -e GITHUB_REPOSITORY -e GITHUB_REPOSITORY_OWNER -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_RETENTION_DAYS -e GITHUB_ACTOR -e GITHUB_WORKFLOW -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GITHUB_EVENT_NAME -e GITHUB_SERVER_URL -e GITHUB_API_URL -e GITHUB_GRAPHQL_URL -e GITHUB_WORKSPACE -e GITHUB_ACTION -e GITHUB_EVENT_PATH -e GITHUB_ACTION_REPOSITORY -e GITHUB_ACTION_REF -e GITHUB_PATH -e GITHUB_ENV -e RUNNER_OS -e RUNNER_TOOL_CACHE -e RUNNER_TEMP -e RUNNER_WORKSPACE -e ACTIONS_RUNTIME_URL -e ACTIONS_RUNTIME_TOKEN -e ACTIONS_CACHE_URL -e GITHUB_ACTIONS=true -e CI=true -v "/var/run/docker.sock":"/var/run/docker.sock" -v "/home/runner/work/_temp/_github_home":"/github/home" -v "/home/runner/work/_temp/_github_workflow":"/github/workflow" -v "/home/runner/work/_temp/_runner_file_commands":"/github/file_commands" -v "/home/runner/work/onnxruntime-rs/onnxruntime-rs":"/github/workspace" cc4956:42841ce371274c3e8d6215e2fcfa38be  "" "" "" ""
Please provide an upload token from codecov.io with valid arguments

This might be due to the outdated codecov/codecov-action (version 1.0.2).

libloading support

Hi! I'm trying to use bindgen with dynamic libloading for the ONNX shared library.
I want to be able to load and unload the ONNX shared library at runtime.
My current work is here: chertov@b94870f

Can we check the ONNX shared library's features before loading it?
For example, we could check for the existence of symbols like these:

OrtSessionOptionsAppendExecutionProvider_CoreML
OrtSessionOptionsAppendExecutionProvider_CUDA
OrtSessionOptionsAppendExecutionProvider_Nnapi
etc..

in the ONNX Runtime shared library, and wrap this information with methods like "is_cuda_support()".

Dynamic output types

In practice I'm encountering models with different types on different outputs. As an example of the problem, a trivial TensorFlow model that takes string input and returns the unique elements of the input tensor produces this ONNX structure:

Inputs:
  0:
    name = input_1:0
    type = String
    dimensions = [None]
Outputs:
  0:
    name = Identity:0
    type = Int32
    dimensions = [None]
  1:
    name = Identity_1:0
    type = String
    dimensions = [None]

The two outputs are of different types, so the current type structure for retrieving output that assumes one type for all outputs won't work.

One way to go about it would be to add a trait like the equivalent on the input side:

pub trait OwnedTensorDataToType: Sized {
    /// The tensor element type that this type can extract from
    fn tensor_element_data_type() -> TensorElementDataType;

    /// Extract an `ArrayView` from the ort-owned tensor.
    fn extract_array<'a, D: ndarray::Dimension>(
        shape: D,
        tensor: *mut sys::OrtValue,
    ) -> Result<ndarray::ArrayView<'a, Self, D>>;
}

We could provide implementations of that trait for all the common types that map to ONNX types, as on the input side. In the String case the data would have to be copied, as far as I can tell. Output could be in the form of some new dynamic tensor type that exposes, for each output, the TensorElementDataType so that the user can then use an appropriate type with an OwnedTensorDataToType that matches.

u128 is not FFI-safe

The bindgen built bindings include a u128 for which cargo complains:

warning: `extern` block uses type `u128`, which is not FFI-safe
     --> target/debug/build/onnxruntime-sys-07a3265211652563/out/bindings.rs:10435:10
      |
10435 |     ) -> u128;
      |          ^^^^ not FFI-safe
      |
      = note: `#[warn(improper_ctypes)]` on by default
      = note: 128-bit integers don't currently have a known stable ABI
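As a stopgap, this lint can be silenced where the generated bindings are included (a sketch; the OUT_DIR include path is bindgen's usual convention and is an assumption here, since this crate commits its bindings to the repository):

// In the module that pulls in the bindgen output:
#[allow(improper_ctypes)]
mod bindings {
    include!(concat!(env!("OUT_DIR"), "/bindings.rs"));
}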

[Windows] Running on `x86_64-pc-windows-msvc` fails with `0xc000007b`

Hi, and thanks for this project!

I am unable to run or test this on Windows with Rust 1.52, although building works fine:

➜ cargo +stable-x86_64-pc-windows-msvc run --example sample
    Blocking waiting for file lock on build directory
    Finished dev [unoptimized + debuginfo] target(s) in 6.03s
     Running `target\debug\examples\sample.exe`
error: process didn't exit successfully: `target\debug\examples\sample.exe` (exit code: 0xc000007b)
➜ cargo --version
cargo 1.52.0 (69767412a 2021-04-21)

This doesn't seem to be related to #43.

How to run inference with input of different dtypes?

Hello, I'm trying to run a model which has the following input:

println!("input format: {:?}", &session.inputs);

input format: [Input { name: "input_ids", input_type: Int64, dimensions: [None, None] }, Input { name: "past_0", input_type: Float, dimensions: [Some(2), None, Some(6), None, Some(64)] }]

I tried to build two arrays, with i64 and f32 dtypes, but I cannot put them in a vector that can be fed to session.run:

let input_array = ArrayD::<i64>::from_shape_vec(IxDyn(&[1, context_len]), ids).unwrap();
let past1_array = ArrayD::<f32>::from_shape_vec(IxDyn(&[2, 1, 6, 0, 64]), vec![]).unwrap();
session.run(vec![input_array, past1_array]);

It doesn't compile:

session.run(vec![input_array, past1_array]);
                              ^^^^^^^^^^^ expected `i64`, found `f32`

Does anyone have an idea about this? Thanks

Error while reading sklearn-onnx model

I am testing out the onnxruntime library. I have a very simple model built in Python using sklearn, and I save it in the ONNX format using the skl2onnx library.

python code:

from sklearn.model_selection import train_test_split
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from skl2onnx.common.data_types import FloatTensorType
from skl2onnx import convert_sklearn, get_model_alias

digits = load_digits()
x_train, x_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.2, random_state=2)
clf = LogisticRegression()
clf.fit(x_train, y_train)  # the model must be fitted before conversion
initial_type = [('input', FloatTensorType([None, 64]))]

onx = convert_sklearn(clf, initial_types=initial_type, target_opset=11)
with open("simple_model.onnx", "wb") as f:
    f.write(onx.SerializeToString())

and I read the model in Rust:

    let environment = environment::Environment::builder()
        .with_name("trial")
        .with_log_level(onnxruntime::LoggingLevel::Verbose)
        .build()?;

    let _session = environment
        .new_session_builder()?
        .with_optimization_level(onnxruntime::GraphOptimizationLevel::Basic)?
        .with_model_from_file("simple_model.onnx")?;

I get the following error while building the session in Rust:

thread 'main' panicked at 'assertion failed: `(left != right)`
  left: `0x0`,
 right: `0x0`', /home/vikram/.cargo/registry/src/github.com-1ecc6299db9ec823/onnxruntime-0.0.12/src/session.rs:690:9
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

What am I doing wrong? Any help is greatly appreciated!

Session is not tied by lifetime to the environment => easy segfaults

The crate is meant to be safe, but here's one super easy way to segfault it.

Instead of writing

let environment = Environment::builder()
    .build()?;
let mut session = environment
    .new_session_builder()?
    .with_model_from_file("squeezenet1.0-8.onnx")?;
// session.run()

write this

let mut session = Environment::builder()
    .build()?
    .new_session_builder()?
    .with_model_from_file("squeezenet1.0-8.onnx")?;
// session.run() <--- segfault

It's not immediately clear that you're doing something evil here in the safe code, and it compiles just fine - and then segfaults.

Fix memory leaks and double-frees in example

To get something working quick, the example is not careful about memory leaks and double-frees.

Memory leaks happen when, for example, a CString or Vec is constructed but then converted to a raw pointer using into_raw(). To properly free it, one has to convert it back into the Rust type and let its Drop free the memory (don't let C call free()). See https://doc.rust-lang.org/std/ffi/struct.CString.html#method.into_raw for reference.
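A minimal sketch of that pattern (not code from the example itself):

use std::ffi::CString;

fn main() {
    let name = CString::new("data_0").unwrap();
    let raw: *mut std::os::raw::c_char = name.into_raw(); // ownership leaves Rust; no Drop runs

    // ... hand `raw` to the C API here ...

    // Re-take ownership so Rust's Drop frees the allocation.
    // Calling C's free() on `raw` instead would mismatch allocators.
    let _name = unsafe { CString::from_raw(raw) };
}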

Double-frees happen when either C's free() is called on a dropped object or when drop is called on a freed object.

❯ cargo run --example c_api_sample
[...]
Score for class [0] =  0.000045440644
Score for class [1] =  0.0038458651
Score for class [2] =  0.00012494653
Score for class [3] =  0.0011804523
Score for class [4] =  0.0013169361
Done!
c_api_sample(28315,0x10e194dc0) malloc: *** error for object 0x7fafb10b8000: pointer being freed was not allocated
c_api_sample(28315,0x10e194dc0) malloc: *** set a breakpoint in malloc_error_break to debug

976cac7

Once this is fixed, the full API requirements should be visible.

how to separate the creation and running of a session

I'm curious about how to separate the creation and running of a session so that I can create a session in a function and then run it multiple times in another function to avoid repeating session creation like this:

use onnxruntime::{environment::Environment, session::Session, GraphOptimizationLevel, ndarray::{IxDyn, Array2}, tensor::OrtOwnedTensor};

pub struct Net<'a> {
    sess: Session<'a>,
}

type Error = Box<dyn std::error::Error>;

impl<'a> Net<'a> {
    pub fn new() -> Result<Net<'a>, Error> {
        let environment = Environment::builder().build()?;

        let session: Session<'a> = environment
            .new_session_builder()?
            .with_optimization_level(GraphOptimizationLevel::Basic)?
            .with_number_threads(8)?
            .with_model_from_file("model.onnx")?;

        Ok(Net {
            sess: session
        })
    }

    pub fn run(&mut self, array: Vec<Array2<f32>>) -> Vec<OrtOwnedTensor<f32, IxDyn>> {
        self.sess.run(array).unwrap()
    }
}
error[E0597]: `environment` does not live long enough
  --> crates/dmir_nn/src/beats.rs:13:36
   |
9  | impl<'a> Net<'a> {
   |      -- lifetime `'a` defined here
...
13 |         let session: Session<'a> = environment
   |                      -----------   ^^^^^^^^^^^ borrowed value does not live long enough
   |                      |
   |                      type annotation requires that `environment` is borrowed for `'a`
...
22 |     }
   |     - `environment` dropped here while still borrowed

I have tried to save the environment in Net too, but it also causes a self-referential problem.

Is there any way to solve such a problem?
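One general Rust workaround (a sketch using the standard library, not an API of this crate) is to give the environment a 'static lifetime by leaking it once at startup, so the session's borrow can outlive the constructing function:

// Leak the environment so it lives for the rest of the program;
// `environment` then satisfies any lifetime `'a` required by Session<'a>.
let environment: &'static Environment = Box::leak(Box::new(
    Environment::builder().build()?,
));

Whether this fits depends on the application, since the leaked environment is never dropped.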

CUDA support

I am interested in getting onnxruntime-rs running with CUDA-based inference. (I'm also interested in getting AMD MIGraphX inference working, but that's a whole other can of worms.)

Anyway in onnxruntime-rs/onnxruntime-sys/examples/c_api_sample.rs there is:

c_api_sample.rs:52:    // E.g. for CUDA include cuda_provider_factory.h and uncomment the following line:
c_api_sample.rs:53:    // OrtSessionOptionsAppendExecutionProvider_CUDA(sessionOptions, 0);

But uncommenting the line doesn't work, since the symbols OrtSessionOptionsAppendExecutionProvider_CUDA and sessionOptions are not available.

Also, CUDA generally doesn't seem to be working, since inference still runs on the CPU even though I compiled with:

ORT_LIB_LOCATION=/usr/local/
ORT_STRATEGY=system
ORT_USE_CUDA=1

with onnxruntime compiled with ./build.sh --use_cuda --cudnn_home /usr/ --cuda_home /opt/cuda/ --config RelWithDebInfo --parallel --build_shared_lib and installed in /usr/local.

RUSTSEC-2020-0071: Potential segfault in the time crate

Potential segfault in the time crate

Details
Package time
Version 0.1.44
URL time-rs/time#293
Date 2020-11-18
Patched versions >=0.2.23
Unaffected versions =0.2.0,=0.2.1,=0.2.2,=0.2.3,=0.2.4,=0.2.5,=0.2.6

Impact

Unix-like operating systems may segfault due to dereferencing a dangling pointer in specific circumstances. This requires an environment variable to be set in a different thread than the affected functions. This may occur without the user's knowledge, notably in a third-party library.

The affected functions from time 0.2.7 through 0.2.22 are:

  • time::UtcOffset::local_offset_at
  • time::UtcOffset::try_local_offset_at
  • time::UtcOffset::current_local_offset
  • time::UtcOffset::try_current_local_offset
  • time::OffsetDateTime::now_local
  • time::OffsetDateTime::try_now_local

The affected functions in time 0.1 (all versions) are:

  • at
  • at_utc

Non-Unix targets (including Windows and wasm) are unaffected.

Patches

Pending a proper fix, the internal method that determines the local offset has been modified to always return None on the affected operating systems. This has the effect of returning an Err on the try_* methods and UTC on the non-try_* methods.

Users and library authors with time in their dependency tree should perform cargo update, which will pull in the updated, unaffected code.

Users of time 0.1 do not have a patch and should upgrade to an unaffected version: time 0.2.23 or greater, or the 0.3 series.

Workarounds

No workarounds are known.

References

time-rs/time#293

See advisory page for additional details.
