
rust's Introduction

SIG Rust TensorFlow


TensorFlow Rust provides idiomatic Rust language bindings for TensorFlow.

Notice: This project is still under active development and not guaranteed to have a stable API.

Getting Started

Since this crate depends on the TensorFlow C API, it needs to be downloaded or compiled first. This crate will automatically download or compile the TensorFlow shared libraries for you, but it is also possible to manually install TensorFlow and the crate will pick it up accordingly.

Prerequisites

If the TensorFlow shared libraries can already be found on your system, they will be used. If your system is x86-64 Linux or Mac, a prebuilt binary will be downloaded, and no special prerequisites are needed.

Otherwise, the following dependencies are needed to compile and build this crate, which involves compiling TensorFlow itself:

  • git
  • bazel
  • Python dependencies: numpy, dev, pip, and wheel
  • Optionally, CUDA packages to support GPU-based processing

The TensorFlow website provides detailed instructions on how to obtain and install said dependencies, so if you are unsure please check out the docs for further details.

Some of the examples use TensorFlow code written in Python and require a full TensorFlow installation.

The minimum supported Rust version is 1.58.

Usage

Add this to your Cargo.toml:

[dependencies]
tensorflow = "0.21.0"

and this to your crate root:

extern crate tensorflow;

Then run cargo build -j 1. The tensorflow-sys crate's build.rs now either downloads a pre-built, basic CPU-only binary (the default) or compiles TensorFlow from source if forced to by an environment variable. If TensorFlow is compiled during this process, the full compilation is very memory intensive, so we recommend the -j 1 flag, which tells cargo to use only one job; this in turn tells TensorFlow to build with only one job. If you have a lot of RAM, you can of course use a higher value.

To include the especially unstable API (which is currently the expr module), use --features tensorflow_unstable.

For now, please see the Examples for more details on how to use this binding.
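As a quick sanity check that the crate builds and links on your machine, a minimal program along these lines should work (a sketch only; it just constructs a tensor and is not one of the bundled examples):

use tensorflow::Tensor;

fn main() {
    // Build a 1x3 tensor of f32 values; with_values returns an error if the
    // number of values does not match the shape, hence the unwrap.
    let t = Tensor::new(&[1, 3]).with_values(&[1.0f32, 2.0, 3.0]).unwrap();
    println!("{:?}", t);
}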

Tensor Max Display

When printing or debugging a tensor, every element is printed by default. This can be changed by setting an environment variable:

TF_RUST_DISPLAY_MAX=5

This truncates the displayed values when they exceed the limit:

let values: Vec<u64> = (0..100000).collect();
let t = Tensor::new(&[2, 50000]).with_values(&values).unwrap();
dbg!(t);

which prints:

t = Tensor<u64> {
    values: [
        [0, 1, 2, 3, 4, ...],
        ...
    ],
    dtype: uint64,
    shape: [2, 50000]
}

GPU Support

To enable GPU support, use the tensorflow_gpu feature in your Cargo.toml:

[dependencies]
tensorflow = { version = "0.21.0", features = ["tensorflow_gpu"] }

Manual TensorFlow Compilation

If you want to work against unreleased/unsupported TensorFlow versions or use a build optimized for your machine, manual compilation is the way to go.

See tensorflow-sys/README.md for details.

FAQs

Why does the compiler say that parts of the API don't exist?

The especially unstable parts of the API (which is currently the expr module) are feature-gated behind the feature tensorflow_unstable to prevent accidental use. See http://doc.crates.io/manifest.html#the-features-section. (We would prefer using an #[unstable] attribute, but that doesn't exist yet.)
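To enable it from a dependent crate, turn the feature on for the dependency in Cargo.toml, following the same pattern shown above for tensorflow_gpu:

[dependencies]
tensorflow = { version = "0.21.0", features = ["tensorflow_unstable"] }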

How do I...?

Try the documentation first, and see if it answers your question. If not, take a look at the examples folder. Note that there may not be an example for your exact question, but it may be answered by an example demonstrating something else.

If none of the above help, you can ask your question on the TensorFlow Rust Google Group.

Contributing

Developers and users are welcome to join the TensorFlow Rust Google Group.

Please read the contribution guidelines on how to contribute code.

This is not an official Google product.

RFCs are issues tagged with RFC. Check them out and comment. Discussion is welcome; after all, that is the purpose of a Request For Comments!

License

This project is licensed under the terms of the Apache 2.0 license.

rust's People

Contributors

ad-si, adamcrume, afloren, alanyee, bekker, brianjjones, danieldk, daschl, dskkato, ehsanmok, enet4, functor, iduartgomez, ivanukhov, jackos, kykosic, lmeerwood, masonk, mihaimaruseac, n-mca, nbigaouette-eai, ramon-garcia, regiontog, rschulman, sanmai-nl, siegelordex, siyavash, tako8ki, wisagan, xd009642


rust's Issues

Compile tensorflow on OSX with SSE/AVX/FMA ?

Hi,

Since the 1.0 upgrade, cargo test now spits out the following on my MBP:

W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.

I guess we could be more intelligent when compiling tensorflow to try and take advantage of it automatically?

Graph extension on `SessionWithGraph`?

Hi,

when switching over from the deprecated Session, one thing I'm missing is the extend_graph property. I have the feeling this should now be possible on the Graph struct/impl itself, but I checked the public API as well as the FFI one and couldn't find out how to do it.

Is there something I've missed or is it just not implemented yet?

Thanks!

Running -sys crate's test fails on nightly

Running tests for the tensorflow-sys crate fails on nightly, even though it works on stable.

This is using the pre-built binary, so libtensorflow.so is located in tensorflow-sys/target/libtensorflow-cpu-linux-x86_64-1.0.0/lib/libtensorflow.so.

(~/tensorflow_rust.git/tensorflow-sys)
 -> cargo +nightly test -vv
       Fresh lazy_static v0.2.4
       Fresh gcc v0.3.43
       Fresh openssl-probe v0.1.0
       Fresh regex-syntax v0.3.9
       Fresh libc v0.2.21
       Fresh filetime v0.1.10
       Fresh memchr v0.1.11
       Fresh aho-corasick v0.5.3
       Fresh utf8-ranges v0.1.3
       Fresh winapi-build v0.1.1
       Fresh pkg-config v0.3.9
       Fresh winapi v0.2.8
       Fresh kernel32-sys v0.2.2
       Fresh thread-id v2.0.0
       Fresh thread_local v0.2.7
       Fresh regex v0.1.80
       Fresh xattr v0.1.11
       Fresh tar v0.4.10
       Fresh miniz-sys v0.1.9
       Fresh flate2 v0.2.17
       Fresh libz-sys v1.0.13
       Fresh openssl-sys v0.9.8
       Fresh curl-sys v0.3.10
       Fresh semver-parser v0.6.2
       Fresh semver v0.5.1
       Fresh curl v0.4.6
       Fresh tensorflow-sys v0.7.0 (file:///home/nbigaouette/tensorflow_rust.git/tensorflow-sys)
    Finished dev [unoptimized + debuginfo] target(s) in 0.1 secs
     Running `/home/nbigaouette/tensorflow_rust.git/target/debug/deps/lib-b2190e536168b674`
/home/nbigaouette/tensorflow_rust.git/target/debug/deps/lib-b2190e536168b674: error while loading shared libraries: libtensorflow.so: cannot open shared object file: No such file or directory
error: test failed

Running the test passes using the stable 1.16:

(~/tensorflow_rust.git/tensorflow-sys)
 -> cargo test -vv
       Fresh gcc v0.3.43
       Fresh regex-syntax v0.3.9
       Fresh winapi-build v0.1.1
       Fresh utf8-ranges v0.1.3
       Fresh pkg-config v0.3.9
       Fresh libc v0.2.21
       Fresh openssl-probe v0.1.0
       Fresh xattr v0.1.11
       Fresh miniz-sys v0.1.9
       Fresh filetime v0.1.10
       Fresh memchr v0.1.11
       Fresh tar v0.4.10
       Fresh libz-sys v1.0.13
       Fresh flate2 v0.2.17
       Fresh winapi v0.2.8
       Fresh lazy_static v0.2.4
       Fresh kernel32-sys v0.2.2
       Fresh aho-corasick v0.5.3
       Fresh thread-id v2.0.0
       Fresh openssl-sys v0.9.8
       Fresh thread_local v0.2.7
       Fresh regex v0.1.80
       Fresh semver-parser v0.6.2
       Fresh semver v0.5.1
       Fresh curl-sys v0.3.10
       Fresh curl v0.4.6
       Fresh tensorflow-sys v0.7.0 (file:///home/nbigaouette/tensorflow_rust.git/tensorflow-sys)
    Finished dev [unoptimized + debuginfo] target(s) in 0.0 secs
     Running `/home/nbigaouette/tensorflow_rust.git/target/debug/deps/lib-b2190e536168b674`

running 1 test
test linkage ... ok

test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured

     Running `/home/nbigaouette/tensorflow_rust.git/target/debug/deps/tensorflow_sys-dde201bfdb2eb731`

running 4 tests
test bindgen_test_layout_TF_Buffer ... ok
test bindgen_test_layout_TF_AttrMetadata ... ok
test bindgen_test_layout_TF_Output ... ok
test bindgen_test_layout_TF_Input ... ok

test result: ok. 4 passed; 0 failed; 0 ignored; 0 measured

   Doc-tests tensorflow-sys
     Running `rustdoc --test /home/nbigaouette/tensorflow_rust.git/tensorflow-sys/src/lib.rs --crate-name tensorflow_sys -L dependency=/home/nbigaouette/tensorflow_rust.git/target/
debug/deps -L native=/home/nbigaouette/tensorflow_rust.git/target/debug/build/miniz-sys-18005000ddedadf4/out -L native=/home/nbigaouette/tensorflow_rust.git/tensorflow-sys/target/l
ibtensorflow-cpu-linux-x86_64-1.0.0/lib -L native=/usr/lib --extern libc=/home/nbigaouette/tensorflow_rust.git/target/debug/deps/liblibc-5dc7b85e748840b4.rlib --extern tensorflow_sys=/home/nbigaou
ette/tensorflow_rust.git/target/debug/deps/libtensorflow_sys-abb7df69fd313205.rlib`

running 0 tests

test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured

Could this be related to rust-lang/cargo#2478? In that case, using cargo:rustc-link-search=native={} instead of cargo:rustc-link-search={} should fix this. Unfortunately, it doesn't.

I'm not sure how to fix this...

This is on master (8bd62df).

feature request: add convenience method for saving session variables

I haven't been able to find an approved library method for saving session variables.

While this isn't strictly required to use this package, convenience classes like TensorFlow's Python tf.train.Saver are useful, because anyone using this package to mutate session state is likely going to need a way to record it.

Set up an automated valgrind test

Not usually necessary for Rust, but since we're doing a lot of unsafe stuff with pointers, we should have a valgrind test. Ideally, Travis would run it and fail the build if there are issues.

Installation help?

This is perhaps one of those questions where I'm going to smack my forehead at the answer, but...
I've built tensorflow/rust on my Linux computer and I now have a .rlib file, along with, presumably, the tensorflow .so file, but that's where the installation instructions stop. What are the next steps for getting those files where they need to be to use them in my own project? How do I tell cargo to use it?

Thanks so much!

Consider moving to bindgen for tensorflow-sys

Hi,

I've used bindgen with great success for other C bindings I maintain (couchbase-sys), and I was wondering if it could make sense to migrate tensorflow-sys to it as well (https://crates.io/crates/bindgen). It would allow us, with very little effort, to pull in the C headers, generate the low-level bindings, and stay up to date.

Here is basically all it takes to generate the API from a header file: https://github.com/couchbaselabs/couchbase-rs/blob/master/couchbase-sys/build.rs#L31

If that sounds like a good path to investigate I could take it on and see if I can make it work.
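For a rough idea of what the migration would involve, a build.rs along these lines is roughly all that is needed (a sketch assuming a bindgen build-dependency and a wrapper.h that includes tensorflow/c/c_api.h; this is not the current tensorflow-sys build script):

// build.rs (sketch)
use std::{env, path::PathBuf};

fn main() {
    // Generate low-level bindings from the C header at build time.
    let bindings = bindgen::Builder::default()
        .header("wrapper.h")
        .generate()
        .expect("unable to generate TensorFlow C API bindings");

    // Write them into OUT_DIR so they can be include!()d from src/lib.rs.
    let out_path = PathBuf::from(env::var("OUT_DIR").unwrap());
    bindings
        .write_to_file(out_path.join("bindings.rs"))
        .expect("couldn't write bindings");
}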

Support Strings as tensor elements

Hi,

here is my simple python-based model:

import tensorflow as tf

hello = tf.constant('Hello, World!', name = 'hello')

tf.initialize_variables(tf.all_variables(), name = 'init')
definition = tf.Session().graph_def
tf.train.write_graph(definition, 'models', 'hello_world.pb', as_text=False)

I've been trying to print Hello, World from Rust, but I think I saw that the result for String is not yet implemented (https://github.com/google/tensorflow-rust/blob/master/src/lib.rs#L754):

extern crate tensorflow;

use tensorflow::{Session, SessionOptions, Step, DataType};
use std::fs::File;
use std::io::Read;


fn main() {
    let filename = "examples/models/hello_world.pb";

    // Initialize Session and load the model.
    let mut session = Session::new(&SessionOptions::new()).unwrap();
    let mut model = Vec::new();
    File::open(filename).unwrap().read_to_end(&mut model);
    session.extend_graph(&model);

    let mut output_step = Step::new();
    let output = output_step.request_output("hello").unwrap();
    session.run(&mut output_step);

    // TODO: String doesn't work yet.
    let result: String = output_step.take_output(output).unwrap().data()[0];
    println!("{:?}", result);
}

Is there any way to make it work for strings right now with a workaround? If not, I'm looking forward to when it's possible :)

Thanks!

RFC: RFC process

Overview

We need a way to discuss design decisions, meta-issues, and other things that people are likely to have differing opinions on. RFCs are not for feature requests or bikeshedding. They're for situations where a difference of opinion exists on major issues.

Process

  1. Someone opens a GitHub issue with a title starting with "Proposed RFC:". The proposal, any alternatives, and arguments for/against are laid out.
  2. If it's worth discussing, I'll tag it with RFC and remove the word "Proposed". A date for the end of the comment period is given.
  3. The RFC stays open until the end of the comment period, although the date is not fixed and may be extended if there is good reason. Commenters may propose additional suggestions, in addition to providing arguments for/against existing suggestions.
  4. I'll comment on the issue with a decision and close the issue.

Alternatives

No RFC process

I'd like to make sure everyone's voice is heard, so there should probably be some sort of process. Having a process also means that the rationale for decisions can be looked up in the future, and it's clear whether a decision has been made or not.

RFCs in the wiki

That might be too unstructured. Anyone could edit anything, rather than using a nice comment system. I wouldn't have a way to tag things as RFCs that other people couldn't also do.

Discussion on IRC

Simple and informal, but not everyone uses IRC. Not everyone is online all the time, either. It also doesn't keep discussions about separate RFCs separate. It's also not possible to do things like search for open RFCs.

A more formal process, like the one for the Rust language

We probably don't need that much overhead.

Questions

  • What should the default comment period be? It depends on the issue, of course, but a good default might be two weeks.
  • Should we bother with RFC numbering? How should they be numbered? We could simply use the issue number, if we use the issue tracker for RFCs. The numbers won't be contiguous, but maybe that doesn't matter.

Meta

This will be closed on July 30, 2016 unless extended.

No C API in TensorFlow v0.10

I might be misunderstanding what’s happening, but it seems that one won’t be able to upgrade TensorFlow to version 0.10 as the C API somehow has become unusable and, hence, will not be easily accessible in the upcoming release. I’ve tried to play a bit with the new API functions when they were still being compiled and included in libtensorflow.so, and it seemed to be working nicely. Perhaps they think that this functionality is still raw, and that’s why they don’t want to expose it just yet. The bottom line is that the crate won’t have the graph-construction functionality in the nearest future. I’m wondering if you have more information on this. Thanks!

Regards,
Ivan

Set up mailing list/discussion forum

The IRC channel gets so little traffic that when someone pops in to ask a question, there's often no immediate answer, and they quickly give up and leave. A mailing list would probably help.

Update docs for new renderer

Compiling yields the warning:

WARNING: documentation for this crate may be rendered differently using the new Pulldown renderer.
    See https://github.com/rust-lang/rust/issues/44229 for details.

We need to update the docs to fix any rendering issues.

libc::memalign not supported on OSX

On OSX, libc doesn't support memalign. It does, however, support posix_memalign, but changing the code to use that instead results in a segfault. I probably just don't know how to use it.
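For reference, here is a minimal stand-alone sketch of calling posix_memalign through the libc crate (a hypothetical helper, not the crate's actual buffer code); the usual source of crashes is forgetting that the allocation comes back through the out-pointer while the return value only signals errors:

use std::ptr;

use libc::{c_void, free, posix_memalign};

// Allocate `size` bytes aligned to `align`; `align` must be a power of two
// and a multiple of size_of::<*mut c_void>(), or posix_memalign reports EINVAL.
unsafe fn alloc_aligned(align: usize, size: usize) -> *mut c_void {
    let mut p: *mut c_void = ptr::null_mut();
    // A non-zero return code means the allocation failed; `p` is only valid on success.
    if posix_memalign(&mut p, align, size) != 0 {
        return ptr::null_mut();
    }
    p
}

fn main() {
    unsafe {
        let p = alloc_aligned(64, 1024);
        assert!(!p.is_null());
        free(p);
    }
}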

Rust kernels

Are there any plans to make it possible to create kernels in Rust, at least with limited functionality?

TF_AllocateTensor in tensorflow 0.11

It might be possible to let tensorflow manage tensor memory directly with the newly added TF_AllocateTensor, instead of handling this with the buffer struct.

Add primitive From trait impls to Tensor?

Hi,

Would it make sense to add From trait impls for primitives so that you could do Tensor::from(i32) to create a one-dimensional tensor with one element? I could imagine this coming in handy when you need to do simple math operations or other tasks where you need to fill in simple placeholders.
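For illustration, a rough sketch of what such an impl inside the crate could look like (assuming Tensor::new and index assignment as used elsewhere in this repository; the actual design might prefer a zero-dimensional tensor instead):

impl From<i32> for Tensor<i32> {
    fn from(value: i32) -> Self {
        // A one-dimensional tensor holding a single element.
        let mut t = Tensor::new(&[1]);
        t[0] = value;
        t
    }
}

Usage would then be as simple as let t: Tensor<i32> = 42.into();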

Could not find `expr` in `tensorflow`

Hi!
I'm trying to run an example from examples/expressions.rs. When I run it, I get the following error:

error[E0432]: unresolved import tensorflow::expr::Compiler
--> src/main.rs:8:24
|
8 | use tensorflow::expr::{Compiler, Placeholder};
| ^^^^^^^^ Could not find expr in tensorflow

error[E0432]: unresolved import tensorflow::expr::Placeholder
--> src/main.rs:8:34
|
8 | use tensorflow::expr::{Compiler, Placeholder};
| ^^^^^^^^^^^ Could not find expr in tensorflow

My Cargo.toml

[package]  
name = "tflow_test"
version = "0.1.0"
authors = ["Zhiltsov Dmitriy <[email protected]>"]
[dependencies]
# tensorflow = { git = "https://github.com/tensorflow/rust" }
tensorflow = {version = "0.4.0"}
random = "*"
[features]
tensorflow_unstable = []

I see that the module expr is declared as public (https://github.com/tensorflow/rust/blob/master/src/lib.rs#L166).
Can you help with this problem?

Can't build on bash windows

Here is the output

$ cargo build -j 1
    Updating registry `https://github.com/rust-lang/crates.io-index`
   Compiling libc v0.2.17
   Compiling pkg-config v0.3.8
   Compiling tensorflow-sys v0.5.0 (file:///mnt/c/Users/tyoc/test/nada/rust/tensorflow-sys)
error: failed to run custom build command for `tensorflow-sys v0.5.0 (file:///mnt/c/Users/tyoc/test/nada/rust/tensorflow-sys)`
process didn't exit successfully: `/mnt/c/Users/tyoc/test/nada/rust/target/debug/build/tensorflow-sys-5a768d9e18f318a5/build-script-build` (exit code: 101)
--- stdout
libtensorflow-sys/build.rs:31: output = "/mnt/c/Users/tyoc/test/nada/rust/target/debug/build/tensorflow-sys-5a768d9e18f318a5/out"
libtensorflow-sys/build.rs:33: source = "/mnt/c/Users/tyoc/test/nada/rust/tensorflow-sys/target/source-0.10.0"
libtensorflow-sys/build.rs:35: lib_dir = "/mnt/c/Users/tyoc/test/nada/rust/target/debug/build/tensorflow-sys-5a768d9e18f318a5/out/lib-0.10.0"
libtensorflow-sys/build.rs:39: Creating directory "/mnt/c/Users/tyoc/test/nada/rust/target/debug/build/tensorflow-sys-5a768d9e18f318a5/out/lib-0.10.0"
libtensorflow-sys/build.rs:43: library_path = "/mnt/c/Users/tyoc/test/nada/rust/target/debug/build/tensorflow-sys-5a768d9e18f318a5/out/lib-0.10.0/libtensorflow_c.so"
libtensorflow-sys/build.rs:48: target_path = "tensorflow/libtensorflow_c.so"
libtensorflow-sys/build.rs:102: Executing "git" "clone" "--branch=v0.10.0" "--recursive" "https://github.com/tensorflow/tensorflow.git" "/mnt/c/Users/tyoc/test/nada/rust/tensorflow-sys/target/source-0.10.0"
libtensorflow-sys/build.rs:106: Command "git" "clone" "--branch=v0.10.0" "--recursive" "https://github.com/tensorflow/tensorflow.git" "/mnt/c/Users/tyoc/test/nada/rust/tensorflow-sys/target/source-0.10.0" finished successfully
libtensorflow-sys/build.rs:77: Checking build file "/mnt/c/Users/tyoc/test/nada/rust/tensorflow-sys/target/source-0.10.0/tensorflow/BUILD"
libtensorflow-sys/build.rs:84: Patching build file "/mnt/c/Users/tyoc/test/nada/rust/tensorflow-sys/target/source-0.10.0/tensorflow/BUILD"
libtensorflow-sys/build.rs:102: Executing "./configure"
No Google Cloud Platform support will be enabled for TensorFlow
No GPU support will be enabled for TensorFlow
Configuration finished
libtensorflow-sys/build.rs:106: Command "./configure" finished successfully
libtensorflow-sys/build.rs:102: Executing "bazel" "build" "--jobs=1" "--compilation_mode=opt" "tensorflow:libtensorflow_c.so"

--- stderr
Clonar en «/mnt/c/Users/tyoc/test/nada/rust/tensorflow-sys/target/source-0.10.0»...
Note: checking out 'c715c3102df1556fc0ce88fc987440a3c80e5380'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b new_branch_name

Checking out files: 100% (4084/4084), done.
Can't do inplace edit on tensorflow/core/platform/default/build_config.bzl: No existe el archivo o el directorio.
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Error { repr: Os { code: 2, message: "No such file or directory" } }', ../src/libcore/result.rs:788
note: Run with `RUST_BACKTRACE=1` for a backtrace.

Here is the backtrace (even though the error is already pretty clear):

--- stderr
Can't open tensorflow/core/platform/default/build_config.bzl: No existe el archivo o el directorio.
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Error { repr: Os { code: 2, message: "No such file or directory" } }', ../src/libcore/result.rs:788
stack backtrace:
   1:     0x7fe71b839739 - std::sys::backtrace::tracing::imp::write::hd4b54a4a2078cb15
   2:     0x7fe71b8403ec - std::panicking::default_hook::_{{closure}}::h51a5ee7ba6a9fcef
   3:     0x7fe71b83f6e9 - std::panicking::default_hook::hf823fce261e27590
   4:     0x7fe71b83fd28 - std::panicking::rust_panic_with_hook::h8d486474663979b9
   5:     0x7fe71b83fb82 - std::panicking::begin_panic::h72862f004a4942ab
   6:     0x7fe71b83faf0 - std::panicking::begin_panic_fmt::hdc424a357d9142e1
   7:     0x7fe71b83fa71 - rust_begin_unwind
   8:     0x7fe71b8762cf - core::panicking::panic_fmt::h6b06f78ae7f9dd57
   9:     0x7fe71b80fb9c - core::result::unwrap_failed::hca88b4a09ab2a5f1
                        at /buildslave/rust-buildbot/slave/stable-dist-rustc-linux/build/obj/../src/libcore/result.rs:29
  10:     0x7fe71b80c005 - _<core..result..Result<T, E>>::unwrap::h59f218bad74f813b
                        at /buildslave/rust-buildbot/slave/stable-dist-rustc-linux/build/obj/../src/libcore/result.rs:726
  11:     0x7fe71b816545 - build_script_build::run::h1436e6466b486231
                        at /mnt/c/Users/tyoc/test/nada/rust/tensorflow-sys/build.rs:17
  12:     0x7fe71b815249 - build_script_build::main::h84db52058401dfc4
                        at /mnt/c/Users/tyoc/test/nada/rust/tensorflow-sys/build.rs:58
  13:     0x7fe71b847ed6 - __rust_maybe_catch_panic
  14:     0x7fe71b83ee62 - std::rt::lang_start::hca48e539ce72a288
  15:     0x7fe71b817603 - main
  16:     0x7fe71aa01f44 - __libc_start_main
  17:     0x7fe71b8072d8 - <unknown>
  18:                0x0 - <unknown>

Also, I would prefer the repo be named something like tensorflow-rust, so that when you clone it with the GitHub app it doesn't overwrite the rust repo :).

Tensorflow build broken for Tensorflow r0.12 with Bazel 0.4.4.

The head release of Rust-TensorFlow links to release r0.12 of TensorFlow, which fails to build with Bazel 0.4.4. This results in the Rust-TensorFlow crate build failing. The TensorFlow build tries to fetch broken Android models during the ./configure step; this configure step fails, and the whole TensorFlow build fails.

See explanation here:
bazelbuild/bazel#2478

The same problem can be reproduced by just checking out release r0.12 of TensorFlow and trying to build it using Bazel 0.4.4.

Linking to a newer release of TensorFlow is likely to solve the problem, as manually building TensorFlow r1.0.0 didn't have the same issue.

Crate build output (tail):

____Loading package: @inception5h//
ERROR: /home/tomi/.cache/bazel/_bazel_tomi/513f25fe4a2103b04ca570fb64b12a11/external/bazel_tools/src/tools/android/java/com/google/devtools/build/android/dexer/BUILD:3:1: no such target '//external:android/dx_jar_import': target 'android/dx_jar_import' not declared in package 'external' defined by /home/tomi/projects/rust/rust-tensorflow/tensorflow-sys/target/source-v0.12.0/WORKSPACE and referenced by '@bazel_tools//src/tools/android/java/com/google/devtools/build/android/dexer:dexer'.
ERROR: Evaluation of query "deps(//tensorflow/...)" failed: errors were encountered while computing transitive closure.
thread 'main' panicked at 'failed to execute "bash" "-c" "yes \'\'|./configure"', tensorflow-sys/build.rs:105
stack backtrace:
   1:     0x55d015f5129a - std::sys::imp::backtrace::tracing::imp::write::h3188f035833a2635
                        at /buildslave/rust-buildbot/slave/stable-dist-rustc-linux/build/src/libstd/sys/unix/backtrace/tracing/gcc_s.rs:42
   2:     0x55d015f56cef - std::panicking::default_hook::{{closure}}::h6385b6959a2dd25b
                        at /buildslave/rust-buildbot/slave/stable-dist-rustc-linux/build/src/libstd/panicking.rs:349
   3:     0x55d015f568ee - std::panicking::default_hook::he4f3b61755d7fa95
                        at /buildslave/rust-buildbot/slave/stable-dist-rustc-linux/build/src/libstd/panicking.rs:365
   4:     0x55d015f57137 - std::panicking::rust_panic_with_hook::hf00b8130f73095ec
                        at /buildslave/rust-buildbot/slave/stable-dist-rustc-linux/build/src/libstd/panicking.rs:553
   5:     0x55d015f56f74 - std::panicking::begin_panic::h6227f62cb2cdaeb4
                        at /buildslave/rust-buildbot/slave/stable-dist-rustc-linux/build/src/libstd/panicking.rs:515
   6:     0x55d015f56ee9 - std::panicking::begin_panic_fmt::h173eadd80ae64bec
                        at /buildslave/rust-buildbot/slave/stable-dist-rustc-linux/build/src/libstd/panicking.rs:499
   7:     0x55d015df4e9e - build_script_build::run::ha59c337a62190002
                        at /home/tomi/projects/rust/rust-tensorflow/tensorflow-sys/build.rs:105
   8:     0x55d015df36b0 - build_script_build::main::h87abe698382f23c6
                        at /home/tomi/projects/rust/rust-tensorflow/tensorflow-sys/build.rs:78
   9:     0x55d015f5e04a - __rust_maybe_catch_panic
                        at /buildslave/rust-buildbot/slave/stable-dist-rustc-linux/build/src/libpanic_unwind/lib.rs:98
  10:     0x55d015f57876 - std::rt::lang_start::h65647f6e36cffdae
                        at /buildslave/rust-buildbot/slave/stable-dist-rustc-linux/build/src/libstd/panicking.rs:434
                        at /buildslave/rust-buildbot/slave/stable-dist-rustc-linux/build/src/libstd/panic.rs:351
                        at /buildslave/rust-buildbot/slave/stable-dist-rustc-linux/build/src/libstd/rt.rs:57
  11:     0x55d015df6aa2 - main
  12:     0x7f52f9b8a3f0 - __libc_start_main
  13:     0x55d015de3689 - _start
  14:                0x0 - <unknown>

How to deal with negative shape dimensions

I am trying to use a model generated from Keras.
I have converted the .h5 to a .pb graph using this script, and I am able to load it in Rust.
But whenever I try to feed it, I get

InvalidArgument: You must feed a value for placeholder tensor 'conv2d_13_input' with dtype float and shape [?,8,8,1]

I try to feed a Tensor created with

let mut t: Tensor<f32> = Tensor::new(&[1, 8, 8, 1]);

I guess the problem comes from the model definition, which starts with

node {
  name: "conv2d_13_input"
  op: "Placeholder"
  attr {
    key: "dtype"
    value {
      type: DT_FLOAT
    }
  }
  attr {
    key: "shape"
    value {
      shape {
        dim {
          size: -1
        }
        dim {
          size: 8
        }
        dim {
          size: 8
        }
        dim {
          size: 1
        }
      }
    }
  }
}

It seems that the C++ TensorShape accepts i64, but the Rust constructor takes u64.

Is it a limitation of the Rust binding, or is there a workaround?

Thanks,
Maxime.

Use rustfmt

Everything should be formatted with rustfmt. Obviously, this will introduce a large diff.

Add Graph::to_function

I've been having trouble implementing it because of deadlock happening inside the TensorFlow C++ library. It's some sort of Heisenbug that disappears when I try to run it through Valgrind (to make sure I'm not passing uninitialized memory to the C API), and gdb isn't helpful because I can't seem to build the C API with proper debug info.

Possibility to Save/Restore Checkpoints

Hi,

one thing I need to do is train a model, either once or online, and then store it at regular intervals, so that when the server goes down I can query my trained model again and don't need to hold everything in memory after training.

I don't think this is exposed in Rust yet, right? Does the C API already support it?

Tests for tensorflow-sys fail

Running cargo test in the tensorflow-sys directory fails (but tests pass for the main crate).

Here's the output:

 -> cd ~/tensorflow_rust.git/tensorflow-sys
 -> cargo test
    Finished debug [unoptimized + debuginfo] target(s) in 0.0 secs
     Running target/debug/deps/tensorflow-9f5d1dac7e952430

running 12 tests
test buffer::tests::basic ... ok
test session::tests::smoke ... ok
test session::tests::test_close ... ok
test graph::tests::smoke ... ok
test tests::smoke ... ok
test tests::test_close ... ok
test tests::test_extend_graph ... ok
test session::tests::test_run ... ok
test tests::test_set_config ... ok
test tests::test_set_target ... ok
test tests::test_tensor ... ok
test tests::test_run ... ok

test result: ok. 12 passed; 0 failed; 0 ignored; 0 measured

   Doc-tests tensorflow

running 0 tests

test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured

 -> cd tensorflow-sys/
 -> cargo test --verbose
       Fresh lazy_static v0.2.4
       Fresh regex-syntax v0.3.9
       Fresh pkg-config v0.3.9
       Fresh utf8-ranges v0.1.3
       Fresh libc v0.2.21
       Fresh winapi v0.2.8
       Fresh winapi-build v0.1.1
       Fresh memchr v0.1.11
       Fresh aho-corasick v0.5.3
       Fresh kernel32-sys v0.2.2
       Fresh thread-id v2.0.0
       Fresh thread_local v0.2.7
       Fresh regex v0.1.80
       Fresh semver-parser v0.6.2
       Fresh semver v0.5.1
       Fresh tensorflow-sys v0.7.0 (file:///home/nbigaouette/tensorflow_rust.git/tensorflow-sys)
    Finished debug [unoptimized + debuginfo] target(s) in 0.0 secs
     Running `/home/nbigaouette/tensorflow_rust.git/target/debug/deps/lib-a517463cab98ea9f`

running 1 test
test linkage ... ok

test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured

     Running `/home/nbigaouette/tensorflow_rust.git/target/debug/deps/tensorflow_sys-b18d6c19e08d67bd`
terminate called without an active exception
error: process didn't exit successfully: `/home/nbigaouette/tensorflow_rust.git/target/debug/deps/tensorflow_sys-b18d6c19e08d67bd` (signal: 6, SIGABRT: process abort signal)

Caused by:
  process didn't exit successfully: `/home/nbigaouette/tensorflow_rust.git/target/debug/deps/tensorflow_sys-b18d6c19e08d67bd` (signal: 6, SIGABRT: process abort signal)

Here's a backtrace from running /home/nbigaouette/tensorflow_rust.git/target/debug/deps/tensorflow_sys-b18d6c19e08d67bd through gdb:

Program received signal SIGABRT, Aborted.
0x00007ffff32f904f in raise () from /usr/lib/libc.so.6
(gdb) bt
#0  0x00007ffff32f904f in raise () from /usr/lib/libc.so.6
#1  0x00007ffff32fa47a in abort () from /usr/lib/libc.so.6
#2  0x00007ffff2ccb4ed in __gnu_cxx::__verbose_terminate_handler () at /build/gcc/src/gcc/libstdc++-v3/libsupc++/vterminate.cc:95
#3  0x00007ffff2cc92a6 in __cxxabiv1::__terminate (handler=<optimized out>) at /build/gcc/src/gcc/libstdc++-v3/libsupc++/eh_terminate.cc:47
#4  0x00007ffff2cc92f1 in std::terminate () at /build/gcc/src/gcc/libstdc++-v3/libsupc++/eh_terminate.cc:57
#5  0x00007ffff2cc8062 in __cxxabiv1::__cxa_allocate_exception (thrown_size=136, thrown_size@entry=8) at /build/gcc/src/gcc/libstdc++-v3/libsupc++/eh_alloc.cc:278
#6  0x00007ffff2cc9a98 in operator new (sz=32) at /build/gcc/src/gcc/libstdc++-v3/libsupc++/new_op.cc:54
#7  0x00007ffff57ab4c8 in tensorflow::monitoring::Counter<2>* tensorflow::monitoring::Counter<2>::New<char const (&) [46], char const (&) [58], char const (&) [11], char const (&) [7]>(char const (&) [46], char c
onst (&) [58], char const (&) [11], char const (&) [7]) () from /usr/lib/libtensorflow.so
#8  0x00007ffff571a50b in ?? () from /usr/lib/libtensorflow.so
#9  0x00007ffff7de94fa in call_init.part () from /lib64/ld-linux-x86-64.so.2
#10 0x00007ffff7de960b in _dl_init () from /lib64/ld-linux-x86-64.so.2
#11 0x00007ffff7ddadaa in _dl_start_user () from /lib64/ld-linux-x86-64.so.2
#12 0x0000000000000001 in ?? ()
#13 0x00007fffffffe846 in ?? ()
#14 0x0000000000000000 in ?? ()

Merge with tensorflux

The tensorflow and tensorflux crates strive toward the same goal, and it would be great to unite the efforts. I thought we could discuss this in this issue.

I propose looking at the APIs and implementations of the two crates in order to see what could be done. Currently, tensorflux offers the workflow shown below; the code can be found and run in tensorflux’s repository. It’s the example given on the introduction page of TensorFlow.

use tensorflux::{Buffer, Input, Options, Output, Session, Target, Tensor};

macro_rules! ok(($result:expr) => ($result.unwrap()));

let (w, b, n, steps) = (0.1, 0.3, 100, 201);
let (x, y) = generate(w, b, n);

let graph = "regression.pb"; // y = w * x + b
let mut session = ok!(Session::new(&ok!(Options::new())));
ok!(session.extend(&ok!(Buffer::load(graph))));

let inputs = vec![
    Input::new("x", ok!(Tensor::new(x, &[n]))),
    Input::new("y", ok!(Tensor::new(y, &[n]))),
];
let targets = vec![Target::new("init")];
ok!(session.run(&inputs, &mut [], &targets, None, None));

let targets = vec![Target::new("train")];
for _ in 0..steps {
    ok!(session.run(&inputs, &mut [], &targets, None, None));
}

let mut outputs = vec![Output::new("w"), Output::new("b")];
ok!(session.run(&[], &mut outputs, &[], None, None));

let w_hat = ok!(outputs[0].get::<f32>())[0];
let b_hat = ok!(outputs[1].get::<f32>())[0];

assert!((w_hat - w).abs() < 1e-3);
assert!((b_hat - b).abs() < 1e-3);

Please let me know what you think. Thank you.

Regards,
Ivan

getting "image not found" when using with pre-installed tensorflow

macOS 10.12.6
I have /usr/local/lib/libtensorflow.dylib in place and use it with Haskell TensorFlow bindings without any problem.
(installed by: https://github.com/tensorflow/haskell/blob/master/tools/install_macos_dependencies.sh)

I built an example project without problem:

[labrax:projects/Rust/tftest] pbgc% cargo build
Updating registry https://github.com/rust-lang/crates.io-index
Downloading tensorflow v0.4.0
Downloading tensorflow-sys v0.8.0
Downloading tar v0.4.13
Downloading semver v0.5.1
Downloading curl v0.4.8
Downloading pkg-config v0.3.9
Downloading filetime v0.1.10
Downloading semver-parser v0.6.2
Downloading gcc v0.3.53
Downloading curl-sys v0.3.14
Downloading socket2 v0.2.2
Downloading libz-sys v1.0.16
Downloading xattr v0.1.11
Compiling num-traits v0.1.40
Compiling pkg-config v0.3.9
Compiling libc v0.2.29
Compiling lazy_static v0.2.8
Compiling gcc v0.3.53
Compiling cfg-if v0.1.2
Compiling regex-syntax v0.3.9
Compiling utf8-ranges v0.1.3
Compiling winapi v0.2.8
Compiling winapi-build v0.1.1
Compiling kernel32-sys v0.2.2
Compiling memchr v0.1.11
Compiling filetime v0.1.10
Compiling xattr v0.1.11
Compiling socket2 v0.2.2
Compiling aho-corasick v0.5.3
Compiling num-complex v0.1.40
Compiling tar v0.4.13
Compiling thread-id v2.0.0
Compiling thread_local v0.2.7
Compiling miniz-sys v0.1.9
Compiling libz-sys v1.0.16
Compiling curl-sys v0.3.14
Compiling curl v0.4.8
Compiling regex v0.1.80
Compiling flate2 v0.2.19
Compiling semver-parser v0.6.2
Compiling semver v0.5.1
Compiling tensorflow-sys v0.8.0
Compiling tensorflow v0.4.0
Compiling tftest v0.1.0 (file:///Users/pbgc/Development/projects/Rust/tftest)
Finished dev [unoptimized + debuginfo] target(s) in 22.32 secs

When I try to run it, I get:

[labrax:projects/Rust/tftest] pbgc% target/debug/tftest
dyld: Library not loaded: bazel-out/local-opt/bin/tensorflow/libtensorflow.so
Referenced from: /Users/pbgc/Development/projects/Rust/tftest/target/debug/tftest
Reason: image not found
Abort

Add support for `TF_WhileParams`.

TF_WhileParams and related functions are the only way to build while control-flow loops right now without messing with protobuf-defined ops.

Update build steps in README

Files have moved in TensorFlow recently, and the README needs to be updated. Also add that users may not need to manually build TensorFlow.

Cast between unsigned and signed integers in Tensor

Tensor::new_with_buffer casts *const usize to *const int64_t when feeding dimensions to TF_NewTensor. I just wanted to confirm that it's done intentionally (for performance reasons or otherwise), as it could lead to surprising results in some cases. Thank you.

A side note: in Python, one can write -1 for one of the dimensions so that TensorFlow figures out by itself what that dimension should be. I wonder whether it works the same way through the C API and, if it does, whether the crate would like to preserve this behavior.

Regards,
Ivan

Examples

Sorry for being late to the party. I'd love to add some working examples and create an examples/ directory. Sound good? Got any pointers before I get started?

Potential use-after-free in SessionOptions

Currently, SessionOptions::set_target looks as follows:

pub fn set_target(&mut self, target: &str) -> std::result::Result<(), NulError> {
  let cstr = try!(CString::new(target));
  unsafe {
    tf::TF_SetTarget(self.inner, cstr.as_ptr());
  }
  Ok(())
}

So cstr is dropped when set_target returns. However, TF_SetTarget doesn’t copy the string; it just stores the pointer in an internal structure:

void TF_SetTarget(TF_SessionOptions* options, const char* target) {
  options->options.target = target;
}

Regards,
Ivan
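One possible fix, sketched below under the assumption that SessionOptions can own the string (the field names here are illustrative, not the crate's actual layout), is to keep the CString alive for as long as the options object:

pub struct SessionOptions {
    inner: *mut tf::TF_SessionOptions,
    // Keep the target string alive as long as the options, since
    // TF_SetTarget stores the raw pointer instead of copying the data.
    target: Option<CString>,
}

impl SessionOptions {
    pub fn set_target(&mut self, target: &str) -> std::result::Result<(), NulError> {
        let cstr = try!(CString::new(target));
        unsafe {
            tf::TF_SetTarget(self.inner, cstr.as_ptr());
        }
        // Dropped only when the options themselves are dropped.
        self.target = Some(cstr);
        Ok(())
    }
}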

Cannot compile on Arch Linux due to missing swig.

I installed bazel from the AUR successfully, created an empty project with tensorflow = "*" as the only dependency, did a cargo build -j 1, and this is the output.

ivegotasthma@dev irekt $ cargo build -j 1
    Updating registry `https://github.com/rust-lang/crates.io-index`
 Downloading tensorflow v0.1.0
 Downloading tensorflow-sys v0.5.0
 Downloading num-complex v0.1.35
   Compiling num-traits v0.1.36
   Compiling num-complex v0.1.35
   Compiling libc v0.2.17
   Compiling pkg-config v0.3.8
   Compiling tensorflow-sys v0.5.0
error: failed to run custom build command for `tensorflow-sys v0.5.0`
process didn't exit successfully: `/home/ivegotasthma/work/projects/irekt/target/debug/build/tensorflow-sys-f5d6b3977d411f10/build-script-build` (exit code: 101)
--- stdout
libtensorflow-sys/build.rs:31: output = "/home/ivegotasthma/work/projects/irekt/target/debug/build/tensorflow-sys-f5d6b3977d411f10/out"
libtensorflow-sys/build.rs:33: source = "/home/ivegotasthma/.cargo/registry/src/github.com-1ecc6299db9ec823/tensorflow-sys-0.5.0/target/source-0.10.0"
libtensorflow-sys/build.rs:35: lib_dir = "/home/ivegotasthma/work/projects/irekt/target/debug/build/tensorflow-sys-f5d6b3977d411f10/out/lib-0.10.0"
libtensorflow-sys/build.rs:39: Creating directory "/home/ivegotasthma/work/projects/irekt/target/debug/build/tensorflow-sys-f5d6b3977d411f10/out/lib-0.10.0"
libtensorflow-sys/build.rs:43: library_path = "/home/ivegotasthma/work/projects/irekt/target/debug/build/tensorflow-sys-f5d6b3977d411f10/out/lib-0.10.0/libtensorflow_c.so"
libtensorflow-sys/build.rs:48: target_path = "tensorflow/libtensorflow_c.so"
libtensorflow-sys/build.rs:102: Executing "git" "clone" "--branch=v0.10.0" "--recursive" "https://github.com/tensorflow/tensorflow.git" "/home/ivegotasthma/.cargo/registry/src/github.com-1ecc6299db9ec823/tensorflow-sys-0.5.0/target/source-0.10.0"
libtensorflow-sys/build.rs:106: Command "git" "clone" "--branch=v0.10.0" "--recursive" "https://github.com/tensorflow/tensorflow.git" "/home/ivegotasthma/.cargo/registry/src/github.com-1ecc6299db9ec823/tensorflow-sys-0.5.0/target/source-0.10.0" finished successfully
libtensorflow-sys/build.rs:77: Checking build file "/home/ivegotasthma/.cargo/registry/src/github.com-1ecc6299db9ec823/tensorflow-sys-0.5.0/target/source-0.10.0/tensorflow/BUILD"
libtensorflow-sys/build.rs:84: Patching build file "/home/ivegotasthma/.cargo/registry/src/github.com-1ecc6299db9ec823/tensorflow-sys-0.5.0/target/source-0.10.0/tensorflow/BUILD"
libtensorflow-sys/build.rs:102: Executing "./configure"
No Google Cloud Platform support will be enabled for TensorFlow
Can't find swig.  Ensure swig is in $PATH or set $SWIG_PATH.

--- stderr
Cloning into '/home/ivegotasthma/.cargo/registry/src/github.com-1ecc6299db9ec823/tensorflow-sys-0.5.0/target/source-0.10.0'...
Note: checking out 'c715c3102df1556fc0ce88fc987440a3c80e5380'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b <new-branch-name>

thread 'main' panicked at 'failed to execute "./configure"', /home/ivegotasthma/.cargo/registry/src/github.com-1ecc6299db9ec823/tensorflow-sys-0.5.0/build.rs:104
note: Run with `RUST_BACKTRACE=1` for a backtrace.

ivegotasthma@dev irekt $ 

DataType mismatch on simple addition

Apologies if this is a dumb mistake, but I can't make a simple addition work for some reason.

Here is the python model:

import tensorflow as tf

a = tf.placeholder(tf.int32, name = 'a')
b = tf.placeholder(tf.int32, name = 'b')
add = tf.add(a, b, name = 'add')

tf.initialize_variables(tf.all_variables(), name = 'init')

definition = tf.Session().graph_def
tf.train.write_graph(definition, 'models', 'arithmetic.pb', as_text=False)

and I'm calling it with this Rust code:

extern crate tensorflow;

use tensorflow::{Session, SessionOptions, Step, Tensor};
use std::fs::File;
use std::io::Read;


fn main() {
    let filename = "examples/models/arithmetic.pb";

    let mut a = Tensor::new(&[1]);
    a[0] = 3i32;

    let mut b = Tensor::new(&[1]);
    b[0] = 3i32;

    // Initialize Session and load the model.
    let mut session = Session::new(&SessionOptions::new()).unwrap();
    let mut model = Vec::new();
    File::open(filename).unwrap().read_to_end(&mut model).unwrap();
    session.extend_graph(&model).unwrap();

    // Set A and B values
    let mut init_step = Step::new();
    init_step.add_input("a", &a).unwrap();
    init_step.add_input("b", &b).unwrap();
    init_step.add_target("init").unwrap();
    session.run(&mut init_step).unwrap();

    // Ask for the output and run
    let mut out_step = Step::new();
    let add = out_step.request_output("add").unwrap();
    session.run(&mut out_step).unwrap();

    // Print the result
    let result: i32 = out_step.take_output(add).unwrap().data()[0];
    println!("{:?}", result);
}

I tried different combinations and data types, but I always get this error:

thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: {inner:0x7ff3e8d085b0, InvalidArgument: You must feed a value for placeholder tensor 'a' with dtype int32
     [[Node: a = Placeholder[dtype=DT_INT32, shape=[], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]}', ../src/libcore/result.rs:788
stack backtrace:
   1:        0x1020f4b59 - std::sys::backtrace::tracing::imp::write::h00e948915d1e4c72
   2:        0x1020f6c40 - std::panicking::default_hook::_{{closure}}::h7b8a142818383fb8
   3:        0x1020f60e0 - std::panicking::default_hook::h41cf296f654245d7
   4:        0x1020f66a6 - std::panicking::rust_panic_with_hook::h4cbd7ca63ce1aee9
   5:        0x1020f64f4 - std::panicking::begin_panic::h93672d0313d5e8e9
   6:        0x1020f6452 - std::panicking::begin_panic_fmt::hd0daa02942245d81
   7:        0x1020f63b7 - rust_begin_unwind
   8:        0x10211c8a0 - core::panicking::panic_fmt::hbfc935564d134c1b
   9:        0x1020e24bc - core::result::unwrap_failed::hb73f4a5c3aa2e2c8
  10:        0x1020dff0e - _<core..result..Result<T, E>>::unwrap::hbc6e44d6fbbca03a
  11:        0x1020e510b - arithmetic::main::h312dd8515f8f6ba1
  12:        0x1020f722a - __rust_maybe_catch_panic
  13:        0x1020f5bae - std::rt::lang_start::h53bf99b0829cc03c
  14:        0x1020e5469 - main
error: Process didn't exit successfully: `target/debug/examples/arithmetic` (exit code: 101)

Any idea what's going wrong here?

Add automatic differentiation

Many optimization algorithms require derivatives, and support for automatic differentiation would make using them much simpler.

Call to communicate/ Call to action

It's really nice when it's super obvious from the front page how to get involved. I'm happy to make doc changes, and I assume I don't need to sign a CLA for README changes. What is this project's preferred MO for communication? Do we have an IRC channel? Can I start one on Mozilla?

RFC: Signature of Session::run

Overview

The API of Session::run needs to be decided.

Alternatives

Step

The current API defines an auxiliary type Step which holds all the arguments to be passed to the underlying call to TF_Run, and the signature of Session::run is run(&mut self, step: &mut Step) -> Result<()>.

The benefit of Step is that vectors of string pointers don't have to be created every time Session::run is called. If we stop cloning the tensors to hide the fact that TensorFlow consumes them (which is another question), then Steps would no longer be reusable, and that benefit would at least partially go away. Step is more future-proof, because if the signature of TF_Run changed (or if they introduced TF_Run2), code using Step wouldn't break. It also introduces fewer types and fewer auxiliary objects.

Example usage:

let mut train_step = Step::new();
train_step.add_input("x", &x_tensor).unwrap();
train_step.add_input("y", &y_tensor).unwrap();
train_step.add_target("train").unwrap();
for _ in 0..steps {
  session.run(&mut train_step).unwrap();
}

Input, Output, and Target

It has been suggested that instead of Step, there should be Input, Output, and Target types, and the signature of Session::run will be run(&mut self, inputs: &[Input], outputs: &mut [Output], targets: &[Target], options: Option<&Buffer>, metadata: Option<&mut Buffer>) -> Result<()>.

The overhead of creating vectors of string pointers in each call to run is probably minimal, especially compared to the cost of the actual computation inside TF_Run. This signature of run may also be viewed as more user-friendly.

Example usage:

let inputs = vec![
  Input::new("x", ok!(Tensor::new(x, &[n]))),
  Input::new("y", ok!(Tensor::new(y, &[n]))),
];
let targets = vec![Target::new("train")];
for _ in 0..steps {
  ok!(session.run(&inputs, &mut [], &targets, None, None));
}

Meta

The comment period will close on July 30, 2016.

mac lacks `memalign`

It seems that Mac OS X lacks the function memalign called in src/buffer.rs, and compilation fails with: error: unresolved name libc::memalign [E0425].

Travis build fails because of corrupt Bazel installation

The build often fails with:

Uncompressing......Error: corrupt installation: file '/home/travis/.cache/bazel/_bazel_travis/install/eb8f56e396a53e354febafe5175189a2/_embedded_binaries/embedded_tools/third_party/py/gflags/init.py' missing. Please remove '/home/travis/.cache/bazel/_bazel_travis/install/eb8f56e396a53e354febafe5175189a2' and try again.

Clearing the cache and rebuilding fixes the error. Removing ~/.cache/bazel from the Travis cache should permanently fix the problem.

expressions example

The expressions example (examples/expressions.rs) won't compile, which also breaks cargo test.
