emacski / tensorflow-serving-arm

TensorFlow Serving ARM - A project for cross-compiling TensorFlow Serving targeting popular ARM cores

License: Apache License 2.0

Shell 17.51% C++ 41.60% Dockerfile 7.47% Starlark 33.41%
tensorflow serving arm aarch64 docker cross-compile armhf armv7 arm64 armv8

tensorflow-serving-arm's Introduction

TensorFlow Serving on ARM

TensorFlow Serving cross-compile project targeting linux on common arm cores from a linux amd64 / x86_64 build host.

Overview

The basis of this project is to provide an alternative build strategy for tensorflow/serving with the intention of making it relatively easy to cross-build CPU-optimized model server docker images targeting common linux arm platforms. Additionally, a set of docker image build targets is maintained and built for some of the popular linux arm platforms and hosted on Docker Hub.

Upstream Project: tensorflow/serving

Docker Images

Hosted on Docker Hub: emacski/tensorflow-serving

Usage Documentation: TensorFlow Serving with Docker

Note: The project images are designed to be functionally equivalent to their upstream counterparts.

Quick Start

On many consumer / developer 64-bit and 32-bit arm platforms you can simply:

docker pull emacski/tensorflow-serving:latest
# or
docker pull emacski/tensorflow-serving:2.6.0

Refer to TensorFlow Serving with Docker for configuration and setting up a model for serving.
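As a quick illustration, here is a minimal sketch of serving a model from the host; the model name and host path (my_model) are hypothetical, and the trailing flags are standard tensorflow_model_server options:

# hypothetical model layout: ./my_model/<version>/saved_model.pb
docker run --rm -p 8500:8500 -p 8501:8501 \
    -v "$PWD/my_model:/models/my_model" \
    emacski/tensorflow-serving:latest \
    --model_name=my_model --model_base_path=/models/my_model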

Images

emacski/tensorflow-serving:[Tag]

Tag ARM Core Compatibility
[Version]-linux_amd64_avx_sse4.2 N/A
[Version]-linux_arm64_armv8-a Cortex-A35 / A53 / A57 / A72 / A73
[Version]-linux_arm64_armv8.2-a Cortex-A55 / A75 / A76
[Version]-linux_arm_armv7-a_neon_vfpv3 Cortex-A8
[Version]-linux_arm_armv7-a_neon_vfpv4 Cortex-A7 / A12 / A15 / A17

Example

# on beaglebone black
docker pull emacski/tensorflow-serving:2.6.0-linux_arm_armv7-a_neon_vfpv3

Aliases

emacski/tensorflow-serving:[Alias]

Alias Tag Notes
[Version]-linux_amd64 [Version]-linux_amd64_avx_sse4.2 default linux_amd64 image
[Version]-linux_arm64 [Version]-linux_arm64_armv8-a Should work on most 64-bit raspberry pi and compatible platforms
[Version]-linux_arm [Version]-linux_arm_armv7-a_neon_vfpv4 Should work on most 32-bit raspberry pi and compatible platforms
latest-linux_amd64 [Latest-Version]-linux_amd64
latest-linux_arm64 [Latest-Version]-linux_arm64
latest-linux_arm [Latest-Version]-linux_arm

Examples

# on Raspberry PI 3 B+
docker pull emacski/tensorflow-serving:2.6.0-linux_arm64
# or
docker pull emacski/tensorflow-serving:latest-linux_arm64

Manifest Lists

emacski/tensorflow-serving:latest

Image OS Arch
emacski/tensorflow-serving:latest-linux_arm linux arm
emacski/tensorflow-serving:latest-linux_arm64 linux arm64
emacski/tensorflow-serving:latest-linux_amd64 linux amd64

Examples

# on Raspberry PI 3 B+
docker pull emacski/tensorflow-serving
# or
docker pull emacski/tensorflow-serving:latest
# the actual image used is emacski/tensorflow-serving:latest-linux_arm64
# itself actually being emacski/tensorflow-serving:[Latest-Version]-linux_arm64_armv8-a
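To see which platform-specific image a manifest list contains, inspect it with the standard docker CLI (on older docker versions the manifest subcommand may need experimental CLI features enabled):

docker manifest inspect emacski/tensorflow-serving:latest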

emacski/tensorflow-serving:[Version]

Image OS Arch
emacski/tensorflow-serving:[Version]-linux_arm linux arm
emacski/tensorflow-serving:[Version]-linux_arm64 linux arm64
emacski/tensorflow-serving:[Version]-linux_amd64 linux amd64

Example

# on Raspberry PI 3 B+
docker pull emacski/tensorflow-serving:2.6.0
# the actual image used is emacski/tensorflow-serving:2.6.0-linux_arm64
# itself actually being emacski/tensorflow-serving:2.6.0-linux_arm64_armv8-a

Debug Images

As of version 2.0.0, debug images are also built and published to Docker Hub. These images are identical to the non-debug images with the addition of busybox utils. The utils are located at /busybox/bin, which is also included in the image's system PATH.

For any image above, add debug after the [Version] and before the platform suffix (if one is required) in the image tag.

# multi-arch
docker pull emacski/tensorflow-serving:2.6.0-debug
# specific image
docker pull emacski/tensorflow-serving:2.6.0-debug-linux_arm64_armv8-a
# specific alias
docker pull emacski/tensorflow-serving:latest-debug-linux_arm64

Example Usage

# start a new container with an interactive ash (busybox) shell
docker run -ti --entrypoint /busybox/bin/sh emacski/tensorflow-serving:latest-debug-linux_arm64
# with an interactive dash (system) shell
docker run -ti --entrypoint sh emacski/tensorflow-serving:latest-debug-linux_arm64
# start an interactive ash shell in a running debug container
docker exec -ti my_running_container /busybox/bin/sh

Back to Top

Build From Source

Build / Development Environment

Build Host Platform: linux_amd64 (x86_64)

Build Host Requirements:

  • git
  • docker

For each version / release, a self-contained build environment (devel) image is created and published. This image contains all necessary tools and dependencies required for building project artifacts.

git clone git@github.com:emacski/tensorflow-serving-arm.git
cd tensorflow-serving-arm

# pull devel
docker pull emacski/tensorflow-serving:latest-devel
# or build devel
docker build -t emacski/tensorflow-serving:latest-devel -f tensorflow_model_server/tools/docker/Dockerfile .

All of the build examples assume that the commands are executed within the devel container:

# interactive shell
docker run --rm -ti \
    -w /code -v $PWD:/code \
    -v /var/run/docker.sock:/var/run/docker.sock \
    emacski/tensorflow-serving:latest-devel /bin/bash
# or
# non-interactive
docker run --rm \
    -w /code -v $PWD:/code \
    -v /var/run/docker.sock:/var/run/docker.sock \
    emacski/tensorflow-serving:latest-devel [example_command]
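For example, a complete non-interactive build invocation (using one of the project config groups described below) might look like this sketch:

docker run --rm \
    -w /code -v $PWD:/code \
    -v /var/run/docker.sock:/var/run/docker.sock \
    emacski/tensorflow-serving:latest-devel \
    bazel build //tensorflow_model_server --config=linux_arm64_armv8-a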

Config Groups

The following bazel config groups represent the build options used for each target platform (found in .bazelrc). These config groups are mutually exclusive; specify exactly one per build command as a --config option.

Name Type Info
linux_amd64 Base can be used for custom builds
linux_arm64 Base can be used for custom builds
linux_arm Base can be used for custom builds
linux_amd64_avx_sse4.2 Project inherits from linux_amd64
linux_arm64_armv8-a Project inherits from linux_arm64
linux_arm64_armv8.2-a Project inherits from linux_arm64
linux_arm_armv7-a_neon_vfpv3 Project inherits from linux_arm
linux_arm_armv7-a_neon_vfpv4 Project inherits from linux_arm

Build Project Image Target

//tensorflow_model_server:project_image.tar

Build a project-maintained model server docker image targeting one of the platforms specified by a project config group as listed above. The resulting image can be found as a tar file in bazel's output directory.

bazel build //tensorflow_model_server:project_image.tar --config=linux_arm64_armv8-a
# or
bazel build //tensorflow_model_server:project_image.tar --config=linux_arm_armv7-a_neon_vfpv4
# each build creates a docker loadable image tar in bazel's output dir
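The tar can then be loaded into the host's docker manually; the output path below is illustrative (check bazel's output dir, typically bazel-bin):

# hypothetical output path
docker load -i bazel-bin/tensorflow_model_server/project_image.tar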

Load Project Image Target

//tensorflow_model_server:project_image

Same as above, but additionally bazel attempts to load the resulting image onto the host, making it immediately available to the host's docker.

Note: host docker must be available to the build container for final images to be available on the host automatically.

bazel run //tensorflow_model_server:project_image --config=linux_arm64_armv8-a
# or
bazel run //tensorflow_model_server:project_image --config=linux_arm_armv7-a_neon_vfpv4

Build Project Binary Target

//tensorflow_model_server

Build the model server binary targeting one of the platforms specified by a project config group as listed above.

Note: It's not recommended to use these binaries as standalone executables as they are built specifically to run in their respective containers, but they may work on Debian 10-like systems.

bazel build //tensorflow_model_server --config=linux_arm64_armv8-a
# or
bazel build //tensorflow_model_server --config=linux_arm_armv7-a_neon_vfpv4
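As a sanity check, the target architecture of the resulting binary can be verified with the standard file utility (the bazel-bin path below is bazel's usual output location):

file bazel-bin/tensorflow_model_server/tensorflow_model_server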

Build Image for Custom ARM Target

//tensorflow_model_server:custom_image.tar

Can be used to fine-tune builds for specific platforms. Use a "Base" type config group and custom compile options. For linux_arm64 and linux_arm options see: https://releases.llvm.org/10.0.0/tools/clang/docs/CrossCompilation.html

# building an image tuned for Cortex-A72
bazel build //tensorflow_model_server:custom_image.tar --config=linux_arm64 --copt=-mcpu=cortex-a72
# look for custom_image.tar in bazel's output directory
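Similarly, a hypothetical 32-bit tuning example (cpu flag per the clang cross-compilation docs linked above):

# building an image tuned for Cortex-A7
bazel build //tensorflow_model_server:custom_image.tar --config=linux_arm --copt=-mcpu=cortex-a7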

Back to Top

Legacy Builds

Legacy GitHub Tags (prefixed with v)

  • v1.11.1
  • v1.12.0
  • v1.13.0
  • v1.14.0

Note: a tag exists for both v1.14.0 and 1.14.0, as 1.14.0 was the current upstream tensorflow/serving version when this project was refactored.

Legacy Docker Images

The following tensorflow serving versions were built using the legacy project structure and are still available on Docker Hub.

  • emacski/tensorflow-serving:[Version]-arm64v8
  • emacski/tensorflow-serving:[Version]-arm32v7
  • emacski/tensorflow-serving:[Version]-arm32v7_vfpv3

Versions: 1.11.1, 1.12.0, 1.13.0, 1.14.0

Back to Top

Disclosures

This project uses llvm / clang toolchains for c++ cross-compiling. By default, the model server is statically linked to llvm's libc++. To dynamically link against gnu libstdc++, include the build option --config=gnulibcpp.
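For example (a sketch; any of the project config groups can be combined with it):

bazel build //tensorflow_model_server --config=linux_arm64_armv8-a --config=gnulibcpp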

The base docker images used in this project come from another project I maintain called Discolix (distroless for arm).

Back to Top

Disclaimer

  • Not an ARM expert
  • Not a Bazel expert (but I know a little bit more now)
  • Not a TensorFlow expert
  • Personal project, so testing is minimal

Should any of those scare you, I recommend NOT using anything here. Additionally, any help to improve things is always appreciated.

Back to Top

tensorflow-serving-arm's People

Contributors

emacski, grantstephens


tensorflow-serving-arm's Issues

Inference Speed Unstable Problem

Hi~~
Firstly, I want to say this project is very helpful; it helped me deploy my deep learning model on ARM successfully.
Then, there is a problem I met while testing model inference speed. For a deep learning model (a machine translation model, for example), the inference speed is unstable. Here is a detailed description:
1. The time to translate the same single source sentence varies, with no pattern to follow. Testing the same sentence 200 times, there are 3 or 4 runs where the speed is slower than average.
2. The relationship between inference speed and source sentence length is non-linear. Sometimes translating a very long sentence, say 30 words, takes less time than a very short sentence of only 5 words, even though, as far as I know, inference time should depend on length because the model generates the target sentence one word at a time.
I'm not sure if the problem is related to the tensorflow-serving-arm image, but I can't find a useful answer on Google. Did you come across this problem when deploying DL models? I would appreciate it very much if you would help me with it.

tensorflow serving arm image can't exec into a running container

I use the image (emacski/tensorflow-serving:1.14.0-linux_arm64_armv8-a) from Docker Hub.
Running it is OK, but I can't exec into the container. I tried both bash and sh, and both failed; attach also failed.
It shows this error message: "exec: "bash": executable file not found in $PATH": unknown.

How to get tensorflow-serving-api

Hi
I am using the docker image emacski/tensorflow-serving:1.15.0. How can I get tensorflow-serving-api for my application (which sends requests to tensorflow-serving)?
Thanks!

Failed to build libevent for arm64 image

Using tag: 2.3.0

I found the problem in this command: cp -R $(pwd)/external/com_github_libevent_libevent/* $TMP_DIR; the source dir is empty.

The error output:

# bazel build --sandbox_debug //tensorflow_model_server:custom_image.tar --config=linux_arm64
WARNING: /root/.cache/bazel/_bazel_root/933c2f549343e099e3a6911b282cb71c/external/org_tensorflow/tensorflow/core/BUILD:1749:11: in linkstatic attribute of cc_library rule @org_tensorflow//tensorflow/core:lib_internal: setting 'linkstatic=1' is recommended if there are no object files. Since this rule was created by the macro 'cc_library', the error might have been caused by the macro implementation
WARNING: /root/.cache/bazel/_bazel_root/933c2f549343e099e3a6911b282cb71c/external/org_tensorflow/tensorflow/core/BUILD:2161:16: in linkstatic attribute of cc_library rule @org_tensorflow//tensorflow/core:framework_internal: setting 'linkstatic=1' is recommended if there are no object files. Since this rule was created by the macro 'tf_cuda_library', the error might have been caused by the macro implementation
INFO: Analyzed target //tensorflow_model_server:custom_image.tar (1 packages loaded, 42 targets configured).
INFO: Found 1 target...
ERROR: /root/.cache/bazel/_bazel_root/933c2f549343e099e3a6911b282cb71c/external/com_github_libevent_libevent/BUILD.bazel:52:8: Executing genrule @com_github_libevent_libevent//:libevent-srcs failed (Exit 1): process-wrapper failed: error executing command
  (cd /root/.cache/bazel/_bazel_root/933c2f549343e099e3a6911b282cb71c/sandbox/processwrapper-sandbox/7773/execroot/com_github_emacski_tensorflowservingarm && \
  exec env - \
    PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin \
    TMPDIR=/tmp \
  /root/.cache/bazel/_bazel_root/install/c65c191bb5f6bccd5efa290d850a727b/process-wrapper '--timeout=0' '--kill_delay=15' --wait_fix /bin/bash -c 'source external/bazel_tools/tools/genrule/genrule-setup.sh; export INSTALL_DIR=$(pwd)/bazel-out/aarch64-opt/bin/external/com_github_libevent_libevent/libevent
export TMP_DIR=$(mktemp -d -t libevent.XXXXXX)
mkdir -p $TMP_DIR
cp -R $(pwd)/external/com_github_libevent_libevent/* $TMP_DIR
cd $TMP_DIR
./autogen.sh
CC=/usr/bin/clang CFLAGS="--target=aarch64-linux-gnu --sysroot=/usr/aarch64-linux-gnu -march=armv8-a -fuse-ld=lld -fPIC -O3" CXXFLAGS=-fPIC ./configure \
   --prefix=$INSTALL_DIR --host=aarch64-linux-gnu \
   --with-sysroot=/usr/aarch64-linux-gnu --enable-shared=no --disable-openssl \
   --disable-libevent-regress --disable-samples
make
make install
rm -rf $TMP_DIR') process-wrapper failed: error executing command
  (cd /root/.cache/bazel/_bazel_root/933c2f549343e099e3a6911b282cb71c/sandbox/processwrapper-sandbox/7773/execroot/com_github_emacski_tensorflowservingarm && \
  exec env - \
    PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin \
    TMPDIR=/tmp \
  /root/.cache/bazel/_bazel_root/install/c65c191bb5f6bccd5efa290d850a727b/process-wrapper '--timeout=0' '--kill_delay=15' --wait_fix /bin/bash -c 'source external/bazel_tools/tools/genrule/genrule-setup.sh; export INSTALL_DIR=$(pwd)/bazel-out/aarch64-opt/bin/external/com_github_libevent_libevent/libevent
export TMP_DIR=$(mktemp -d -t libevent.XXXXXX)
mkdir -p $TMP_DIR
cp -R $(pwd)/external/com_github_libevent_libevent/* $TMP_DIR
cd $TMP_DIR
./autogen.sh
CC=/usr/bin/clang CFLAGS="--target=aarch64-linux-gnu --sysroot=/usr/aarch64-linux-gnu -march=armv8-a -fuse-ld=lld -fPIC -O3" CXXFLAGS=-fPIC ./configure \
   --prefix=$INSTALL_DIR --host=aarch64-linux-gnu \
   --with-sysroot=/usr/aarch64-linux-gnu --enable-shared=no --disable-openssl \
   --disable-libevent-regress --disable-samples
make
make install
rm -rf $TMP_DIR')
cp: cannot stat '/root/.cache/bazel/_bazel_root/933c2f549343e099e3a6911b282cb71c/sandbox/processwrapper-sandbox/7773/execroot/com_github_emacski_tensorflowservingarm/external/com_github_libevent_libevent/*': No such file or directory
Target //tensorflow_model_server:custom_image.tar failed to build
ERROR: /code/tensorflow_model_server/BUILD:93:17 Executing genrule @com_github_libevent_libevent//:libevent-srcs failed (Exit 1): process-wrapper failed: error executing command
  (cd /root/.cache/bazel/_bazel_root/933c2f549343e099e3a6911b282cb71c/sandbox/processwrapper-sandbox/7773/execroot/com_github_emacski_tensorflowservingarm && \
  exec env - \
    PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin \
    TMPDIR=/tmp \
  /root/.cache/bazel/_bazel_root/install/c65c191bb5f6bccd5efa290d850a727b/process-wrapper '--timeout=0' '--kill_delay=15' --wait_fix /bin/bash -c 'source external/bazel_tools/tools/genrule/genrule-setup.sh; export INSTALL_DIR=$(pwd)/bazel-out/aarch64-opt/bin/external/com_github_libevent_libevent/libevent
export TMP_DIR=$(mktemp -d -t libevent.XXXXXX)
mkdir -p $TMP_DIR
cp -R $(pwd)/external/com_github_libevent_libevent/* $TMP_DIR
cd $TMP_DIR
./autogen.sh
CC=/usr/bin/clang CFLAGS="--target=aarch64-linux-gnu --sysroot=/usr/aarch64-linux-gnu -march=armv8-a -fuse-ld=lld -fPIC -O3" CXXFLAGS=-fPIC ./configure \
   --prefix=$INSTALL_DIR --host=aarch64-linux-gnu \
   --with-sysroot=/usr/aarch64-linux-gnu --enable-shared=no --disable-openssl \
   --disable-libevent-regress --disable-samples
make
make install
rm -rf $TMP_DIR') process-wrapper failed: error executing command
  (cd /root/.cache/bazel/_bazel_root/933c2f549343e099e3a6911b282cb71c/sandbox/processwrapper-sandbox/7773/execroot/com_github_emacski_tensorflowservingarm && \
  exec env - \
    PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin \
    TMPDIR=/tmp \
  /root/.cache/bazel/_bazel_root/install/c65c191bb5f6bccd5efa290d850a727b/process-wrapper '--timeout=0' '--kill_delay=15' --wait_fix /bin/bash -c 'source external/bazel_tools/tools/genrule/genrule-setup.sh; export INSTALL_DIR=$(pwd)/bazel-out/aarch64-opt/bin/external/com_github_libevent_libevent/libevent
export TMP_DIR=$(mktemp -d -t libevent.XXXXXX)
mkdir -p $TMP_DIR
cp -R $(pwd)/external/com_github_libevent_libevent/* $TMP_DIR
cd $TMP_DIR
./autogen.sh
CC=/usr/bin/clang CFLAGS="--target=aarch64-linux-gnu --sysroot=/usr/aarch64-linux-gnu -march=armv8-a -fuse-ld=lld -fPIC -O3" CXXFLAGS=-fPIC ./configure \
   --prefix=$INSTALL_DIR --host=aarch64-linux-gnu \
   --with-sysroot=/usr/aarch64-linux-gnu --enable-shared=no --disable-openssl \
   --disable-libevent-regress --disable-samples
make
make install
rm -rf $TMP_DIR')
INFO: Elapsed time: 3.474s, Critical Path: 2.77s
INFO: 0 processes.
FAILED: Build did NOT complete successfully

TF Serving container doesn't handle MODEL_NAME environment variable

Running TF Serving container with test data from https://github.com/tensorflow/serving README:

git clone https://github.com/tensorflow/serving
TESTDATA="$(pwd)/serving/tensorflow_serving/servables/tensorflow/testdata"
docker run -t --rm -p 8501:8501 \
    -v "$TESTDATA/saved_model_half_plus_two_cpu:/models/half_plus_two" \
    -e MODEL_NAME=half_plus_two \
    emacski/tensorflow-serving:1.14.0-linux_arm_armv7-a_neon_vfpv3

shows the following error:

2019-11-30 19:00:29.050035: I external/tf_serving/tensorflow_serving/model_servers/server.cc:82] Building single TensorFlow model file config:  model_name: model model_base_path: /models/model
2019-11-30 19:00:29.055819: I external/tf_serving/tensorflow_serving/model_servers/server_core.cc:462] Adding/updating models.
2019-11-30 19:00:29.059314: I external/tf_serving/tensorflow_serving/model_servers/server_core.cc:561]  (Re-)adding model: model
2019-11-30 19:00:29.066273: E external/tf_serving/tensorflow_serving/sources/storage_path/file_system_storage_path_source.cc:362] FileSystemStoragePathSource encountered a filesystem access error: Could not find base path /models/model for servable model
2019-11-30 19:00:30.066100: E external/tf_serving/tensorflow_serving/sources/storage_path/file_system_storage_path_source.cc:362] FileSystemStoragePathSource encountered a filesystem access error: Could not find base path /models/model for servable model

And the last error repeats.

It happens because the container doesn't handle the MODEL_NAME environment variable. Mapping the model data to /models/model works around the issue.

I think it would be right to support it, so newbies like me could launch the TF Serving example using the sample from the mainline README. We love to cut-and-paste )
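For reference, a sketch of the workaround described above, binding the test model to the default /models/model path instead:

docker run -t --rm -p 8501:8501 \
    -v "$TESTDATA/saved_model_half_plus_two_cpu:/models/model" \
    emacski/tensorflow-serving:1.14.0-linux_arm_armv7-a_neon_vfpv3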

standard_init_linux.go:211 when I try to cross compile from amd64

I get this error and I don't know why:

standard_init_linux.go:211: exec user process caused "exec format error"

I tried to build arm32v7 and also arm64v8. I'm running Ubuntu Disco Dingo. I get to the point where 5041 processes have completed, and the message on the screen just before this error is below:

Removing intermediate container ef2fe797a8b0
---> bf1f28450744
Step 15/22 : FROM ${ARCH}/ubuntu:bionic as tensorflow_model_server
bionic: Pulling from arm32v7/ubuntu
5379ca036368: Pull complete
4ede4c7641a5: Pull complete
0994f5ac8c79: Pull complete
a81b96316730: Pull complete
Digest: sha256:46fe74f7b605368593fd21ff9db45429d9faa86550ad13f7eefb2c995c69b271
Status: Downloaded newer image for arm32v7/ubuntu:bionic
---> 6348795f7982
Step 16/22 : COPY --from=build /tensorflow-serving/bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server /usr/bin/tensorflow_model_server
---> ec679480db38
Step 17/22 : EXPOSE 8500
---> Running in 4020fd341139
Removing intermediate container 4020fd341139
---> 3466dc1ccfac
Step 18/22 : EXPOSE 8501
---> Running in 1fd2b5d81e91
Removing intermediate container 1fd2b5d81e91
---> f50abb691752
Step 19/22 : ENV MODEL_BASE_PATH=/models
---> Running in 8834cdb7172a
Removing intermediate container 8834cdb7172a
---> 94c5e7e0aa60
Step 20/22 : RUN mkdir -p ${MODEL_BASE_PATH}
---> Running in 99408f90eaf7

I am interested in a binary for tensorflow_model_server for the armhf processor. I really don't know much about containers.

Unexpected inference result

Hi,

I am using the tensorflow-serving docker image 1.15.0 on a Jetson Xavier with Faster R-CNN for object detection. However, the results from inference are incorrect (compared to the tensorflow/serving docker image on x86_64).

Do you have any ideas what can be the cause?

Start Tensorflow-serving with model and copy model into container

Hi,

if I start your container with a saved model bound to /models/, I receive the error:
E external/tf_serving/tensorflow_serving/sources/storage_path/file_system_storage_path_source.cc:362] FileSystemStoragePathSource encountered a filesystem access error: Could not find base path /models/model for servable model

This happens even though I did not name my model "model" or specify the path "models/model". If I do name my model that way, it works as a workaround.

How do I copy a saved model into the /models folder of the container? It does not let me start the container without binding a folder to /models/model, because it cannot find any resource. If I bind a folder first, copy the model afterwards, and then create an image, it does not work either:
W external/tf_serving/tensorflow_serving/sources/storage_path/file_system_storage_path_source.cc:267] No versions of servable model found under base path /models/model

Adding Decision Forests Op

Hi- thank you so much for this repo- it has been a lifesaver so far.

I am trying to follow these instructions whilst still compiling for arm, but running into some trouble.

I added the needed lines to a new workspace file in the tensorflow_model_server folder and added the SUPPORTED_TENSORFLOW_OPS to the BUILD file in that folder. It compiled OK and I got a docker image out of it, but the added ops aren't present.

Any help would be much appreciated.

Oh, one last thing- is there a command to compile an image for all architectures, or do you have to manually combine them all at the end?

No Prometheus metrics in serving?

Hello, @emacski .

I am using your repository to implement TF Serving on a Raspberry Pi, but apart from being able to make inferences over the models in the serving, I need to obtain the serving metrics too. In the original tensorflow/serving repository, these metrics are exposed as Prometheus metrics (with the metrics path at '/monitoring/prometheus/metrics'), but they don't seem to be available when using the ARM serving of this repository.

I've been diving into your repository for a while, but I don't really understand how you make it work: I would suppose you are using the same 'tensorflow_model_server' entrypoint that is used in the original tensorflow/serving Dockerfiles, but then why do the metrics seem to be unavailable?

Thank you in advance.

Segmentation Fault (core dumped) on Jetson TX2

Pulled emacski/tensorflow-serving:2.3.0-linux_arm64_armv8-a, launched it on an Nvidia TX2 board, and got this log:

2020-08-26 06:29:12.709413: I external/tf_serving/tensorflow_serving/model_servers/server.cc:87] Building single TensorFlow model file config:  model_name: ssd_mobilenet_v1_coco_2018_01_28 model_base_path: /models/ssd_mobilenet_v1_coco_2018_01_28
2020-08-26 06:29:12.709704: I external/tf_serving/tensorflow_serving/model_servers/server_core.cc:464] Adding/updating models.
2020-08-26 06:29:12.709737: I external/tf_serving/tensorflow_serving/model_servers/server_core.cc:575]  (Re-)adding model: ssd_mobilenet_v1_coco_2018_01_28
2020-08-26 06:29:12.810281: I external/tf_serving/tensorflow_serving/core/basic_manager.cc:739] Successfully reserved resources to load servable {name: ssd_mobilenet_v1_coco_2018_01_28 version: 1}
2020-08-26 06:29:12.810355: I external/tf_serving/tensorflow_serving/core/loader_harness.cc:66] Approving load for servable version {name: ssd_mobilenet_v1_coco_2018_01_28 version: 1}
2020-08-26 06:29:12.810380: I external/tf_serving/tensorflow_serving/core/loader_harness.cc:74] Loading servable version {name: ssd_mobilenet_v1_coco_2018_01_28 version: 1}
2020-08-26 06:29:12.810433: I external/org_tensorflow/tensorflow/cc/saved_model/reader.cc:31] Reading SavedModel from: /models/ssd_mobilenet_v1_coco_2018_01_28/1
2020-08-26 06:29:12.936243: I external/org_tensorflow/tensorflow/cc/saved_model/reader.cc:54] Reading meta graph with tags { serve }
2020-08-26 06:29:12.936321: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:234] Reading SavedModel debug info (if present) from: /models/ssd_mobilenet_v1_coco_2018_01_28/1
2020-08-26 06:29:13.609427: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:199] Restoring SavedModel bundle.
2020-08-26 06:29:14.410158: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:303] SavedModel load for tags { serve }; Status: success: OK. Took 1599716 microseconds.
2020-08-26 06:29:14.475400: I external/tf_serving/tensorflow_serving/servables/tensorflow/saved_model_warmup_util.cc:59] No warmup data file found at /models/ssd_mobilenet_v1_coco_2018_01_28/1/assets.extra/tf_serving_warmup_requests
2020-08-26 06:29:14.551244: I external/tf_serving/tensorflow_serving/core/loader_harness.cc:87] Successfully loaded servable version {name: ssd_mobilenet_v1_coco_2018_01_28 version: 1}
2020-08-26 06:29:14.556236: I external/tf_serving/tensorflow_serving/model_servers/server.cc:367] Running gRPC ModelServer at 0.0.0.0:8500 ...
[evhttp_server.cc : 238] NET_LOG: Entering the event loop ...
2020-08-26 06:29:14.559873: I external/tf_serving/tensorflow_serving/model_servers/server.cc:387] Exporting HTTP/REST API at:localhost:8501 ...
Segmentation fault (core dumped)

Please help!
