hkust-sing / tensorflow
This project is forked from tensorflow/tensorflow.
Computation using data flow graphs for scalable machine learning
Home Page: http://tensorflow.org
License: Apache License 2.0
NOTE: Only file GitHub issues for bugs and feature requests. All other topics will be closed.
For general support from the community, see StackOverflow.
To make bugs and feature requests easier to find and organize, we close issues that are deemed out of scope for GitHub Issues and point people to StackOverflow.
For bugs or installation issues, please provide the following information. The more information you provide, the easier it will be for us to offer help and advice.
Operating System:
Installed version of CUDA and cuDNN (please attach the output of ls -l /path/to/cuda/lib/libcud*):
If installed from a binary pip package, provide:
python -c "import tensorflow; print(tensorflow.__version__)"
If installed from source, provide:
git rev-parse HEAD
bazel version
(If logs are large, please upload them as an attachment or provide a link.)
In the RDMA version of tensorflow, the current problem is that binding the parameter servers to CPUs instead of GPUs crashes them. If we run the program like:
CUDA_VISIBLE_DEVICES="" python AutoencoderRunner.py --job_name="ps" --task_index=0 >> $dir/output-ps1 &
and start the workers with the correct options, the parameter servers (as far as I tested, a random one of them) report:
Check failed: (buffer_size == size_ && rm.data_type_ != DT_STRING) || (buffer_size <= size_ && rm.data_type_ == DT_STRING) tensor and buffer size do not agree! buffer_size = 709 requested tensor size = 593Tensor<type: int64 shape: [0,1] values: >
The complete log is attached below.
However, if we enable GPUs for the parameter servers, the bug disappears and the program runs normally. The same setup, CPU-bound parameter servers with GPU workers, was tested on official TF 1.0 and worked.
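For reference, the launch pattern can be sketched as follows. AutoencoderRunner.py, the ps command line, and $dir come from this report; the worker flags, GPU indices, and file names are assumptions following the same pattern, not the exact script we used.

```shell
# Sketch of the launch script. The ps line is taken from the report;
# the worker lines are assumed to follow the same pattern.
dir=./logs
mkdir -p "$dir"

# Parameter server pinned to the CPU by hiding all GPUs:
CUDA_VISIBLE_DEVICES="" python AutoencoderRunner.py \
  --job_name="ps" --task_index=0 >> "$dir/output-ps1" &

# Workers keep their GPUs visible (this combination triggers the ps-side crash):
CUDA_VISIBLE_DEVICES=0 python AutoencoderRunner.py \
  --job_name="worker" --task_index=0 >> "$dir/output-worker0" &
CUDA_VISIBLE_DEVICES=1 python AutoencoderRunner.py \
  --job_name="worker" --task_index=1 >> "$dir/output-worker1" &
```

Setting CUDA_VISIBLE_DEVICES to the empty string hides every GPU from the process, which is what forces the parameter server onto the CPU.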
The output below is from a parameter server; the workers' output looks normal, except that it never shows that training started.
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcublas.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcudnn.so.5 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcufft.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcurand.so.8.0 locally
E tensorflow/stream_executor/cuda/cuda_driver.cc:509] failed call to cuInit: CUDA_ERROR_NO_DEVICE
I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:158] retrieving CUDA diagnostic information for host: ip-192-168-2-203
I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:165] hostname: ip-192-168-2-203
I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:189] libcuda reported version is: 375.26.0
I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:363] driver version file contents: """NVRM version: NVIDIA UNIX x86_64 Kernel Module 375.26 Thu Dec 8 18:36:43 PST 2016
GCC version: gcc version 4.9.2 (Debian 4.9.2-10)
"""
I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:193] kernel reported version is: 375.26.0
I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:300] kernel version seems to match DSO: 375.26.0
I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:200] Initialize GrpcChannelCache for job ps -> {0 -> localhost:12300}
I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:200] Initialize GrpcChannelCache for job worker -> {0 -> 10.40.199.203:12200, 1 -> 10.40.199.203:12201}
I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:241] Started server with target: grpc://localhost:12300
I tensorflow/core/distributed_runtime/rdma/rdma_mgr.cc:38] connecting to remote node /job:worker/replica:0/task:1
I tensorflow/core/distributed_runtime/rdma/rdma.cc:515] channel already connected
I tensorflow/core/distributed_runtime/rdma/rdma_mgr.cc:38] connecting to remote node /job:worker/replica:0/task:0
I tensorflow/core/distributed_runtime/rdma/rdma.cc:515] channel already connected
F tensorflow/core/distributed_runtime/rdma/rdma.cc:765] Check failed: (buffer_size == size_ && rm.data_type_ != DT_STRING) || (buffer_size <= size_ && rm.data_type_ == DT_STRING) tensor and buffer size do not agree! buffer_size = 709 requested tensor size = 593Tensor<type: int64 shape: [0,1] values: >
Extracting MNIST_data/train-images-idx3-ubyte.gz
Extracting MNIST_data/train-labels-idx1-ubyte.gz
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz
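The GrpcChannelCache lines in the log imply a cluster definition like the following. This is a sketch: the addresses and ports are taken from the log, and the tf.train usage in the comment is the standard TF 1.x pattern, not necessarily this fork's exact code.

```python
# Cluster layout reconstructed from the GrpcChannelCache log lines:
#   job ps     -> {0 -> localhost:12300}
#   job worker -> {0 -> 10.40.199.203:12200, 1 -> 10.40.199.203:12201}
cluster = {
    "ps": ["localhost:12300"],
    "worker": ["10.40.199.203:12200", "10.40.199.203:12201"],
}

# In the training script this dict would typically become (TF 1.x API,
# shown as a comment because it needs a TensorFlow build to run):
#   server = tf.train.Server(tf.train.ClusterSpec(cluster),
#                            job_name=FLAGS.job_name,
#                            task_index=FLAGS.task_index)
print(len(cluster["worker"]))  # both workers share one host
```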