hanjun-dai / graphnn
Training computational graph on top of structured data (string, graph, etc)
License: MIT License
Hi, I am trying to build it. However, at the "build static library" step I ran into a problem: when I type "make", the compiler cannot find the file thrust/device_vector.h.
Also, when you wrote "modify configurations in make_common", you mean we should open the file and change the configurations, right? You do not mean we should literally type "modify configurations in make_common"? (I tried that, and the shell gave me "modify: command not found".)
I tried to run "make", but it gives an error that mkl.h cannot be found.
Is it missing, or should we create it?
Here is the error I got:
g++ -Wall -O3 -std=c++11 -I/usr/local/cuda/include -I/home/abcde/anaconda2/lib/include -Iinclude -fPIC -DUSE_GPU -MMD -c -o build/objs/cxx/tensor/cpu_dense_tensor.o src/tensor/cpu_dense_tensor.cpp
In file included from src/tensor/cpu_dense_tensor.cpp:5:0:
include/tensor/mkl_helper.h:4:17: fatal error: mkl.h: No such file or directory
compilation terminated.
Makefile:53: recipe for target 'build/objs/cxx/tensor/cpu_dense_tensor.o' failed
make: *** [build/objs/cxx/tensor/cpu_dense_tensor.o] Error 1
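Both missing-header errors above (thrust/device_vector.h and mkl.h) usually mean the include paths configured in make_common do not match where CUDA and MKL are actually installed. A quick diagnostic sketch; the paths below are illustrative examples, not the repo's defaults:

```shell
# Check that the headers the build needs actually exist under the roots
# configured in make_common (these paths are examples only).
ls /usr/local/cuda/include/thrust/device_vector.h   # CUDA headers (thrust)
ls /opt/intel/mkl/include/mkl.h                     # MKL headers
# If either "ls" fails, edit make_common so the CUDA and MKL_ROOT paths
# point at the directories where these files really live.
```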
I have problems in the Build Static Library section. The instructions say:
modify configurations in make_common file
make -j8
These two steps do not work: the shell cannot find a "modify" command. What's the problem?
Hi, when I make your latest version of the code, I get an error:
Makefile:21: recipe for target 'build/mnist' failed
make: *** [build/mnist] Error 1
Can you tell me how to fix this? Thanks.
src/nn/row_selection.cpp:3:21: fatal error: tbb/tbb.h: No such file or directory
#include "tbb/tbb.h"
^
compilation terminated.
Makefile:53: recipe for target 'build/objs/cxx/nn/row_selection.o' failed
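For the missing tbb/tbb.h header, installing the TBB development package is usually enough. A sketch assuming a Debian/Ubuntu system (on other distributions the package name differs):

```shell
# Install the Intel TBB headers and libraries (Debian/Ubuntu package name).
sudo apt-get update
sudo apt-get install -y libtbb-dev
# Then point TBB_ROOT in make_common at the install prefix, e.g. /usr.
```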
error in open: /usr/local/cuda/bin/../nvvm/libdevice/libdevice.compute_50.10.bc
No such file or directory
Makefile:47: recipe for target 'build/objs/cuda/nn/cross_entropy.o' failed
make: *** [build/objs/cuda/nn/cross_entropy.o] Error 1
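The missing libdevice.compute_50.10.bc typically indicates a mismatch between the arch flags passed to nvcc and the libdevice files shipped with the installed CUDA toolkit: newer toolkits ship a single libdevice.10.bc instead of per-architecture files. A diagnostic sketch with an illustrative CUDA path:

```shell
# List the libdevice bitcode files your toolkit actually ships.
ls /usr/local/cuda/nvvm/libdevice/
# Older toolkits ship one libdevice.compute_XX.10.bc per architecture;
# CUDA 9+ ships a single libdevice.10.bc. If the per-arch file is gone,
# adjust the -arch/-gencode flags in make_common to match your toolkit.
```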
I am getting the following errors when running make -j8
/usr/include/tbb/concurrent_vector.h(667): error: ambiguous "?" operation: second operand of type "tbb::internal::concurrent_vector_base_v3::size_type" can be converted to third operand type "tbb::atomic", and vice versa
/usr/include/tbb/concurrent_vector.h(680): error: ambiguous "?" operation: second operand of type "tbb::internal::concurrent_vector_base_v3::size_type" can be converted to third operand type "tbb::atomic", and vice versa
2 errors detected in the compilation of "/tmp/tmpxft_00000897_00000000-4_gpu_handle.cpp4.ii".
make: *** [build/objs/cuda/tensor/gpu_handle.o] Error 2
Are there potential compatibility issues with CUDA or CentOS?
I am trying to build this package on my virtual machine. Here are the details of the error message I get:
g++ -Wall -O3 -std=c++11 -I/include -I/data/util/intel/mkl/include -I/data/util/intel/tbb/include -Iinclude -fPIC -MMD -c -o build_cpuonly/objs/cxx/nn/one_hot.o src/nn/one_hot.cpp
g++ -Wall -O3 -std=c++11 -I/include -I/data/util/intel/mkl/include -I/data/util/intel/tbb/include -Iinclude -fPIC -MMD -c -o build_cpuonly/objs/cxx/nn/variable.o src/nn/variable.cpp
g++ -Wall -O3 -std=c++11 -I/include -I/data/util/intel/mkl/include -I/data/util/intel/tbb/include -Iinclude -fPIC -MMD -c -o build_cpuonly/objs/cxx/nn/square_error.o src/nn/square_error.cpp
g++ -Wall -O3 -std=c++11 -I/include -I/data/util/intel/mkl/include -I/data/util/intel/tbb/include -Iinclude -fPIC -MMD -c -o build_cpuonly/objs/cxx/nn/is_equal.o src/nn/is_equal.cpp
src/nn/one_hot.cpp: In instantiation of ‘void gnn::OneHot<mode, Dtype>::Forward(std::vector<std::shared_ptr<gnn::Variable>>&, std::vector<std::shared_ptr<gnn::Variable>>&, gnn::Phase) [with mode = gnn::CPU; Dtype = float]’:
src/nn/one_hot.cpp:54:1: required from here
src/nn/one_hot.cpp:43:9: error: ‘memcpy’ was not declared in this scope
memcpy(output.data->row_ptr, idxes.data(), sizeof(int) * idxes.size());
^
src/nn/one_hot.cpp:44:9: error: ‘memcpy’ was not declared in this scope, and no declarations were found by argument-dependent lookup at the point of instantiation [-fpermissive]
memcpy(output.data->col_idx, input.data->ptr, sizeof(int) * input.rows());
^
src/nn/one_hot.cpp:43:9: note: ‘memcpy’ declared here, later in the translation unit
memcpy(output.data->row_ptr, idxes.data(), sizeof(int) * idxes.size());
^
src/nn/one_hot.cpp: In instantiation of ‘void gnn::OneHot<mode, Dtype>::Forward(std::vector<std::shared_ptr<gnn::Variable>>&, std::vector<std::shared_ptr<gnn::Variable>>&, gnn::Phase) [with mode = gnn::CPU; Dtype = double]’:
src/nn/one_hot.cpp:54:1: required from here
src/nn/one_hot.cpp:43:9: error: ‘memcpy’ was not declared in this scope
src/nn/one_hot.cpp:44:9: error: ‘memcpy’ was not declared in this scope, and no declarations were found by argument-dependent lookup at the point of instantiation [-fpermissive]
memcpy(output.data->col_idx, input.data->ptr, sizeof(int) * input.rows());
^
src/nn/one_hot.cpp:43:9: note: ‘memcpy’ declared here, later in the translation unit
memcpy(output.data->row_ptr, idxes.data(), sizeof(int) * idxes.size());
^
Makefile:53: recipe for target 'build_cpuonly/objs/cxx/nn/one_hot.o' failed
graph_comb/code/s2v_mvc$ ./run_nstep_dqn.sh
Traceback (most recent call last):
File "main.py", line 78, in
api = MvcLib(sys.argv)
File "/graph_comb/code/s2v_mvc/mvc_lib/mvc_lib.py", line 18, in __init__
arr[:] = args
TypeError: bytes or integer address expected instead of str instance
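That TypeError is a Python 3 issue: a ctypes c_char_p array expects bytes, while sys.argv yields str. A sketch of the usual fix; the helper name here is my own, only the encode step is the actual change needed in mvc_lib.py:

```python
import ctypes

def to_ctypes_argv(args):
    """Build the c_char_p array a C library expects from a list of str.
    In Python 3, each str must be encoded to bytes before assignment."""
    arr = (ctypes.c_char_p * len(args))()
    # Without .encode(), "arr[:] = args" raises:
    # TypeError: bytes or integer address expected instead of str instance
    arr[:] = [a.encode() if isinstance(a, str) else a for a in args]
    return arr

argv = to_ctypes_argv(["main.py", "-learning_rate", "0.001"])
assert argv[1] == b"-learning_rate"
```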
line 46 in graphnn/src/nn/l2_col_norm.cpp
should it be ElewiseMul?
Hello,
Thank you very much for your very interesting paper and code!
I have been trying to apply your method and in particular, I was hoping to study the output of the Embedded Mean Field algorithm (algorithm 1) of your paper. I admit to being a little lost with all of the classes and types: I was wondering if there was any way of saving each of the node embeddings to a txt file (before they are collapsed for graph embedding)?
Thank you very much for your time and help!
Best regards,
Claire
Hi. I tried to run the graph classification example but I experienced the following error:
Assertion isReady[VarIdx(p)] failed in src/nn/factor_graph.cpp line 173: required variable ReduceMean_21:out_0 is not ready
I'm running the code on Ubuntu 16.04, and I read in another closed issue that this might be the problem.
As the volume of data keeps increasing, I am curious: is there any plan for a distributed version of graphnn?
I have tried v4.0.0 and v3.0.0, and neither of them runs properly on example/mnist.
When compiling graphnn with libfmt v3.0.0 I got a warning:
include/fmt/format.h(2591): warning: statement is unreachable
and when building the mnist example afterwards I got an error:
➜ mnist ✗ make -j
g++ -Wall -O3 -std=c++11 -DUSE_GPU -I/usr/local/cuda/include -I/opt/intel/mkl/include -I../../include -Iinclude -o build/mnist mnist.cpp ../../build/lib/libgnn.a -L../../build/lib -lgnn -lm -L/usr/local/cuda/lib64 -lcudart -lcublas -lcurand -lcusparse -lmkl_rt -lfmt
/usr/bin/ld: cannot find -lfmt
collect2: error: ld returned 1 exit status
Makefile:21: recipe for target 'build/mnist' failed
make: *** [build/mnist] Error 1
I have already installed fmt in the directory /usr/local/include. But when I run make, it reports the error: ‘sprintf’ is not a member of ‘fmt’. What happened, and how can I fix it? Thanks a lot if you can share some advice!
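The ‘sprintf’ is not a member of ‘fmt’ error usually means the system-installed libfmt is newer than the one graphnn targets: fmt::sprintf moved out of fmt/format.h into fmt/printf.h in later releases. A sketch of pinning libfmt to v3.0.0, the version mentioned in the warning above (install prefix is illustrative):

```shell
# Build and install libfmt v3.0.0 from source.
git clone https://github.com/fmtlib/fmt.git
cd fmt
git checkout 3.0.0
mkdir build && cd build
cmake -DCMAKE_INSTALL_PREFIX=/usr/local ..
make -j && sudo make install
```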
I tried to run the mnist example but there was an error:
./run.sh
60000 images for training
10000 images for test
testing
Assertion `isReady[VarIdx(p)]` failed in src/nn/factor_graph.cpp line 170: required variable ReduceMean_7:out_0 is not ready
I also found the similar error with graph classification example
Assertion `isReady[VarIdx(p)]` failed in src/nn/factor_graph.cpp line 170: required variable ReduceMean_21:out_0 is not ready
Any idea what the problem is?
In your paper there are two schemes, but in the graph_classification example there is only an implementation of mean field.
Loopy belief propagation contains two phases, and I am curious how to efficiently calculate v_i_j. Unlike mean field, which simply does a matrix multiplication, loopy belief propagation is somewhat complicated, because we need to exclude neighbor j when we calculate v_i_j. Of course, we could implement it by setting i's neighbor j to zero when calculating v_i_j, but that is not efficient.
So, any hint on that?
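One standard trick (a sketch of my own, not code from this repo): since v_i_j aggregates over i's neighbors excluding j, compute the full per-node aggregate once and subtract the single excluded message. This drops the cost from O(sum_i deg(i)^2) to O(|E|) per iteration:

```python
def edge_messages(adj, msg):
    """adj: {node: set(neighbors)}; msg: {(k, i): float}, message from k to i.
    Returns v[(i, j)] = sum of msg[(k, i)] over k in adj[i] excluding j,
    computed as (full incoming sum at i) minus the one excluded message."""
    total = {i: sum(msg[(k, i)] for k in nbrs) for i, nbrs in adj.items()}
    return {(i, j): total[i] - msg[(j, i)]
            for i, nbrs in adj.items() for j in nbrs}

# Tiny star graph: node 0 connected to nodes 1 and 2.
adj = {0: {1, 2}, 1: {0}, 2: {0}}
msg = {(1, 0): 1.0, (2, 0): 2.0, (0, 1): 0.5, (0, 2): 0.25}
v = edge_messages(adj, msg)
assert v[(0, 1)] == 2.0  # only msg from 2 to 0 survives the exclusion
assert v[(0, 2)] == 1.0  # only msg from 1 to 0 survives
```

Note this subtraction trick relies on the aggregation being invertible (sums, or products via logs); it does not apply to max-style aggregations.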
Hi, when I make your latest version of the code, I get an error: "src/nn/row_selection.cpp:3:21: fatal error: tbb/tbb.h: No such file or directory". So I want to ask: which version of tbb did you use? Could you point me to an installation tutorial?
I run the graph_classification example program and can get the results
on accuracy and loss. Is it possible to directly get the predicted result?
The probabilities or the labels of the results. I tried to push
the output into targets by adding
targets.push_back(output);
at https://github.com/Hanjun-Dai/graphnn/blob/master/examples/graph_classification/src/kernel_mean_field.cpp#L61.
When retrieving the results, I use
dynamic_cast<TensorTemplate<mode, DENSE, Dtype>*>(t.get())->Serialize(outfile);
at line https://github.com/Hanjun-Dai/graphnn/blob/master/examples/graph_classification/include/nn_common.h#L167.
Also I did some other little changes to make sure the first two elements are scalars and the newly-added element is tensor.
But this does not work well: it creates the outfile, but the file is empty. Are there any suggestions on this? I plan to get the outputs and compute the AUC scores for the classification task.
I tried to build this project but error occurs:
find: -printf: unknown primary or operator
ar rcs build_cpuonly/lib/libgnn.a
ar: no archive members specified
I tried to delete all -printf occurrences in the Makefile, but another error occurs:
make: *** No rule to make target `build_cpuonly/objs/cxx/src//nn/hit_at_k.o', needed by `build_cpuonly/lib/libgnn.a'. Stop.
I tried to install findutils as suggested by this, but it didn't work.
I think this is a trivial error that can be easily solved, but it seems that nobody else has encountered the same issue.
Can anybody provide some instructions? Thanks!
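"find: -printf: unknown primary or operator" suggests a BSD find (e.g. on macOS), which lacks GNU's -printf. Two options: install GNU findutils and use gfind in the Makefile, or replace -printf with a portable -print piped through sed. A sketch of the portable form, demonstrated on a throwaway directory tree:

```shell
# Portable stand-in for GNU `find src -name '*.cpp' -printf '%P\n'`,
# which prints paths relative to the search root. Demo on a temp tree:
mkdir -p /tmp/graphnn_demo/src/nn
touch /tmp/graphnn_demo/src/nn/hit_at_k.cpp
cd /tmp/graphnn_demo
find src -name '*.cpp' -print | sed 's|^src/||'
```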
I have downloaded MKL and modified the make_common file to point to the directory; however, I am still getting a fatal error:
-o build/objs/cxx/nn/tanh.o src/nn/tanh.cpp
g++ -Wall -O3 -std=c++11 -I/usr/local/cuda/include -I/usr/parallel_studio_xe_2020/compilers_and_libraries_2020/linux/mkl/include -I/usr/parallel_studio_xe_2020/compilers_and_libraries_2020/linux/tbb/include -Iinclude -fPIC -DUSE_GPU -MMD -c -o build/objs/cxx/nn/matmul.o src/nn/matmul.cpp
In file included from src/tensor/cpu_row_sparse_tensor.cpp:5:0:
include/tensor/mkl_helper.h:4:17: fatal error: mkl.h: No such file or directory
#include <mkl.h>
^
compilation terminated.
In file included from src/tensor/cpu_dense_tensor.cpp:6:0:
include/tensor/mkl_helper.h:4:17: fatal error: mkl.h: No such file or directory
#include <mkl.h>
^
compilation terminated.
In file included from src/nn/jagged_softmax.cpp:2:0:
include/tensor/mkl_helper.h:4:17: fatal error: mkl.h: No such file or directory
#include <mkl.h>
My make_common file is
INTEL_ROOT := /usr/parallel_studio_xe_2020/compilers_and_libraries_2020/linux
MKL_ROOT = $(INTEL_ROOT)/mkl
TBB_ROOT = $(INTEL_ROOT)/tbb
USE_GPU = 1
/usr/bin/ld: cannot find -lmkl_rt
/usr/bin/ld: cannot find -ltbb
collect2: error: ld returned 1 exit status
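ld failing to find -lmkl_rt and -ltbb means the library directories are not on the linker's search path. A sketch assuming the Parallel Studio layout from the make_common above; the lib subdirectory names vary between releases, so check them on your system:

```shell
# Make the MKL/TBB shared libraries visible to both the linker and the
# runtime loader (subdirectory names vary between releases).
export INTEL_ROOT=/usr/parallel_studio_xe_2020/compilers_and_libraries_2020/linux
export LIBRARY_PATH=$INTEL_ROOT/mkl/lib/intel64:$INTEL_ROOT/tbb/lib/intel64/gcc4.8:$LIBRARY_PATH
export LD_LIBRARY_PATH=$INTEL_ROOT/mkl/lib/intel64:$INTEL_ROOT/tbb/lib/intel64/gcc4.8:$LD_LIBRARY_PATH
```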
Hi there,
I'm using Ubuntu 18.04 and planning to install a newer version of CUDA, so I have been trying to replicate the experiments of this study in a Docker image. After building the whole workspace and executing ./run.sh in examples/mnist, I get "thrust::system_error: unknown error".
Have you ever seen this error on a host machine or in a Docker image?
What is the format of the graph datasets used in the examples?
Is the txt file the adjacency list of the graph?
Thanks