Graph Machine Learning, published by Packt
License: MIT License
I tried to run the notebook "01_Shallow_Embeddings.ipynb", but unfortunately when I run this excerpt:

```python
import networkx as nx
from gem.embedding.gf import GraphFactorization

G = nx.barbell_graph(m1=10, m2=4)
draw_graph(G)  # helper defined earlier in the notebook
gf = GraphFactorization(d=2, data_set=None, max_iter=10000, eta=1e-4, regu=1.0)
gf.learn_embedding(G)
```
I get the following error:

```
./gf not found. Reverting to Python implementation. Please compile gf, place node2vec in the path and grant executable permission
Iter id: 0, Objective: 95.0097, f1: 95.0035, f2: 0.00623775
```
The error occurs specifically on this instruction: `gf.learn_embedding(G)`.
I followed the code with

```python
from gem.embedding.gf import GraphFactorization

G = nx.barbell_graph(m1=10, m2=4)
draw_graph(G)
gf = GraphFactorization(d=2, data_set=None, max_iter=10000, eta=1e-4, regu=1.0)
gf.learn_embedding(G)
```

and got this output:

```
[Errno 13] Permission denied: 'gem/c_exe/gf'
./gf not found. Reverting to Python implementation. Please compile gf, place node2vec in the path and grant executable permission
Iter id: 0, Objective: 95.0047, f1: 95.001, f2: 0.00377086
```

It seems there's something I missed?
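For what it's worth, the `Errno 13` traceback points at the fix: the bundled `gf` binary shipped with GEM is not marked executable. A minimal sketch of granting the permission from Python (the path is taken from the error message above and is an assumption; adjust it to wherever `gem` is installed in your environment):

```python
import os
import stat

def make_executable(path):
    """Add execute bits (u+x, g+x, o+x) to a file, like `chmod +x`."""
    mode = os.stat(path).st_mode
    os.chmod(path, mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)

# Path from the Errno 13 message; under conda it typically lives at
# <env>/lib/python3.7/site-packages/gem/c_exe/gf (an assumption).
gf_binary = "gem/c_exe/gf"
if os.path.exists(gf_binary):
    make_executable(gf_binary)
```

Note that the warning itself is not fatal: the `Iter id: 0, Objective: ...` lines show the pure-Python fallback still runs, so this change only enables the faster native implementation.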
Dear authors,
I like your book very much and it helped me get a lot of useful insights, but I have a problem when I do the implementation.
I ran the graph classification using GCNs code in Chapter04/04_Graph_Neural_Networks.ipynb, but when I try to train the model with

```python
history = model.fit(
    train_gen, epochs=epochs, verbose=1, validation_data=test_gen, shuffle=True,
)
```
I got an error:

```
UnimplementedError: Cast string to float is not supported
	 [[node binary_crossentropy/Cast (defined at :2) ]] [Op:__inference_train_function_2594]

Function call stack:
train_function
```
It seems like the model received a string, but I'm not very good at Python and I'm not able to fix the bug. Could you please help me with this? Thank you very much!
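Not one of the authors, but a `Cast string to float is not supported` error from `binary_crossentropy` usually means the labels reached the loss as strings. A minimal sketch of the idea, assuming (hypothetically) the labels were loaded from disk as strings; cast them to floats before they are wrapped in the generators passed to `model.fit()`:

```python
import numpy as np

# Hypothetical labels as loaded from a file: strings instead of numbers.
raw_labels = ["1", "0", "0", "1"]

# binary_crossentropy expects numeric targets, so cast before building
# train_gen / test_gen from the data.
labels = np.asarray(raw_labels, dtype="float32")
print(labels.dtype)  # float32
```

If the labels are words rather than digit strings (e.g. "yes"/"no"), map them to numbers first, for instance with a dict or sklearn's `LabelEncoder`, before the cast.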
```python
import networkx as nx
from node2vec import Node2Vec

G = nx.barbell_graph(m1=7, m2=4)
draw_graph(G, nx.spring_layout(G))

node2vec = Node2Vec(G, dimensions=2)
model = node2vec.fit(window=10)
```
```
TypeError                                 Traceback (most recent call last)
in ()
      6 
      7 node2vec = Node2Vec(G, dimensions=2)
----> 8 model = node2vec.fit(window=10)

/usr/local/lib/python3.7/dist-packages/node2vec/node2vec.py in fit(self, **skip_gram_params)
    186             skip_gram_params['sg'] = 1
    187 
--> 188         return gensim.models.Word2Vec(self.walks, **skip_gram_params)

TypeError: __init__() got an unexpected keyword argument 'size'
```
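This `TypeError` comes from gensim 4.x, which renamed several `Word2Vec` keyword arguments (`size` became `vector_size`, `iter` became `epochs`), while node2vec 0.3.x still passes the old names. The practical fix is to pin gensim below 4 (e.g. `gensim==3.8.3`) or upgrade node2vec to a release built for gensim 4. Purely to illustrate what changed, here is a small hypothetical adapter for the renamed kwargs:

```python
def adapt_word2vec_kwargs(kwargs, gensim_major):
    """Map pre-4.0 gensim Word2Vec kwargs to their gensim>=4 names.

    Hypothetical helper for illustration only; in practice, pin gensim<4
    or upgrade the library that calls Word2Vec.
    """
    renames = {"size": "vector_size", "iter": "epochs"}
    if gensim_major < 4:
        return dict(kwargs)
    return {renames.get(k, k): v for k, v in kwargs.items()}

old_style = {"size": 2, "window": 10, "iter": 5}
print(adapt_word2vec_kwargs(old_style, 4))
# {'vector_size': 2, 'window': 10, 'epochs': 5}
```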
Hello,
I am trying to execute examples from Graph-Machine-Learning/Chapter07/01_nlp_graph_creation.ipynb in Google Colab.
At line 5, `corpus = pd.DataFrame([..])`, I get this error:
```
---------------------------------------------------------------------------
LookupError                               Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/nltk/corpus/util.py in __load(self)
     79             except LookupError as e:
---> 80                 try: root = nltk.data.find('{}/{}'.format(self.subdir, zip_name))
     81                 except LookupError: raise e

5 frames
LookupError: 
**********************************************************************
  Resource reuters not found.
  Please use the NLTK Downloader to obtain the resource:

  >>> import nltk
  >>> nltk.download('reuters')

  Searched in:
    - '/root/nltk_data'
    - '/usr/share/nltk_data'
    - '/usr/local/share/nltk_data'
    - '/usr/lib/nltk_data'
    - '/usr/local/lib/nltk_data'
    - '/usr/nltk_data'
    - '/usr/lib/nltk_data'
**********************************************************************

During handling of the above exception, another exception occurred:

LookupError                               Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/nltk/data.py in find(resource_name, paths)
    671     sep = '*' * 70
    672     resource_not_found = '\n%s\n%s\n%s\n' % (sep, msg, sep)
--> 673     raise LookupError(resource_not_found)
    674 
    675 

LookupError: 
**********************************************************************
  Resource reuters not found.
  Please use the NLTK Downloader to obtain the resource:

  >>> import nltk
  >>> nltk.download('reuters')

  Searched in:
    - '/root/nltk_data'
    - '/usr/share/nltk_data'
    - '/usr/local/share/nltk_data'
    - '/usr/lib/nltk_data'
    - '/usr/local/lib/nltk_data'
    - '/usr/nltk_data'
    - '/usr/lib/nltk_data'
**********************************************************************
```
Even after following the instructions and running `nltk.download('reuters')`, I am still getting the same error. Reuters is downloaded in /root/:

```
~/nltk_data/corpora# ls
reuters.zip
```

Could you please help me?
Thanks,
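One thing that sometimes bites on Colab: only `reuters.zip` is present under `nltk_data/corpora`, and extracting the archive in place so the unzipped `reuters/` folder sits next to it has fixed this lookup for others (also make sure `nltk.download('reuters')` ran in the same runtime, since a restart wipes `/root`). A hedged sketch, using the corpora path from the `ls` output above:

```python
import os
import zipfile

def unzip_nltk_corpus(corpora_dir, name):
    """Extract <name>.zip inside an nltk_data corpora directory so the
    unzipped <name>/ folder sits next to the archive."""
    archive = os.path.join(corpora_dir, name + ".zip")
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(corpora_dir)
    return os.path.join(corpora_dir, name)

# Path from the `ls` output above; adjust if your nltk_data lives elsewhere.
corpora = os.path.expanduser("~/nltk_data/corpora")
if os.path.exists(os.path.join(corpora, "reuters.zip")):
    unzip_nltk_corpus(corpora, "reuters")
```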
Hello,
I am using conda. I created a new environment called "new_env" with python=3.7.16 (I had to use an older version of Python because stellargraph has difficulties with python>3.8):

```
conda create -n new_env python=3.7.16
conda activate new_env
```

Then, in the terminal with new_env active, I typed:

```
pip install -r requirements.txt
```

where the requirements.txt file includes the following (from Chapter 3 in the book):
```
Jupyter==1.0.0
networkx==2.5
matplotlib==3.2.2
karateclub==1.0.19
node2vec==0.3.3
tensorflow==2.4.0
scikit-learn==0.24.0
git+https://github.com/palash1992/GEM.git
git+https://github.com/stellargraph/stellargraph.git
```
I have two questions.
Question 1: When I run the "01_Shallow_Embeddings.ipynb" notebook from Chapter 3, the cell below fails in the GraphFactorization section:

```python
from gem.embedding.gf import GraphFactorization

G = nx.barbell_graph(m1=10, m2=4)
draw_graph(G)
gf = GraphFactorization(d=2, data_set=None, max_iter=10000, eta=1e-4, regu=1.0)
gf.learn_embedding(G)
```

Error:

```
./gf not found. Reverting to Python implementation. Please compile gf, place node2vec in the path and grant executable permission.
```
Do I go to site-packages in the active conda environment and compile/run a setup.py file in the GEM library? Or do I need to clone the GEM library (as well as stellargraph) separately rather than putting them in requirements.txt, though I thought the procedure is the same?
Question 2: I also get an error when I run the DeepWalk example in the same notebook.
The code:

```python
import networkx as nx
from karateclub.node_embedding.neighbourhood.deepwalk import DeepWalk

G = nx.barbell_graph(m1=10, m2=4)
draw_graph(G)
dw = DeepWalk(dimensions=2)
dw.fit(G)
```
The error I get:

```
TypeError                                 Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_22368\195704241.py in <module>
      6 
      7 dw = DeepWalk(dimensions=2)
----> 8 dw.fit(G)

~\anaconda3\envs\gml_book_ch3\lib\site-packages\karateclub\node_embedding\neighbourhood\deepwalk.py in fit(self, graph)
     57                          min_count=self.min_count,
     58                          workers=self.workers,
---> 59                          seed=self.seed)
     60 
     61         num_of_nodes = graph.number_of_nodes()

TypeError: __init__() got an unexpected keyword argument 'iter'
```
Did anyone have similar issues or know the solution?
Thanks!
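Not the authors, but this DeepWalk `iter` error (like the node2vec `size` error reported in another issue) comes from gensim 4.x renaming `Word2Vec` parameters: `iter` became `epochs` and `size` became `vector_size`, while karateclub 1.0.19 still uses the old names. One workaround, assuming you stay on the book's library versions, is to pin gensim below 4 in requirements.txt, e.g.:

```
gensim==3.8.3
```

Alternatively, upgrade karateclub (and node2vec) to releases built against gensim 4.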
I had an issue with numpy and gensim for Chapter 3 (the first chapter whose examples I ran). This was the error, specifically:

```
ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 88 from C header, got 80 from PyObject
```

My solution was to use a conda environment. I set up an environment.yml file as below:
```
name: packt_graphml_env
channels:
  - conda-forge
  - defaults
dependencies:
  - python==3.8.*
  - pip
  - ipykernel
  - pandas
  - gensim==3.8.3
  - numpy==1.19.5
  - networkx==2.5.*
  - matplotlib==3.2.*
  - node2vec==0.3.*
  - karateclub==1.0.*
  - scipy==1.6.*
  - tensorflow==2.4.1
  - scikit-learn==0.24.*
  - stellargraph::stellargraph
  - pip:
      #- "--editable=git+https://github.com/stellargraph/stellargraph.git#egg=stellargraph"
      - "--editable=git+https://github.com/palash1992/GEM.git#egg=GEM"
```
and install/update with:

```
mamba env update -f environment.yml
```

Of course, replace mamba with conda if you don't have mamba. I added ipykernel for a Jupyter kernel, which was already installed in my base conda environment.
Hope this helps someone.