
sematch's Introduction

logo


Introduction

Sematch is an integrated framework for the development, evaluation, and application of semantic similarity for Knowledge Graphs (KGs). Sematch makes it easy to compute semantic similarity scores of concepts, words, and entities. It focuses on knowledge-based semantic similarity metrics that rely on structural knowledge in a taxonomy (e.g. depth, path length, least common subsumer) and statistical information content (corpus-based IC and graph-based IC). Knowledge-based approaches differ from their corpus-based counterparts, which rely on co-occurrence (e.g. Pointwise Mutual Information) or distributional similarity (Latent Semantic Analysis, Word2Vec, GloVe, etc.). Knowledge-based approaches are usually applied to structured KGs, while corpus-based approaches are normally applied to textual corpora.

In text analysis applications, a common pipeline applies semantic similarity from the concept level up to the word and sentence levels. For example, word similarity is first computed from the similarity scores of WordNet concepts, and sentence similarity is then computed by composing word similarity scores. Finally, document similarity can be computed by identifying important sentences, e.g. with TextRank.
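The composition step can be sketched as follows: each word in the first sentence is matched to its most similar word in the second, and the best-match scores are averaged. `word_sim` and the `TOY_SIM` scores below are hypothetical stand-ins for a WordNet-based word similarity function, so the example is self-contained.

```python
# Toy pairwise word similarity scores (a stand-in for WordNet-based
# similarity such as Sematch's WordNetSimilarity.word_similarity).
TOY_SIM = {
    frozenset(['dog', 'pet']): 0.9,
    frozenset(['cat', 'pet']): 0.85,
}

def word_sim(w1, w2):
    if w1 == w2:
        return 1.0
    return TOY_SIM.get(frozenset([w1, w2]), 0.0)

def sentence_similarity(words1, words2):
    # average of best-match word similarity scores
    if not words1 or not words2:
        return 0.0
    best = [max(word_sim(w1, w2) for w2 in words2) for w1 in words1]
    return sum(best) / len(best)

print(round(sentence_similarity(['dog', 'cat'], ['pet']), 3))  # 0.875
```

This max-alignment average is only one common composition scheme; weighted or bidirectional variants are also used in practice.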

logo

KG-based applications follow a similar pipeline in using semantic similarity, going from concept similarity (e.g. http://dbpedia.org/class/yago/Actor109765278) to entity similarity (e.g. http://dbpedia.org/resource/Madrid). Furthermore, document similarity can be computed by extracting entities and composing their entity similarity scores.

kg

In KGs, concepts usually denote ontology classes while entities refer to ontology instances. Moreover, concepts are usually organized into hierarchical taxonomies, such as the DBpedia ontology classes, so quantifying concept similarity in a KG relies on the same semantic information (e.g. path length, depth, least common subsumer, information content) and semantic similarity metrics (e.g. Path, Wu & Palmer, Li, Resnik, Lin, Jiang & Conrath, and WPath). Consequently, Sematch provides an integrated framework to develop and evaluate semantic similarity metrics for concepts, words, entities, and their applications.


Getting started: 20 minutes to Sematch

Install Sematch

You need to install the scientific computing libraries numpy and scipy first. An example of installing them with pip is shown below.

pip install numpy scipy

Depending on your OS, you can install them in different ways. After successfully installing numpy and scipy, you can install Sematch with the following commands.

pip install sematch
python -m sematch.download

Alternatively, you can clone the repository and install the development version with setuptools. We recommend updating your pip and setuptools first.

git clone https://github.com/gsi-upm/sematch.git
cd sematch
python setup.py install

We also provide a Sematch-Demo Server. You can use it to experiment with the main functionalities, or take it as an example of using Sematch to develop applications. Please check our documentation for more details.

Computing Word Similarity

The core module of Sematch measures semantic similarity between concepts that are represented in concept taxonomies. Word similarity is computed based on the maximum semantic similarity of WordNet concepts. You can use Sematch to compute multilingual word similarity based on WordNet with a variety of semantic similarity metrics.

from sematch.semantic.similarity import WordNetSimilarity
wns = WordNetSimilarity()

# Computing English word similarity using Li method
wns.word_similarity('dog', 'cat', 'li') # 0.449327301063
# Computing Spanish word similarity using Lin method
wns.monol_word_similarity('perro', 'gato', 'spa', 'lin') #0.876800984373
# Computing Chinese word similarity using  Wu & Palmer method
wns.monol_word_similarity('狗', '猫', 'cmn', 'wup') # 0.857142857143
# Computing Spanish and English word similarity using Resnik method
wns.crossl_word_similarity('perro', 'cat', 'spa', 'eng', 'res') #7.91166650904
# Computing Spanish and Chinese word similarity using Jiang & Conrath method
wns.crossl_word_similarity('perro', '猫', 'spa', 'cmn', 'jcn') #0.31023804699
# Computing Chinese and English word similarity using WPath method
wns.crossl_word_similarity('狗', 'cat', 'cmn', 'eng', 'wpath')#0.593666388463
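As noted above, the word-level score is the maximum similarity over all pairs of the two words' concepts (synsets). A self-contained sketch of that composition, where `concept_sim` and the synset labels are hypothetical stand-ins for WordNet synsets and a taxonomy-based metric:

```python
def word_similarity(synsets1, synsets2, concept_sim):
    # best score over every concept pair; 0.0 if either word has no synsets
    return max((concept_sim(s1, s2) for s1 in synsets1 for s2 in synsets2),
               default=0.0)

# toy concept similarity: 1.0 for identical concepts, 0.5 otherwise
toy_concept_sim = lambda a, b: 1.0 if a == b else 0.5

print(word_similarity(['dog.n.01', 'frank.n.02'], ['dog.n.01'],
                      toy_concept_sim))  # 1.0
print(word_similarity([], ['dog.n.01'], toy_concept_sim))  # 0.0
```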

Computing semantic similarity of YAGO concepts.

from sematch.semantic.similarity import YagoTypeSimilarity
sim = YagoTypeSimilarity()

#Measuring YAGO concept similarity through WordNet taxonomy and corpus based information content
sim.yago_similarity('http://dbpedia.org/class/yago/Dancer109989502','http://dbpedia.org/class/yago/Actor109765278', 'wpath') #0.642
sim.yago_similarity('http://dbpedia.org/class/yago/Dancer109989502','http://dbpedia.org/class/yago/Singer110599806', 'wpath') #0.544
#Measuring YAGO concept similarity based on graph-based IC
sim.yago_similarity('http://dbpedia.org/class/yago/Dancer109989502','http://dbpedia.org/class/yago/Actor109765278', 'wpath_graph') #0.423
sim.yago_similarity('http://dbpedia.org/class/yago/Dancer109989502','http://dbpedia.org/class/yago/Singer110599806', 'wpath_graph') #0.328
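The wpath metric combines the shortest path length between two concepts with the information content (IC) of their least common subsumer; `wpath_graph` swaps corpus-based IC for graph-based IC. A sketch of the formula, where k is a tunable parameter and the input numbers are illustrative rather than values computed from a real taxonomy:

```python
# WPath: similarity decreases with path length, weighted by k raised to the
# IC of the least common subsumer (higher IC = more specific subsumer).
def wpath(path_length, ic_lcs, k=0.8):
    return 1.0 / (1.0 + path_length * k ** ic_lcs)

print(wpath(0, 3.0))                   # 1.0 (identical concepts)
print(wpath(2, 5.0) > wpath(4, 5.0))   # True: shorter paths score higher
print(wpath(2, 8.0) > wpath(2, 1.0))   # True: more specific subsumers score higher
```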

Computing semantic similarity of DBpedia concepts.

from sematch.semantic.graph import DBpediaDataTransform, Taxonomy
from sematch.semantic.similarity import ConceptSimilarity
concept = ConceptSimilarity(Taxonomy(DBpediaDataTransform()),'models/dbpedia_type_ic.txt')
concept.name2concept('actor')
concept.similarity('http://dbpedia.org/ontology/Actor','http://dbpedia.org/ontology/Film', 'path')
concept.similarity('http://dbpedia.org/ontology/Actor','http://dbpedia.org/ontology/Film', 'wup')
concept.similarity('http://dbpedia.org/ontology/Actor','http://dbpedia.org/ontology/Film', 'li')
concept.similarity('http://dbpedia.org/ontology/Actor','http://dbpedia.org/ontology/Film', 'res')
concept.similarity('http://dbpedia.org/ontology/Actor','http://dbpedia.org/ontology/Film', 'lin')
concept.similarity('http://dbpedia.org/ontology/Actor','http://dbpedia.org/ontology/Film', 'jcn')
concept.similarity('http://dbpedia.org/ontology/Actor','http://dbpedia.org/ontology/Film', 'wpath')

Computing semantic similarity of DBpedia entities.

from sematch.semantic.similarity import EntitySimilarity
sim = EntitySimilarity()
sim.similarity('http://dbpedia.org/resource/Madrid','http://dbpedia.org/resource/Barcelona') #0.409923677282
sim.similarity('http://dbpedia.org/resource/Apple_Inc.','http://dbpedia.org/resource/Steve_Jobs')#0.0904545454545
sim.relatedness('http://dbpedia.org/resource/Madrid','http://dbpedia.org/resource/Barcelona')#0.457984139871
sim.relatedness('http://dbpedia.org/resource/Apple_Inc.','http://dbpedia.org/resource/Steve_Jobs')#0.465991132787

Evaluate semantic similarity metrics with word similarity datasets

from sematch.evaluation import WordSimEvaluation
from sematch.semantic.similarity import WordNetSimilarity
evaluation = WordSimEvaluation()
evaluation.dataset_names()
wns = WordNetSimilarity()
# define similarity metrics
wpath = lambda x, y: wns.word_similarity_wpath(x, y, 0.8)
# evaluate similarity metrics with SimLex dataset
evaluation.evaluate_metric('wpath', wpath, 'noun_simlex')
# perform Steiger's Z significance test
evaluation.statistical_test('wpath', 'path', 'noun_simlex')
# define similarity metrics for Spanish words
wpath_es = lambda x, y: wns.monol_word_similarity(x, y, 'spa', 'path')
# define cross-lingual similarity metrics for English-Spanish
wpath_en_es = lambda x, y: wns.crossl_word_similarity(x, y, 'eng', 'spa', 'wpath')
# evaluate metrics in multilingual word similarity datasets
evaluation.evaluate_metric('wpath_es', wpath_es, 'rg65_spanish')
evaluation.evaluate_metric('wpath_en_es', wpath_en_es, 'rg65_EN-ES')
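Evaluation correlates the metric's scores with human judgements over the dataset's word pairs, typically via Spearman's rank correlation. A self-contained sketch of that correlation (assuming no tied values; not Sematch's internal implementation):

```python
def spearman(xs, ys):
    """Spearman's rho via the rank-difference formula (no ties assumed)."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0] * len(vals)
        for rank, i in enumerate(order):
            r[i] = rank + 1
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

print(spearman([1, 2, 3], [10, 20, 30]))  # 1.0 (perfectly correlated ranks)
```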

Evaluate semantic similarity metrics with category classification

Although word similarity correlation is the standard way to evaluate semantic similarity metrics, it relies on human judgements over word pairs, which may not reflect performance in real applications. Therefore, apart from word similarity evaluation, the Sematch evaluation framework also includes a simple aspect category classification task. The task classifies noun concepts such as pasta, noodle, steak, and tea into their ontological parent categories, such as FOOD and DRINKS.

from sematch.evaluation import AspectEvaluation
from sematch.application import SimClassifier, SimSVMClassifier
from sematch.semantic.similarity import WordNetSimilarity

# create aspect classification evaluation
evaluation = AspectEvaluation()
# load the dataset
X, y = evaluation.load_dataset()
# define word similarity function
wns = WordNetSimilarity()
word_sim = lambda x, y: wns.word_similarity(x, y)
# Train and evaluate metrics with unsupervised classification model
simclassifier = SimClassifier.train(zip(X,y), word_sim)
evaluation.evaluate(X,y, simclassifier)

macro average:  (0.65319812882333839, 0.7101245049198579, 0.66317566364913016, None)
micro average:  (0.79210167952791644, 0.79210167952791644, 0.79210167952791644, None)
weighted average:  (0.80842645056024054, 0.79210167952791644, 0.79639496616636352, None)
accuracy:  0.792101679528
             precision    recall  f1-score   support

    SERVICE       0.50      0.43      0.46       519
 RESTAURANT       0.81      0.66      0.73       228
       FOOD       0.95      0.87      0.91      2256
   LOCATION       0.26      0.67      0.37        54
   AMBIENCE       0.60      0.70      0.65       597
     DRINKS       0.81      0.93      0.87       752

avg / total       0.81      0.79      0.80      4406
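The unsupervised idea can be sketched as follows: represent each category by a label word and assign an input noun to the category it is most similar to. This is a simplification, not SimClassifier's exact implementation, and `toy_sim` is a hypothetical stand-in for WordNet similarity:

```python
# Toy word-to-category similarity scores (stand-in for WordNet similarity).
TOY_SIM = {
    ('pasta', 'food'): 0.9, ('pasta', 'drinks'): 0.2,
    ('tea', 'food'): 0.3, ('tea', 'drinks'): 0.95,
}

def toy_sim(word, category):
    return TOY_SIM.get((word, category), 0.0)

def classify(word, categories, sim):
    # pick the category whose label is most similar to the input word
    return max(categories, key=lambda c: sim(word, c))

print(classify('pasta', ['food', 'drinks'], toy_sim))  # food
print(classify('tea', ['food', 'drinks'], toy_sim))    # drinks
```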

Matching Entities with type using SPARQL queries

You can use Sematch to download a list of entities having a specific type, using different languages. Sematch generates SPARQL queries and executes them against the DBpedia SPARQL endpoint.

from sematch.application import Matcher
matcher = Matcher()
# matching scientist entities from DBpedia
matcher.match_type('scientist')
matcher.match_type('científico', 'spa')
matcher.match_type('科学家', 'cmn')
matcher.match_entity_type('movies with Tom Cruise')

An example of an automatically generated SPARQL query:

SELECT DISTINCT ?s, ?label, ?abstract WHERE {
    {  
    ?s <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://dbpedia.org/class/yago/NuclearPhysicist110364643> . }
 UNION {  
    ?s <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://dbpedia.org/class/yago/Econometrician110043491> . }
 UNION {  
    ?s <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://dbpedia.org/class/yago/Sociologist110620758> . }
 UNION {  
    ?s <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://dbpedia.org/class/yago/Archeologist109804806> . }
 UNION {  
    ?s <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://dbpedia.org/class/yago/Neurolinguist110354053> . } 
    ?s <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://www.w3.org/2002/07/owl#Thing> . 
    ?s <http://www.w3.org/2000/01/rdf-schema#label> ?label . 
    FILTER( lang(?label) = "en") . 
    ?s <http://dbpedia.org/ontology/abstract> ?abstract . 
    FILTER( lang(?abstract) = "en") .
} LIMIT 5000
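A sketch of how such a UNION query could be assembled from a list of matched type URIs. `type_union_query` is a hypothetical helper for illustration, not Sematch's actual query generator:

```python
RDF_TYPE = '<http://www.w3.org/1999/02/22-rdf-syntax-ns#type>'

def type_union_query(type_uris, limit=5000):
    # one UNION branch per matched type URI
    unions = ' UNION '.join(
        '{ ?s %s <%s> . }' % (RDF_TYPE, uri) for uri in type_uris)
    return ('SELECT DISTINCT ?s ?label WHERE { %s '
            '?s <http://www.w3.org/2000/01/rdf-schema#label> ?label . '
            'FILTER( lang(?label) = "en") } LIMIT %d') % (unions, limit)

q = type_union_query([
    'http://dbpedia.org/class/yago/Sociologist110620758',
    'http://dbpedia.org/class/yago/Archeologist109804806'])
print('UNION' in q)  # True
```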

Entity feature extraction with Similarity Graph

Apart from semantic matching of entities from DBpedia, you can also use Sematch to extract entity features and apply semantic similarity analysis using graph-based ranking algorithms. Given a list of objects (concepts, words, entities), Sematch computes their pairwise semantic similarity and generates a similarity graph, where nodes denote objects and edges denote similarity scores. The example below uses a similarity graph to extract important words from an entity description.

from sematch.semantic.graph import SimGraph
from sematch.semantic.similarity import WordNetSimilarity
from sematch.nlp import Extraction, word_process
from sematch.semantic.sparql import EntityFeatures
from collections import Counter
tom = EntityFeatures().features('http://dbpedia.org/resource/Tom_Cruise')
words = Extraction().extract_nouns(tom['abstract'])
words = word_process(words)
wns = WordNetSimilarity()
word_graph = SimGraph(words, wns.word_similarity)
word_scores = word_graph.page_rank()
words, scores = zip(*Counter(word_scores).most_common(10))
print(words)
(u'picture', u'action', u'number', u'film', u'post', u'sport', 
u'program', u'men', u'performance', u'motion')
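The ranking step can be sketched without Sematch: build a weighted graph from pairwise similarity scores, then score each node by how central it is via a weighted PageRank. This `pagerank` is a small power-iteration toy, not SimGraph's implementation:

```python
def pagerank(weights, damping=0.85, iters=50):
    """weights[i][j]: similarity between node i and node j (0 on diagonal)."""
    n = len(weights)
    totals = [sum(row) for row in weights]  # total out-weight per node
    scores = [1.0 / n] * n
    for _ in range(iters):
        # each node receives rank from neighbours in proportion to edge weight
        scores = [
            (1 - damping) / n
            + damping * sum(scores[j] * weights[j][i] / totals[j]
                            for j in range(n) if totals[j] > 0)
            for i in range(n)
        ]
    return scores

# nodes 0 and 1 are highly similar to each other; node 2 is an outlier
w = [[0.0, 0.9, 0.1],
     [0.9, 0.0, 0.1],
     [0.1, 0.1, 0.0]]
scores = pagerank(w)
print(scores[0] > scores[2])  # True: central nodes rank higher
```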

Publications


Support

You can post bug reports and feature requests in GitHub issues. Make sure to read our guidelines first. This project is still under active development, approaching its goals. The project is mainly maintained by Ganggao Zhu. You can contact him via gzhu [at] dit.upm.es.


Why this name, Sematch and Logo?

The name Sematch combines the Spanish "se" with the English "match". It is also an abbreviation of "semantic matching", since semantic similarity metrics help to determine the semantic distance between concepts, words, and entities, instead of performing exact matching.

The logo of Sematch is based on the Chinese Yin and Yang, which appears in the I Ching. In a way, it correlates with 0 and 1 in computer science.

GSI Logo

sematch's People

Contributors

balkian, cif2cif, dhimmel, hopple, ishijo, patvdleer, rami3l


sematch's Issues

Python 3 compatibility

Several changes need to be made in order to make sematch compatible with Python 3. We are working on this in a separate branch: py3compat.

SyntaxError: Missing parentheses in call to 'print'. Did you mean print(...)?

Run this:

from sematch.semantic.similarity import WordNetSimilarity
wns = WordNetSimilarity()

print(wns.word_similarity('dog', 'cat', 'li'))

It returned this error:

Traceback (most recent call last):
  File "d:\resume-parsing-private\sematch_test.py", line 1, in <module>
    from sematch.semantic.similarity import WordNetSimilarity
  File "C:\Users\sin-yee.teoh\AppData\Local\Programs\Python\Python310\lib\site-packages\sematch\semantic\similarity.py", line 25, in <module>
    from sematch.semantic.sparql import EntityFeatures, StatSPARQL
  File "C:\Users\sin-yee.teoh\AppData\Local\Programs\Python\Python310\lib\site-packages\sematch\semantic\sparql.py", line 36
    print query
    ^^^^^^^^^^^
SyntaxError: Missing parentheses in call to 'print'. Did you mean print(...)?

There was a problem installing on Windows 10

E:\workfile\DrugBank>pip install sematch==1.0.4
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple/
Collecting sematch==1.0.4
Using cached https://pypi.tuna.tsinghua.edu.cn/packages/f4/1a/09377bdde1fcf4ede770c631e50199511a07921cf11dc66d3a83f2514277/sematch-1.0.4.tar.gz (9.0 MB)
ERROR: Command errored out with exit status 1:
command: 'c:\users\administrator\appdata\local\programs\python\python36\python.exe' -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\Users\ADMINI1\AppData\Local\Te
mp\pip-install-rea4rmch\sematch\setup.py'"'"'; file='"'"'C:\Users\ADMINI
1\AppData\Local\Temp\pip-install-rea4rmch\sematch\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"
', open)(file);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' egg_info --egg-base 'C:\Users\ADMINI1\AppData\Local\Temp
\pip-install-rea4rmch\sematch\pip-egg-info'
cwd: C:\Users\ADMINI
1\AppData\Local\Temp\pip-install-rea4rmch\sematch
Complete output (5 lines):
Traceback (most recent call last):
File "", line 1, in
File "C:\Users\ADMINI~1\AppData\Local\Temp\pip-install-rea4rmch\sematch\setup.py", line 7, in
long_description = open('README.md').read(),
UnicodeDecodeError: 'gbk' codec can't decode byte 0x97 in position 4162: illegal multibyte sequence
----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.

I know this is because Python 3 and Python 2 behave differently, which causes the `open('README.md').read()` call in setup.py to fail.

OWL error in sparql.py

I have upgraded rdflib, but it still gives an import error for OWL on the line:

from rdflib import rdf,rdfs,OWL

Setup fails even if requirements already exists

When trying to install using pip, setup fails during the scipy installation. I already have a scipy installation, but the installer doesn't detect it because it is pinned to a specific version.

I was able to fix this by downloading the whole setup and changing requirements.txt and the install_requires list to >= for each of the requirements.

AttributeError: module 'collections' has no attribute 'Hashable'

After attempting to run just your basic examples, I was presented with the following AttributeError:
AttributeError: module 'collections' has no attribute 'Hashable'

I am using Python 3.11.0. After taking a look at the source code, it appears the collections module has restructured its implementation, and the Hashable attribute is now located at "collections.abc.Hashable".

The error can be fixed by replacing the line

if not isinstance(args, collections.Hashable):

with

if not isinstance(args, collections.abc.Hashable):
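A version-tolerant spelling of the same check, as a sketch: `Hashable` moved to `collections.abc` in Python 3, while Python 2 exposed it on `collections` directly. `is_cacheable` is a hypothetical wrapper illustrating the memoizer's test:

```python
try:
    from collections.abc import Hashable
except ImportError:  # Python 2 fallback
    from collections import Hashable

def is_cacheable(args):
    # mirrors the memoizer's check: only hashable args can serve as cache keys
    return isinstance(args, Hashable)

print(is_cacheable(('dog', 'cat')))  # True
print(is_cacheable(['dog', 'cat']))  # False
```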

How do I use sematch?

Q1: I need to calculate the similarity of other concepts. How do I use it? How do I generate this file (dbpedia_type_ic.txt)?
#%% Computing semantic similarity of DBpedia concepts.
from sematch.semantic.graph import DBpediaDataTransform, Taxonomy
from sematch.semantic.similarity import ConceptSimilarity
concept = ConceptSimilarity(Taxonomy(DBpediaDataTransform()), 'models/dbpedia_type_ic.txt')
concept.name2concept('actor')
print(concept.similarity('http://dbpedia.org/ontology/Actor', 'http://dbpedia.org/ontology/Film', 'wpath'))

Q2: Run the following code, there is this error, how to solve?
ConnectionResetError: [Errno 104] Connection reset by peer
#%% Computing semantic similarity of DBpedia entities.
from sematch.semantic.similarity import EntitySimilarity
sim = EntitySimilarity()
sim.similarity('http://dbpedia.org/resource/Madrid', 'http://dbpedia.org/resource/Barcelona') # 0.409923677282
sim.similarity('http://dbpedia.org/resource/Apple_Inc.', 'http://dbpedia.org/resource/Steve_Jobs') # 0.0904545454545
sim.relatedness('http://dbpedia.org/resource/Madrid', 'http://dbpedia.org/resource/Barcelona') # 0.457984139871
sim.relatedness('http://dbpedia.org/resource/Apple_Inc.', 'http://dbpedia.org/resource/Steve_Jobs') # 0.465991132787

Getting error on FeatureExtractor

Hi, I am getting the error below in FeatureExtractor. Can you please help me fix the issue?

for key, group in itertools.groupby(chunks, lambda (word, pos, chunk): chunk != 'O') if key]
^
SyntaxError: invalid syntax

'str' has no attribute 'decode'

I'm currently on Python 3, which I believe isn't compatible with sematch right now (it's a bit unclear to me). I'm trying to run this code from the tutorial in Python 3:

from sematch.semantic.similarity import WordNetSimilarity
wns = WordNetSimilarity()
wns.crossl_word_similarity('perro', 'cat', 'spa', 'eng', 'res')

I first got an error about a print statement, which I quickly fixed on line 36 in /Users/lesleycordero/anaconda3/lib/python3.6/site-packages/sematch/semantic/sparql.py

But now I'm stuck on this error:

AttributeError                            Traceback (most recent call last)
<ipython-input-35-1a68728fda46> in <module>()
----> 1 wns.crossl_word_similarity('perro', 'cat', 'spa', 'eng', 'res')
      2 

~/anaconda3/lib/python3.6/site-packages/sematch/utility.py in __call__(self, *args)
     77          return self.cache[args]
     78       else:
---> 79          value = self.func(*args)
     80          self.cache[args] = value
     81          return value

~/anaconda3/lib/python3.6/site-packages/sematch/semantic/similarity.py in crossl_word_similarity(self, w1, w2, lang1, lang2, name)
    357         :return: semantic similarity score
    358         """
--> 359         s1 = self.multilingual2synset(w1, lang1)
    360         s2 = self.multilingual2synset(w2, lang2)
    361         sim_metric = lambda x, y: self.similarity(x, y, name)

~/anaconda3/lib/python3.6/site-packages/sematch/semantic/similarity.py in multilingual2synset(self, word, lang)
    283         :return: wordnet synsets.
    284         """
--> 285         return wn.synsets(word.decode('utf-8'), lang=lang, pos=wn.NOUN)
    286 
    287 

AttributeError: 'str' object has no attribute 'decode'

I tried the following, but got the same error:

  1. taking out the .decode('utf-8') on line 285.
  2. casting word on line 285 with unicode().
  3. changing word on line 285 to word.encode().decode().
  4. changing w1 and w2 on lines 359 and 360 to w1.encode() and w2.encode().

(I tried this as per the instructions here.)

Unfortunately, still no luck.

ModuleNotFoundError: No module named 'sematch.semantic'; 'sematch' is not a package

Sematch 1.0.4 doesn't work for me (Python 3.6 on Windows x64, and Python 3.8.3 on Arch Linux).

Minimal example:

from sematch.semantic.similarity import WordNetSimilarity

wns = WordNetSimilarity()

# Computing English word similarity using Li method
wns.word_similarity('dog', 'cat', 'li')  # 0.449327301063
# Computing Spanish word similarity using Lin method
wns.monol_word_similarity('perro', 'gato', 'spa', 'lin')  # 0.876800984373
# Computing Chinese word similarity using  Wu & Palmer method
wns.monol_word_similarity('狗', '猫', 'cmn', 'wup')  # 0.857142857143
# Computing Spanish and English word similarity using Resnik method
wns.crossl_word_similarity('perro', 'cat', 'spa', 'eng', 'res')  # 7.91166650904
# Computing Spanish and Chinese word similarity using Jiang & Conrad method
wns.crossl_word_similarity('perro', '猫', 'spa', 'cmn', 'jcn')  # 0.31023804699
# Computing Chinese and English word similarity using WPath method
wns.crossl_word_similarity('狗', 'cat', 'cmn', 'eng', 'wpath')  # 0.593666388463

Gives the error:

Traceback (most recent call last):
  File "D:/Github/foo/source/backend_prototyp/models/sematch_minimal_example.py", line 1, in <module>
    from sematch.semantic.similarity import WordNetSimilarity
  File "D:\Github\foo\source\backend_prototyp\models\sematch.py", line 2, in <module>
    from sematch.semantic.similarity import WordNetSimilarity
ModuleNotFoundError: No module named 'sematch.semantic'; 'sematch' is not a package

Process finished with exit code 1

Path between entities

Hello,
Is it possible to get the hierarchical path distance between entities?

For example,
Madrid -> Cities in Spain -> Barcelona (distance = 1)
Madrid -> Cities in Spain -> Cities in Europe -> Rome (distance = 2)

Thanks

I need some help, thanks

When I computed the word similarity, I used the test code from the introduction, as follows:

from sematch.semantic.similarity import WordNetSimilarity
wns = WordNetSimilarity()

# Computing English word similarity using Li method
wns.word_similarity('dog', 'cat', 'li') # 0.449327301063
# Computing Spanish word similarity using Lin method
wns.monol_word_similarity('perro', 'gato', 'spa', 'lin') # 0.876800984373
# Computing Chinese word similarity using Wu & Palmer method
wns.monol_word_similarity('狗', '猫', 'cmn', 'wup') # 0.857142857143
# Computing Spanish and English word similarity using Resnik method
wns.crossl_word_similarity('perro', 'cat', 'spa', 'eng', 'res') # 7.91166650904
# Computing Spanish and Chinese word similarity using Jiang & Conrath method
wns.crossl_word_similarity('perro', '猫', 'spa', 'cmn', 'jcn') # 0.31023804699
# Computing Chinese and English word similarity using WPath method
wns.crossl_word_similarity('狗', 'cat', 'cmn', 'eng', 'wpath') # 0.593666388463

but I can't get the result. The output is as follows:

E:\Anaconda3\envs\Python27\python.exe E:/Python_Workspace/Python_Sematch_Workspace/Sematch_Test.py
[nltk_data] Downloading package wordnet_ic to E:\nltk_data...
[nltk_data] Package wordnet_ic is already up-to-date!
0.449327301063
Traceback (most recent call last):
File "E:/Python_Workspace/Python_Sematch_Workspace/Sematch_Test.py", line 10, in
wns.monol_word_similarity('perro', 'gato', 'spa', 'lin') #0.876800984373
File "E:\Anaconda3\envs\Python27\lib\site-packages\sematch-1.0.4-py2.7.egg\sematch\utility.py", line 79, in call
value = self.func(*args)
File "E:\Anaconda3\envs\Python27\lib\site-packages\sematch-1.0.4-py2.7.egg\sematch\semantic\similarity.py", line 363, in monol_word_similarity
s1 = self.multilingual2synset(w1, lang)
File "E:\Anaconda3\envs\Python27\lib\site-packages\sematch-1.0.4-py2.7.egg\sematch\semantic\similarity.py", line 293, in multilingual2synset
return wn.synsets(word.decode('utf-8'), lang=lang, pos=wn.NOUN)
File "E:\Anaconda3\envs\Python27\lib\site-packages\nltk-3.2.5-py2.7.egg\nltk\corpus\reader\wordnet.py", line 1502, in synsets
self._load_lang_data(lang)
File "E:\Anaconda3\envs\Python27\lib\site-packages\nltk-3.2.5-py2.7.egg\nltk\corpus\reader\wordnet.py", line 1136, in _load_lang_data
if lang not in self.langs():
File "E:\Anaconda3\envs\Python27\lib\site-packages\nltk-3.2.5-py2.7.egg\nltk\corpus\reader\wordnet.py", line 1147, in langs
fileids = self._omw_reader.fileids()
File "E:\Anaconda3\envs\Python27\lib\site-packages\nltk-3.2.5-py2.7.egg\nltk\corpus\util.py", line 116, in getattr
self.__load()
File "E:\Anaconda3\envs\Python27\lib\site-packages\nltk-3.2.5-py2.7.egg\nltk\corpus\util.py", line 81, in __load
except LookupError: raise e
LookupError:


Resource omw not found.
Please use the NLTK Downloader to obtain the resource:

import nltk
nltk.download('omw')

Searched in:
- 'C:\Users\Administrator/nltk_data'
- 'C:\nltk_data'
- 'D:\nltk_data'
- 'E:\nltk_data'
- 'E:\Anaconda3\envs\Python27\nltk_data'
- 'E:\Anaconda3\envs\Python27\lib\nltk_data'
- 'C:\Users\Administrator\AppData\Roaming\nltk_data'


Could you tell me how to resolve this problem? I have already downloaded the dataset, thanks.

Missing parentheses in call to 'print'. Did you mean print(print query)?

Hi,
I have installed and configured the library as instructed in the documentation, but I get this error:

from sematch.semantic.similarity import WordNetSimilarity
Traceback (most recent call last):
File "", line 1, in
File "C:\Users\Shahbaz\AppData\Local\Continuum\anaconda3\lib\site-packages\sematch\semantic\similarity.py", line 25, in
from sematch.semantic.sparql import EntityFeatures, StatSPARQL
File "C:\Users\Shahbaz\AppData\Local\Continuum\anaconda3\lib\site-packages\sematch\semantic\sparql.py", line 36
print query
^
SyntaxError: Missing parentheses in call to 'print'. Did you mean print(print query)?

ERROR: object of type 'map' has no len()

Hi, I'm a student who follows your research.
I tried to execute an example posted on the GitHub page.

from sematch.semantic.graph import DBpediaDataTransform, Taxonomy
from sematch.semantic.similarity import ConceptSimilarity
import nltk
nltk.download('wordnet')
concept = ConceptSimilarity(Taxonomy(DBpediaDataTransform()), 'models/dbpedia_type_ic.txt')
res1 = concept.similarity('http://dbpedia.org/ontology/Actor', 'http://dbpedia.org/ontology/Film', 'path')
res2 = concept.similarity('http://dbpedia.org/ontology/Actor', 'http://dbpedia.org/ontology/Film', 'wup')
print(res1, res2)

And I have the following problem:

File "/anaconda3/lib/python3.7/site-packages/sematch/semantic/graph.py", line 68, in __init__
    self._root = len(self._nodes) + 1
TypeError: object of type 'map' has no len()

How can I fix this?

socket.error: [Errno 104] Connection reset by peer when Computing semantic similarity of DBpedia entities

Hello, I am an undergraduate student following your research.

When I compute the semantic similarity of DBpedia entities using the example code as follows:

from sematch.semantic.similarity import EntitySimilarity
sim = EntitySimilarity()
sim.similarity('http://dbpedia.org/resource/Madrid','http://dbpedia.org/resource/Barcelona') 

I get the following error:

socket.error: [Errno 104] Connection reset by peer

And the complete error information is here:

Traceback (most recent call last):
File "", line 1, in
File "/usr/lib/python2.7/site-packages/sematch/semantic/similarity.py", line 532, in similarity
concepts_1 = self._features.type(entity1)
File "/usr/lib/python2.7/site-packages/sematch/semantic/sparql.py", line 181, in type
return self.resource_query(*self.sp_triple(entity, RDF.type, 'o'))
File "/usr/lib/python2.7/site-packages/sematch/semantic/sparql.py", line 55, in resource_query
return self.execution_template(variable, self.q_mark(variable), triples, self._tpl, show_query)
File "/usr/lib/python2.7/site-packages/sematch/semantic/sparql.py", line 47, in execution_template
return [r[variable]["value"] for r in self.execution(template % (query, triples), show_query)]
File "/usr/lib/python2.7/site-packages/sematch/semantic/sparql.py", line 38, in execution
results = self._sparql.query().convert()
File "/usr/lib/python2.7/site-packages/SPARQLWrapper/Wrapper.py", line 601, in query
return QueryResult(self._query())
File "/usr/lib/python2.7/site-packages/SPARQLWrapper/Wrapper.py", line 571, in _query
response = urlopener(request)
File "/usr/lib/python2.7/urllib2.py", line 154, in urlopen
return opener.open(url, data, timeout)
File "/usr/lib/python2.7/urllib2.py", line 429, in open
response = self._open(req, data)
File "/usr/lib/python2.7/urllib2.py", line 447, in _open
'_open', req)
File "/usr/lib/python2.7/urllib2.py", line 407, in _call_chain
result = func(*args)
File "/usr/lib/python2.7/urllib2.py", line 1228, in http_open
return self.do_open(httplib.HTTPConnection, req)
File "/usr/lib/python2.7/urllib2.py", line 1201, in do_open
r = h.getresponse(buffering=True)
File "/usr/lib/python2.7/httplib.py", line 1121, in getresponse
response.begin()
File "/usr/lib/python2.7/httplib.py", line 438, in begin
version, status, reason = self._read_status()
File "/usr/lib/python2.7/httplib.py", line 394, in _read_status
line = self.fp.readline(_MAXLINE + 1)
File "/usr/lib/python2.7/socket.py", line 480, in readline
data = self._sock.recv(self._rbufsize)

But when I use "http://dbpedia.org/resource/Madrid" in the SPARQL Explorer it is OK; could you please tell me why this happens?

Problem with sparql.py - Installing Using Pip

I've installed sematch both on my local system and on Google Colab. There is a problem with the sparql.py file located in https://github.com/gsi-upm/sematch/sematch/semantic/ when I install it using pip.

I've checked this file in this repository and it's all fine:

but in the installed files in my local system and Google Colab, I found this:

print query

This will lead to a SyntaxError for missing parentheses.

Extension to other POS Taxonomies Beyond Nouns

First off, thank you for building sematch! This package has been incredibly valuable for me.

Suggestion / question: is there any reason why WordNetSimilarity is restricted to only nouns at the moment? I noticed that synsets seem to be restricted to nouns only, but WordNet includes verb taxonomies as well.

Relevant code:

nltk.corpus.wordnet.synsets doesn't require the POS argument to be passed in (when pos is omitted, synsets for all parts of speech are returned), so I think a potentially nice extension would be to remove the restriction to measuring similarity between nouns only.

Error when running sematch

Hello,

My name is Raul and I am following the first step of the tutorial in order to use sematch, but I keep getting the following error:

from sematch.semantic.similarity import WordNetSimilarity

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.5/dist-packages/sematch/semantic/similarity.py", line 25, in <module>
    from sematch.semantic.sparql import EntityFeatures, StatSPARQL
  File "/usr/local/lib/python3.5/dist-packages/sematch/semantic/sparql.py", line 36
    print query
              ^
SyntaxError: Missing parentheses in call to 'print'

Can you help me to fix this problem?

AttributeError: 'str' object has no attribute 'decode'

This method raises an exception in Python 3:


class WordNetSimilarity:
    def multilingual2synset(self, word, lang='spa'):
        """
        Map words in different language to wordnet synsets
        ['als', 'arb', 'cat', 'cmn', 'dan', 'eng', 'eus', 'fas', 'fin', 'fra', 'fre',
         'glg', 'heb', 'ind', 'ita', 'jpn', 'nno', 'nob', 'pol', 'por', 'spa', 'tha', 'zsm']
        :param word: a word in different language that has been defined in
        Open Multilingual WordNet, using ISO-639 language codes.
        :param lang: the language code defined
        :return: wordnet synsets.
        """
        return wn.synsets(word.decode('utf-8'), lang=lang, pos=wn.NOUN)
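A portable guard would decode only when the input is actually bytes, since Python 3's str has no .decode(). This is just a sketch; ensure_text is a hypothetical helper, not part of sematch:

```python
def ensure_text(word, encoding='utf-8'):
    """Return a text string on both Python 2 and Python 3.

    bytes (and Python 2 str) get decoded; text passes through unchanged.
    """
    return word.decode(encoding) if isinstance(word, bytes) else word

# The method's last line would then become (sketch):
#     return wn.synsets(ensure_text(word), lang=lang, pos=wn.NOUN)
```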

Similarity for neural

For the word below, the similarity is 0.

wns.word_similarity('neural','neural', 'li')

Can anyone explain why this is happening? Thanks

Please help me install sematch on a Windows 10 computer

I tried to install it with pip install sematch,
but I got a very long error message:

Microsoft Windows [Version 10.0.14393]
(c) 2016 Microsoft Corporation. All rights reserved.

C:\Users\cde3>pip install sematch
Collecting sematch
Collecting SPARQLWrapper==1.5.2 (from sematch)
Requirement already satisfied: scikit-learn==0.17.1 in c:\users\cde3\anaconda3\lib\site-packages (from sematch)
Requirement already satisfied: networkx==1.11 in c:\users\cde3\anaconda3\lib\site-packages (from sematch)
Collecting scipy==0.13.2 (from sematch)
Using cached scipy-0.13.2.zip
Requirement already satisfied: numpy==1.11.0 in c:\users\cde3\anaconda3\lib\site-packages (from sematch)
Collecting nltk==3.2 (from sematch)
Collecting rdflib==4.0.1 (from sematch)
Requirement already satisfied: decorator>=3.4.0 in c:\users\cde3\anaconda3\lib\site-packages (from networkx==1.11->sematch)
Requirement already satisfied: pyparsing in c:\users\cde3\anaconda3\lib\site-packages (from rdflib==4.0.1->sematch)
Collecting isodate (from rdflib==4.0.1->sematch)
Building wheels for collected packages: scipy
Running setup.py bdist_wheel for scipy ... error
Complete output from command c:\users\cde3\anaconda3\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\cde3\\AppData\\Local\\Temp\\pip-build-qrofeoyb\\scipy\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" bdist_wheel -d C:\Users\cde3\AppData\Local\Temp\tmpa_l6g0k4pip-wheel- --python-tag cp35:
blas_opt_info:
blas_mkl_info:
  libraries mkl,vml,guide not found in ['c:\\users\\cde3\\anaconda3\\lib', 'C:\\', 'c:\\users\\cde3\\anaconda3\\libs']
  NOT AVAILABLE

openblas_info:
  libraries openblas not found in ['c:\\users\\cde3\\anaconda3\\lib', 'C:\\', 'c:\\users\\cde3\\anaconda3\\libs']
  NOT AVAILABLE

atlas_3_10_blas_threads_info:
Setting PTATLAS=ATLAS
c:\users\cde3\anaconda3\lib\site-packages\numpy\distutils\system_info.py:633: UserWarning: Specified path C:\projects\windows-wheel-builder\atlas-builds\atlas-3.11.38-sse2-64\lib is invalid.
  warnings.warn('Specified path %s is invalid.' % d)
  libraries numpy-atlas not found in []
  NOT AVAILABLE

atlas_3_10_blas_info:
  libraries numpy-atlas not found in []
  NOT AVAILABLE

atlas_blas_threads_info:
Setting PTATLAS=ATLAS
  libraries numpy-atlas not found in []
  NOT AVAILABLE

atlas_blas_info:
  libraries numpy-atlas not found in []
  NOT AVAILABLE

c:\users\cde3\anaconda3\lib\site-packages\numpy\distutils\system_info.py:1640: UserWarning:
    Atlas (http://math-atlas.sourceforge.net/) libraries not found.
    Directories to search for the libraries can be specified in the
    numpy/distutils/site.cfg file (section [atlas]) or by setting
    the ATLAS environment variable.
  warnings.warn(AtlasNotFoundError.__doc__)
blas_info:
  libraries blas not found in ['c:\\users\\cde3\\anaconda3\\lib', 'C:\\', 'c:\\users\\cde3\\anaconda3\\libs']
  NOT AVAILABLE

c:\users\cde3\anaconda3\lib\site-packages\numpy\distutils\system_info.py:1649: UserWarning:
    Blas (http://www.netlib.org/blas/) libraries not found.
    Directories to search for the libraries can be specified in the
    numpy/distutils/site.cfg file (section [blas]) or by setting
    the BLAS environment variable.
  warnings.warn(BlasNotFoundError.__doc__)
blas_src_info:
  NOT AVAILABLE

c:\users\cde3\anaconda3\lib\site-packages\numpy\distutils\system_info.py:1652: UserWarning:
    Blas (http://www.netlib.org/blas/) sources not found.
    Directories to search for the sources can be specified in the
    numpy/distutils/site.cfg file (section [blas_src]) or by setting
    the BLAS_SRC environment variable.
  warnings.warn(BlasSrcNotFoundError.__doc__)
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Users\cde3\AppData\Local\Temp\pip-build-qrofeoyb\scipy\setup.py", line 230, in <module>
    setup_package()
  File "C:\Users\cde3\AppData\Local\Temp\pip-build-qrofeoyb\scipy\setup.py", line 227, in setup_package
    setup(**metadata)
  File "c:\users\cde3\anaconda3\lib\site-packages\numpy\distutils\core.py", line 135, in setup
    config = configuration()
  File "C:\Users\cde3\AppData\Local\Temp\pip-build-qrofeoyb\scipy\setup.py", line 170, in configuration
    config.add_subpackage('scipy')
  File "c:\users\cde3\anaconda3\lib\site-packages\numpy\distutils\misc_util.py", line 1003, in add_subpackage
    caller_level = 2)
  File "c:\users\cde3\anaconda3\lib\site-packages\numpy\distutils\misc_util.py", line 972, in get_subpackage
    caller_level = caller_level + 1)
  File "c:\users\cde3\anaconda3\lib\site-packages\numpy\distutils\misc_util.py", line 909, in _get_configuration_from_setup_py
    config = setup_module.configuration(*args)
  File "scipy\setup.py", line 12, in configuration
    config.add_subpackage('integrate')
  File "c:\users\cde3\anaconda3\lib\site-packages\numpy\distutils\misc_util.py", line 1003, in add_subpackage
    caller_level = 2)
  File "c:\users\cde3\anaconda3\lib\site-packages\numpy\distutils\misc_util.py", line 972, in get_subpackage
    caller_level = caller_level + 1)
  File "c:\users\cde3\anaconda3\lib\site-packages\numpy\distutils\misc_util.py", line 909, in _get_configuration_from_setup_py
    config = setup_module.configuration(*args)
  File "scipy\integrate\setup.py", line 12, in configuration
    blas_opt = get_info('blas_opt',notfound_action=2)
  File "c:\users\cde3\anaconda3\lib\site-packages\numpy\distutils\system_info.py", line 372, in get_info
    return cl().get_info(notfound_action)
  File "c:\users\cde3\anaconda3\lib\site-packages\numpy\distutils\system_info.py", line 566, in get_info
    raise self.notfounderror(self.notfounderror.__doc__)
numpy.distutils.system_info.BlasNotFoundError:
    Blas (http://www.netlib.org/blas/) libraries not found.
    Directories to search for the libraries can be specified in the
    numpy/distutils/site.cfg file (section [blas]) or by setting
    the BLAS environment variable.


Failed building wheel for scipy
Running setup.py clean for scipy
Failed to build scipy
Installing collected packages: isodate, rdflib, SPARQLWrapper, scipy, nltk, sematch
Found existing installation: scipy 0.18.1
Uninstalling scipy-0.18.1:
Successfully uninstalled scipy-0.18.1
Running setup.py install for scipy ... error
Complete output from command c:\users\cde3\anaconda3\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\cde3\\AppData\\Local\\Temp\\pip-build-qrofeoyb\\scipy\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\cde3\AppData\Local\Temp\pip-ql2jonbq-record\install-record.txt --single-version-externally-managed --compile:
blas_opt_info:
blas_mkl_info:
  libraries mkl,vml,guide not found in ['c:\\users\\cde3\\anaconda3\\lib', 'C:\\', 'c:\\users\\cde3\\anaconda3\\libs']
  NOT AVAILABLE

openblas_info:
  libraries openblas not found in ['c:\\users\\cde3\\anaconda3\\lib', 'C:\\', 'c:\\users\\cde3\\anaconda3\\libs']
  NOT AVAILABLE

atlas_3_10_blas_threads_info:
Setting PTATLAS=ATLAS
c:\users\cde3\anaconda3\lib\site-packages\numpy\distutils\system_info.py:633: UserWarning: Specified path C:\projects\windows-wheel-builder\atlas-builds\atlas-3.11.38-sse2-64\lib is invalid.
  warnings.warn('Specified path %s is invalid.' % d)
  libraries numpy-atlas not found in []
  NOT AVAILABLE

atlas_3_10_blas_info:
  libraries numpy-atlas not found in []
  NOT AVAILABLE

atlas_blas_threads_info:
Setting PTATLAS=ATLAS
  libraries numpy-atlas not found in []
  NOT AVAILABLE

atlas_blas_info:
  libraries numpy-atlas not found in []
  NOT AVAILABLE

c:\users\cde3\anaconda3\lib\site-packages\numpy\distutils\system_info.py:1640: UserWarning:
    Atlas (http://math-atlas.sourceforge.net/) libraries not found.
    Directories to search for the libraries can be specified in the
    numpy/distutils/site.cfg file (section [atlas]) or by setting
    the ATLAS environment variable.
  warnings.warn(AtlasNotFoundError.__doc__)
blas_info:
  libraries blas not found in ['c:\\users\\cde3\\anaconda3\\lib', 'C:\\', 'c:\\users\\cde3\\anaconda3\\libs']
  NOT AVAILABLE

c:\users\cde3\anaconda3\lib\site-packages\numpy\distutils\system_info.py:1649: UserWarning:
    Blas (http://www.netlib.org/blas/) libraries not found.
    Directories to search for the libraries can be specified in the
    numpy/distutils/site.cfg file (section [blas]) or by setting
    the BLAS environment variable.
  warnings.warn(BlasNotFoundError.__doc__)
blas_src_info:
  NOT AVAILABLE

c:\users\cde3\anaconda3\lib\site-packages\numpy\distutils\system_info.py:1652: UserWarning:
    Blas (http://www.netlib.org/blas/) sources not found.
    Directories to search for the sources can be specified in the
    numpy/distutils/site.cfg file (section [blas_src]) or by setting
    the BLAS_SRC environment variable.
  warnings.warn(BlasSrcNotFoundError.__doc__)
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Users\cde3\AppData\Local\Temp\pip-build-qrofeoyb\scipy\setup.py", line 230, in <module>
    setup_package()
  File "C:\Users\cde3\AppData\Local\Temp\pip-build-qrofeoyb\scipy\setup.py", line 227, in setup_package
    setup(**metadata)
  File "c:\users\cde3\anaconda3\lib\site-packages\numpy\distutils\core.py", line 135, in setup
    config = configuration()
  File "C:\Users\cde3\AppData\Local\Temp\pip-build-qrofeoyb\scipy\setup.py", line 170, in configuration
    config.add_subpackage('scipy')
  File "c:\users\cde3\anaconda3\lib\site-packages\numpy\distutils\misc_util.py", line 1003, in add_subpackage
    caller_level = 2)
  File "c:\users\cde3\anaconda3\lib\site-packages\numpy\distutils\misc_util.py", line 972, in get_subpackage
    caller_level = caller_level + 1)
  File "c:\users\cde3\anaconda3\lib\site-packages\numpy\distutils\misc_util.py", line 909, in _get_configuration_from_setup_py
    config = setup_module.configuration(*args)
  File "scipy\setup.py", line 12, in configuration
    config.add_subpackage('integrate')
  File "c:\users\cde3\anaconda3\lib\site-packages\numpy\distutils\misc_util.py", line 1003, in add_subpackage
    caller_level = 2)
  File "c:\users\cde3\anaconda3\lib\site-packages\numpy\distutils\misc_util.py", line 972, in get_subpackage
    caller_level = caller_level + 1)
  File "c:\users\cde3\anaconda3\lib\site-packages\numpy\distutils\misc_util.py", line 909, in _get_configuration_from_setup_py
    config = setup_module.configuration(*args)
  File "scipy\integrate\setup.py", line 12, in configuration
    blas_opt = get_info('blas_opt',notfound_action=2)
  File "c:\users\cde3\anaconda3\lib\site-packages\numpy\distutils\system_info.py", line 372, in get_info
    return cl().get_info(notfound_action)
  File "c:\users\cde3\anaconda3\lib\site-packages\numpy\distutils\system_info.py", line 566, in get_info
    raise self.notfounderror(self.notfounderror.__doc__)
numpy.distutils.system_info.BlasNotFoundError:
    Blas (http://www.netlib.org/blas/) libraries not found.
    Directories to search for the libraries can be specified in the
    numpy/distutils/site.cfg file (section [blas]) or by setting
    the BLAS environment variable.

----------------------------------------

Rolling back uninstall of scipy
Command "c:\users\cde3\anaconda3\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\cde3\\AppData\\Local\\Temp\\pip-build-qrofeoyb\\scipy\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\cde3\AppData\Local\Temp\pip-ql2jonbq-record\install-record.txt --single-version-externally-managed --compile" failed with error code 1 in C:\Users\cde3\AppData\Local\Temp\pip-build-qrofeoyb\scipy\

C:\Users\cde3>
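The log shows the root cause: sematch pins scipy==0.13.2, pip finds no Windows wheel for it and falls back to the source distribution (scipy-0.13.2.zip), which then fails to compile because no BLAS/ATLAS libraries are present. One possible workaround, sketched for an Anaconda prompt and not an official recommendation: let conda supply the compiled scientific packages, then install sematch without re-resolving its pins.

```shell
# Sketch only: package names are taken from the pip log above.
conda install -y numpy scipy scikit-learn networkx nltk
pip install SPARQLWrapper rdflib
pip install sematch --no-deps
```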

DBpedia entity relatedness doesn't produce the same results

Hi all, I am using Python (2.7), numpy (1.13.3), scipy (0.19.1), sematch (1.0.4).
I've been trying to reproduce the semantic similarity results for DBpedia entities from the readme. I got the same results using similarity, but lower ones using relatedness:

>>> sim.relatedness('http://dbpedia.org/resource/Madrid','http://dbpedia.org/resource/Barcelona')#0.457984139871
0.2668161233777911
>>> sim.relatedness('http://dbpedia.org/resource/Apple_Inc.','http://dbpedia.org/resource/Steve_Jobs')#0.465991132787
0.19299297377223823

I also tried tinkering with some other entities; the results were not very logical, and some of them were > 1.0 (is that even possible?), e.g.:

>>> sim.relatedness('http://dbpedia.org/resource/Secure_Shell', 'http://dbpedia.org/resource/Spain')
1.1417165889528
>>> sim.relatedness('http://dbpedia.org/resource/Freeware', 'http://dbpedia.org/resource/Philippines')
1.2145251551211556

New Release

Could you please make a new release?

I need this change: #11
because of this error: #9

I can't work with sematch until this change is published in a new release.
