khundman / marve
For extracting measurements and related entities from text
License: Apache License 2.0
I am getting a JSONDecodeError: Expecting value: line 1 column 1 (char 0). CoreNLP and Grobid are running fine.
The code I have written is:
import Measurements as m
import json
test = "The patient returned to Europe at 28 weeks of gestation."
coreNLP = "http://localhost:9000"
grobid = "http://localhost:8070/service"
patterns = "dependency_patterns.json"
write_to = "sample_output.txt"
out = m.extract(test, coreNLP, grobid, patterns, write_to, show_graph=False, pretty=True)
Please help me out!
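Not a fix, but that error usually means the server returned a non-JSON body (an HTML error page, a timeout message, or an empty response) that then failed to parse. A minimal sketch of a guard that surfaces the raw body instead of the bare decode error — the helper name is mine, not part of marve:

```python
import json

def parse_server_response(raw_text):
    """Parse a server response as JSON; on failure, show the raw body so the
    real problem (often an HTML error page) is visible."""
    try:
        return json.loads(raw_text)
    except json.JSONDecodeError:
        raise ValueError("Server did not return JSON. First 200 chars of body: %r"
                         % raw_text[:200])

# A well-formed CoreNLP-style body parses normally...
ok = parse_server_response('{"sentences": []}')
assert ok == {"sentences": []}

# ...while an error page produces a readable message instead of
# "Expecting value: line 1 column 1 (char 0)".
try:
    parse_server_response("<html>server error</html>")
except ValueError as e:
    print(e)
```

If the printed body turns out to be an error page from CoreNLP or Grobid, that narrows down which of the two services is actually misbehaving.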
Hello,
Reading your paper, I realized you used a dataset of 489 sentences. Did you also publish the preprocessing code and the dataset themselves, so that others can reproduce your results?
Hi all,
When I run sample.py, the short program just gets stuck (I run it in an interactive Python shell). I found that the Stanford CoreNLP server prints the following.
It seems like a memory issue, but it is just one sentence: "The patient returned to Europe at 28 weeks of gestation." Any idea how to solve this?
[pool-1-thread-3] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator coref
Exception in thread "pool-1-thread-3" java.lang.OutOfMemoryError: Java heap space
at java.io.ObjectInputStream$HandleTable.grow(ObjectInputStream.java:3493)
at java.io.ObjectInputStream$HandleTable.assign(ObjectInputStream.java:3300)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1799)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:373)
at java.util.HashMap.readObject(HashMap.java:1396)
at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1058)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1909)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1808)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2018)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1942)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1808)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:373)
at edu.stanford.nlp.io.IOUtils.readObjectFromURLOrClasspathOrFileSystem(IOUtils.java:310)
at edu.stanford.nlp.coref.statistical.FeatureExtractor.loadVocabulary(FeatureExtractor.java:90)
at edu.stanford.nlp.coref.statistical.FeatureExtractor.<init>(FeatureExtractor.java:75)
at edu.stanford.nlp.coref.statistical.StatisticalCorefAlgorithm.<init>(StatisticalCorefAlgorithm.java:63)
at edu.stanford.nlp.coref.statistical.StatisticalCorefAlgorithm.<init>(StatisticalCorefAlgorithm.java:44)
at edu.stanford.nlp.coref.CorefAlgorithm.fromProps(CorefAlgorithm.java:28)
at edu.stanford.nlp.coref.CorefSystem.<init>(CorefSystem.java:34)
at edu.stanford.nlp.pipeline.CorefAnnotator.<init>(CorefAnnotator.java:50)
at edu.stanford.nlp.pipeline.AnnotatorImplementations.coref(AnnotatorImplementations.java:243)
at edu.stanford.nlp.pipeline.AnnotatorFactories$13.create(AnnotatorFactories.java:402)
at edu.stanford.nlp.pipeline.AnnotatorPool.get(AnnotatorPool.java:152)
at edu.stanford.nlp.pipeline.StanfordCoreNLP.construct(StanfordCoreNLP.java:451)
at edu.stanford.nlp.pipeline.StanfordCoreNLP.<init>(StanfordCoreNLP.java:154)
at edu.stanford.nlp.pipeline.StanfordCoreNLP.<init>(StanfordCoreNLP.java:145)
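For what it's worth, the trace shows the statistical coreference models running out of heap while deserializing. Starting the server with a larger heap may help; this is the standard CoreNLP server invocation with the heap bumped to 6 GB (adjust the size and timeout to your machine):

```shell
# Start the CoreNLP server with a 6 GB heap; the default is often too
# small for the statistical coref models to load.
java -mx6g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer -port 9000 -timeout 60000
```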
Really interested in your work and would like to follow it. Is it possible to share the labeled evaluation data from the paper, so I could further test my methods? Thanks! I wrote you an e-mail a few days ago.
I’ve had some problems with Python and matplotlib on OSX, but I’ve managed to work around them.
I’ve noticed that CoreNLP doesn’t work on my MacBook with 4 GB of memory; I had to give it more (~6 GB) to get it running. After that I ran marve, but I got another error:
(inria-virtualenv-p2) Johan:marve lfoppiano$ frameworkpython marve/sample.py
Traceback (most recent call last):
File "marve/sample.py", line 28, in <module>
m.extract(test, coreNLP, grobid, patterns, write_to, show_graph=False, pretty=True, simplify=False)
File "/Users/lfoppiano/development/inria/inria-virtualenv-p2/lib/python2.7/site-packages/marve/Measurements.py", line 488, in extract
A = Annotations(output["sentences"][i]["tokens"], output["sentences"][i][dep_key])
KeyError: 'enhanced-plus-plus-dependencies'
I’m using CoreNLP version stanford-corenlp-full-2016-10-31, the same as in the installation documentation.
Cheers
Luca
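The KeyError above suggests the server's JSON used a different key for the enhanced++ dependency graph than the one Measurements.py looks up — CoreNLP releases have spelled this key both hyphenated and in camelCase, so it is worth checking what your server actually emits. A hedged sketch of a lookup that tolerates both spellings (the helper is mine, not part of marve):

```python
# Candidate spellings CoreNLP releases have used for the enhanced++ graph.
DEP_KEYS = ("enhanced-plus-plus-dependencies", "enhancedPlusPlusDependencies")

def find_dep_key(sentence):
    """Return whichever enhanced++ dependency key is present in a parsed
    CoreNLP sentence dict, or raise with the keys that were found."""
    for key in DEP_KEYS:
        if key in sentence:
            return key
    raise KeyError("No enhanced++ dependency key found; sentence keys were: %s"
                   % sorted(sentence))

# Example against a minimal camelCase-style sentence dict.
sentence = {"tokens": [], "enhancedPlusPlusDependencies": []}
print(find_dep_key(sentence))
```

Printing `sorted(output["sentences"][0])` from the failing call would confirm which variant your CoreNLP version returns.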