
neural-lp's People

Contributors

fanyangxyz, kunal017

neural-lp's Issues

Small edits to instructions mentioned

Hi

Thank you for this code.

I think these edits would help people running the code:

. eval/collect_all_facts.sh datasets/family
python eval/get_truths.py datasets/family
python eval/evaluate.py --preds=exps/family/test_predictions.txt --truths=datasets/family/truths.pckl

u_{T+1}: Final step in the recurrent formulation

@fanyangxyz: In the code starting at

for t in xrange(self.num_step):

I do not see the final step in the recurrent formulation, namely Eq 8 in your NIPS paper.

I am a little confused by this part. Could you please explain?

Further, I assume https://github.com/fanyangxyz/Neural-LP/blob/master/src/model.py#L149 is Eq 11 in the paper. However, the indices do not seem to match the equation: the equation has softmax([h_0 .. h_{t-1}]^T h_t), but the code seems to compute softmax([h_0 .. h_t]^T h_t).
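
To make the indexing question concrete, here is how I currently read Eq 11, as a small NumPy sketch (the names, shapes, and the function itself are my assumptions, not your code):

import numpy as np

# Eq 11 as I read it: attention over the past hidden states only.
# memories: (t, d) array stacking h_0 .. h_{t-1}; h_t: (d,) current state.
def attention_over_memories(memories, h_t):
    logits = memories.dot(h_t)               # [h_0 .. h_{t-1}]^T h_t
    weights = np.exp(logits - logits.max())  # numerically stable softmax
    return weights / weights.sum()

If h_t is appended to the memory before this product is taken, the softmax instead ranges over h_0 .. h_t, which is what the code looks like to me.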

It would be really great if you could explain this discrepancy. Thanks

Inductive Datasets?

Hi,
Nice work!
Could you tell me where the inductive dataset is located, or the exact parameters (e.g., the random sample of test entities) needed to build the inductive dataset described in the paper?

Thanks

Negative facts coverage

Hi @fanyangxyz ,

Can Neural-LP be adapted to learn rules in the presence of incorrect and negative facts in the database? Basically, the rules should cover as many of the positive facts as possible (facts.txt) and as few negatives as possible from a negatives.txt.
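
For context, a rough sketch of the kind of rule scoring I have in mind (the function and the derived/negatives inputs are hypothetical, not part of Neural-LP):

# Score one rule by how many positive facts its derived tuples cover
# versus how many known negatives it hits.
def rule_precision(derived, positives, negatives):
    derived = set(derived)
    tp = len(derived & set(positives))   # covered facts from facts.txt
    fp = len(derived & set(negatives))   # violations from negatives.txt
    return tp / float(tp + fp) if (tp + fp) else 0.0

Rules could then be kept or re-weighted by this precision.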

Running problem

I tried to run the code as the README suggested and got the following error:

Traceback (most recent call last):
File "src/main.py", line 6, in
from model import Learner
File "~/Neural-LP-master/src/model.py", line 207
lambda (grad, var): self._clip_if_not_None(grad, var, -5., 5.), gvs)
^
SyntaxError: invalid syntax
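
For anyone hitting this: the line uses Python 2 tuple parameter unpacking in a lambda, which Python 3 removed (PEP 3113), so the code appears to target Python 2. A sketch of a Python 3 compatible rewrite of that fragment (the surrounding map call over gvs is taken from the traceback; everything else is unchanged):

# Unpack the (grad, var) pair inside the body instead of in the
# parameter list, which Python 3 no longer allows:
lambda gv: self._clip_if_not_None(gv[0], gv[1], -5., 5.), gvs)

Note that the xrange calls elsewhere in the code would also need to become range under Python 3.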

Question about formulas (2) and (5)

Hi @fanyangxyz ,

I am confused about why formula (2) can be converted into formula (5). These two formulas do not seem to be equivalent, and there is no explanation in the paper. Would you mind telling me more about this?
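
To make my confusion concrete, here is my current reading (the notation is my guess at the paper's: a_t^k is the step-t attention over the operators M_{R_k}). Expanding the product in (5) gives

\prod_{t=1}^{T} \sum_{k} a_t^{k}\,\mathbf{M}_{R_k}
  \;=\; \sum_{k_1,\dots,k_T} \Big( \prod_{t=1}^{T} a_t^{k_t} \Big)\,
        \mathbf{M}_{R_{k_1}} \cdots \mathbf{M}_{R_{k_T}},

so each length-T relation path gets a confidence that is a product of per-step attentions. If that is right, (5) is a restricted reparameterization of the free rule weights in (2) rather than an exact equivalent, which is exactly what I would like confirmed.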

Question answering task

I was wondering how you performed the QA task explained in Section 4.4 of your paper.
I could process the dataset to match the input format of Neural LP, and I could produce rules for it in the exps/demo/ directory, but I do not know what to do next. Could you please elaborate on this task?

Grid path finding

Which dataset was used in Section 4.2, "Grid path finding"? I can't find the code for this task in your directory. Could you give some more details? Thank you!

Question about table 6

I want to ask about the fairness of this experiment.
The article describes how the test and training sets are handled, but Neural-LP reasons based on known facts and rules. The facts are split from the initial training data and contain tuples that share entities with the selected test tuples.
To ensure fairness, different models should use the same data. I would like to know what data you used to train TransE, because I think it is not the union of facts and train as it is for Neural-LP. What is your interpretation of the fairness of this experiment?
