
BAG

Implementation of the NAACL 2019 paper

BAG: Bi-directional Attention Entity Graph Convolutional Network for Multi-hop Reasoning Question Answering

paper link

BAG Framework

Requirements

We provide the main entry point in both a TensorFlow version (BAG.py) and a PyTorch version (BAG-pytorch.py).

  1. Python 3.6
  2. TensorFlow == 1.11.0 (required for the TF version script; we are not sure whether it works with higher versions)
  3. PyTorch >= 1.1.0
  4. SpaCy >= 2.0.12 (you need to install the "en" model via "python -m spacy download en")
  5. allennlp >= 0.7.1
  6. nltk >= 3.3
  7. pytorch-ignite (required for the PyTorch version script)

Some other common packages are also required.

We ran it on two NVIDIA GTX1080Ti GPUs, each with 11GB of memory. At least 16GB of GPU memory is needed to run with the default batch size of 32, and at least 50GB of system memory is needed to run the preprocessing procedure on the whole dataset.

How to run

  • Before running

You need to download the pretrained 840B 300d GloVe embeddings, as well as the pretrained original-size ELMo embedding weights and options, and put them under the /data directory.
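A small stdlib check like the following can catch a missing download before a long run. The file names below are assumptions based on the standard GloVe 840B.300d and original-size ELMo releases, not taken from the repository; adjust them to whatever the scripts actually load.

```python
from pathlib import Path

# Assumed file names from the standard GloVe and ELMo releases.
REQUIRED = [
    "glove.840B.300d.txt",
    "elmo_2x4096_512_2048cnn_2xhighway_weights.hdf5",
    "elmo_2x4096_512_2048cnn_2xhighway_options.json",
]

def missing_embeddings(data_dir="data"):
    """Return the required embedding files that are not present in data_dir."""
    root = Path(data_dir)
    return [name for name in REQUIRED if not (root / name).is_file()]
```

Running `missing_embeddings()` from the repository root and printing the result shows which files still need to be placed under /data.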

  • Preprocessing dataset

You need to download the QAngaroo WIKIHOP dataset, unzip it, and put the json files under the root directory. Then run the preprocessing script:

python prepro.py {json_file_name}

It will generate four preprocessed pickle files in the root directory, which will be used in training and prediction.
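Since every split has to be preprocessed separately, a small driver like the following can run all of them in one go. This is only a sketch: the split file names are examples, and the only assumption about prepro.py is the command-line usage shown above.

```python
import subprocess
import sys

# Example split names; replace with your actual WIKIHOP json files.
SPLITS = ("train.json", "dev.json")

def prepro_cmd(json_file):
    """Build the prepro.py command line, matching the usage shown above."""
    return [sys.executable, "prepro.py", json_file]

def preprocess_all(splits=SPLITS):
    """Run prepro.py on every split, stopping on the first failure."""
    for name in splits:
        subprocess.run(prepro_cmd(name), check=True)
```

Calling `preprocess_all()` from the repository root would then produce the pickle files for both splits before training.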

  • Train the model

Train the model using the following command, which follows the configuration in the original paper:

python BAG.py {train_json_file_name} {dev_json_file_name} --use_multi_gpu=true

or in the PyTorch version (we do not provide multi-GPU support for PyTorch yet; the simplest way is to wrap the model with nn.DataParallel):

python BAG-pytorch.py {train_json_file_name} {dev_json_file_name}

Please make sure you have run preprocessing on both the train file and the dev file before training, and that both CUDA0 and CUDA1 are available. If you have a single GPU with more than 16GB of memory, you can remove the --use_multi_gpu parameter.
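For the PyTorch script, the nn.DataParallel workaround mentioned above would look roughly like this. This is a sketch with a stand-in module under a standard PyTorch setup, not the repository's actual model code:

```python
import torch
from torch import nn

def wrap_for_gpus(model):
    """Wrap a model with nn.DataParallel when several GPUs are visible."""
    if torch.cuda.device_count() > 1:
        model = nn.DataParallel(model)  # splits each batch across GPUs
    if torch.cuda.is_available():
        model = model.cuda()
    return model

# Stand-in module; in practice this would be the BAG model instance.
model = wrap_for_gpus(nn.Linear(300, 128))
```

On a machine without multiple GPUs the model is returned unchanged, so the same code path works for single-GPU and CPU runs.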

  • Predict

After training, the trained model will be saved in the /models directory. You can predict the answers for a json file using the following command:

python BAG.py {predict_json_file_name} {predict_json_file_name} --use_multi_gpu=true --evaluation_mode=true

  • Trained model

Anyone who needs the trained model from our submission can find it on CodaLab (only the TF version is available).

Acknowledgement

We would like to thank Nicola De Cao link for his assistance in implementing this project.

Reference

@inproceedings{cao2019bag,
  title={BAG: Bi-directional Attention Entity Graph Convolutional Network for Multi-hop Reasoning Question Answering},
  author={Cao, Yu and Fang, Meng and Tao, Dacheng},
  booktitle={Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)},
  pages={357--362},
  year={2019}
}

bag's People

Contributors

caoyu-noob


bag's Issues

BiAttention does not use the hidden nodes representation from GCN

In line 154 of BAG-pytorch.py, BiAttention takes hidden_nodes as an input:
attention_output = self.bi_attention(nodes_compress, query_compress, nodes_hidden)
however, in lines 110-122 the IDE shows that the parameter nodes_hidden is never used:

def forward(self, nodes_compress, query_compress, nodes_hidden):
    query_size = query_compress.shape[1]
    expanded_query = query_compress.unsqueeze(1).repeat((1, self.max_nodes, 1, 1))
    expanded_nodes = nodes_compress.unsqueeze(2).repeat((1, 1, query_size, 1))
    nodes_query_similarity = expanded_nodes * expanded_query
    concatenated_data = torch.cat((expanded_nodes, expanded_query, nodes_query_similarity), -1)
    similarity = self.attention_linear(concatenated_data).squeeze(-1)
    nodes2query = torch.matmul(F.softmax(similarity, dim=-1), query_compress)
    b = F.softmax(torch.max(similarity, dim=-1)[0], dim=-1)
    query2nodes = torch.matmul(b.unsqueeze(1), nodes_compress).repeat(1, self.max_nodes, 1)
    attention_output = torch.cat(
        (nodes_compress, nodes2query, nodes_compress * nodes2query, nodes_compress * query2nodes), dim=-1)
    return attention_output

and it seems that nodes_compress (the compressed features) is used as the node representation instead of the GCN output. I do not know if I missed something, but this would make the GCN layers redundant.
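The change the issue points at would presumably be to read the node side from nodes_hidden (the GCN output) rather than nodes_compress. A standalone sketch of that substitution, assuming nodes_hidden has shape (batch, max_nodes, dim) and the query has shape (batch, query_size, dim) after compression:

```python
import torch
import torch.nn.functional as F
from torch import nn

class BiAttentionOverHidden(nn.Module):
    """Same bi-attention shape logic as the snippet above, but attending
    over nodes_hidden instead of nodes_compress (hypothetical fix)."""

    def __init__(self, dim, max_nodes):
        super().__init__()
        self.max_nodes = max_nodes
        self.attention_linear = nn.Linear(3 * dim, 1)

    def forward(self, nodes_hidden, query_compress):
        query_size = query_compress.shape[1]
        # (B, N, Q, d) views of nodes and query for pairwise similarity
        expanded_query = query_compress.unsqueeze(1).repeat(1, self.max_nodes, 1, 1)
        expanded_nodes = nodes_hidden.unsqueeze(2).repeat(1, 1, query_size, 1)
        similarity = self.attention_linear(
            torch.cat((expanded_nodes, expanded_query,
                       expanded_nodes * expanded_query), -1)).squeeze(-1)
        # node-to-query attention: (B, N, d)
        nodes2query = torch.matmul(F.softmax(similarity, -1), query_compress)
        # query-to-node attention pooled over nodes: (B, N, d)
        b = F.softmax(torch.max(similarity, -1)[0], -1)
        query2nodes = torch.matmul(b.unsqueeze(1), nodes_hidden).repeat(1, self.max_nodes, 1)
        return torch.cat((nodes_hidden, nodes2query,
                          nodes_hidden * nodes2query,
                          nodes_hidden * query2nodes), -1)
```

Whether this matches the authors' intent is exactly what the issue asks; the sketch only shows that the shapes still work out (the output is (batch, max_nodes, 4 * dim)).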

Missing Edge Type

Hi!

Did you conduct any studies using the exact edges defined in Cao's "Question answering by reasoning across documents with graph convolutional networks"? It seems like you are missing one edge type (and also coreference-resolved nodes). Any reason why?

Thanks,
Ronak

train time

Excuse me, the paper says it took 14 hours to train the model, but I spent more than 70 hours. I would like to ask for your advice. Thank you.

Relations in GCNLayer

Hi, this is very impressive work! I am studying R-GCN. Could you please tell me in which part of the "GCNLayer" code you integrate edge information with the nodes?

I can't run the code of pytorch version

Hello, I ran into a memory problem when trying to run your BAG-pytorch.py.
When reading data, the code processes dev.json first, then train.json.
However, after processing dev.json, it already occupies 90GB of memory (not GPU memory), and my server only has 120GB. Therefore, when the code processes train.json, it runs out of memory and cannot train at all. I would like to ask for your advice. Thank you.

(The attached screenshot shows the process ending with "已杀死", which means "killed".)

Usage of nodes_mask

Hi, thanks for the PyTorch implementation; it really helped me read the framework more easily.
But I have a question about the meaning of nodes_mask in GCNLayer. Is it used for normalization or for something else? Could you please explain it in more detail?

And I noticed that there are 3 linear layers when args.use_edge is set to True. Could you please tell me how they are used to make the two kinds of edges effective?

Thank you very much!
