A Surprisingly Robust Trick for Winograd Schema Challenge and WikiCREM: A Large Unsupervised Corpus for Coreference Resolution

This repository contains the models and experiments for the papers A Surprisingly Robust Trick for Winograd Schema Challenge and WikiCREM: A Large Unsupervised Corpus for Coreference Resolution.
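
The trick shared by both papers is to treat pronoun resolution as masked-token prediction: the pronoun is replaced by [MASK] tokens, and BERT's masked language model scores each candidate antecedent by the average log-probability of its tokens. The following minimal sketch illustrates that scoring; it uses the Hugging Face transformers API rather than this repository's main.py, and the "_" pronoun placeholder in the example sentence is an assumption made for illustration.

# Illustrative sketch only; the repository's own implementation lives in
# main.py and differs in detail.
import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-large-uncased")
model = BertForMaskedLM.from_pretrained("bert-large-uncased")
model.eval()

def candidate_score(sentence, candidate):
    # Average log-probability of the candidate's tokens when the pronoun
    # slot ("_") is replaced by one [MASK] per candidate token.
    cand_ids = tokenizer.encode(candidate, add_special_tokens=False)
    masked = sentence.replace("_", " ".join([tokenizer.mask_token] * len(cand_ids)))
    inputs = tokenizer(masked, return_tensors="pt")
    with torch.no_grad():
        log_probs = torch.log_softmax(model(**inputs).logits[0], dim=-1)
    positions = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
    return sum(log_probs[p, t].item() for p, t in zip(positions, cand_ids)) / len(cand_ids)

# The candidate with the higher score is predicted as the antecedent.
sentence = "The trophy does not fit in the suitcase because _ is too big."
for candidate in ("the trophy", "the suitcase"):
    print(candidate, candidate_score(sentence, candidate))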

The MaskedWiki datasets and pre-trained models can be downloaded from this webpage. The link contains two datasets, MaskedWiki_Sample (~2.4M examples) and MaskedWiki_Full (~130M examples). All experiments were conducted with MaskedWiki_Sample only.

The WikiCREM datasets and BERT_WikiCREM model can be downloaded from this webpage.

The following libraries are needed to run the code: Python 3 (version 3.6 or later), numpy (1.14 was used), PyTorch (0.4.1 was used), tqdm, boto3, nltk (3.3 was used), requests, and spaCy (2.0.13 was used) together with its en_core_web_lg model.
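
A hedged sketch of installing these dependencies with pip, assuming a Python 3.6 environment (the exact pins above may not resolve on newer interpreters):

pip install numpy==1.14 torch==0.4.1 tqdm boto3 nltk==3.3 requests spacy==2.0.13
python -m spacy download en_core_web_lg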

If you evaluate any of the models on the GAP dataset, we encourage you to check out this project, which addresses imbalances in its bias metric.

To evaluate BERT on all datasets, use the following script:

python main.py \
      --task_name wscr \
      --do_eval \
      --eval_batch_size 10 \
      --data_dir "data/" \
      --bert_model bert-large-uncased \
      --max_seq_length 128 \
      --output_dir model_output/

To evaluate one of the downloaded pre-trained models, use the following code:

python main.py \
      --task_name wscr \
      --do_eval \
      --eval_batch_size 10 \
      --data_dir "data/" \
      --bert_model bert-large-uncased \
      --max_seq_length 128 \
      --output_dir model_output/ \
      --load_from_file models/BERT_Wiki_WscR 

To train the BERT_Wiki model, download MaskedWiki_Sample into the data folder and use the code below. To reproduce the exact results from the paper, use the library versions listed in the conda environment file wsc_env.yml (a sketch for recreating the environment follows the command). Please note that re-training the models with different versions of the libraries may yield different results; running a full hyper-parameter search is recommended in that case.

python main.py \
      --task_name maskedwiki \
      --do_eval \
      --do_train \
      --eval_batch_size 10 \
      --data_dir "data/" \
      --bert_model bert-large-uncased \
      --max_seq_length 128 \
      --train_batch_size 64 \
      --alpha_param 20 \
      --beta_param 0.2 \
      --learning_rate 5.0e-6 \
      --num_train_epochs 1.0 \
      --output_dir model_output/ 
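
To recreate that environment, conda can build it directly from the file; the name passed to conda activate must match the name: field inside wsc_env.yml, which is assumed here to be wsc_env:

conda env create -f wsc_env.yml
conda activate wsc_env   # assumes the yml's name: field is "wsc_env"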

To train the BERT_Wiki_WscR model, download the BERT_Wiki model into the models folder (or train it as above). Then use the following code:

python main.py \
      --task_name wscr \
      --do_eval \
      --do_train \
      --eval_batch_size 10 \
      --data_dir "data/" \
      --bert_model bert-large-uncased \
      --max_seq_length 128 \
      --train_batch_size 64 \
      --learning_rate 1.0e-5 \
      --alpha_param 5 \
      --beta_param 0.2 \
      --num_train_epochs 30.0 \
      --output_dir model_output/ \
      --load_from_file models/BERT_Wiki 
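
After either training run finishes, the fine-tuned weights are written under model_output/ (per --output_dir). Below is a hypothetical follow-up evaluation of such a checkpoint; the actual filename inside model_output/ depends on main.py's saving logic and is assumed here:

python main.py \
      --task_name wscr \
      --do_eval \
      --eval_batch_size 10 \
      --data_dir "data/" \
      --bert_model bert-large-uncased \
      --max_seq_length 128 \
      --output_dir eval_output/ \
      --load_from_file model_output/best_model   # hypothetical checkpoint name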

References

@inproceedings{kocijan19acl,
    title     = {A Surprisingly Robust Trick for Winograd Schema Challenge},
    author    = {Vid Kocijan and
                 Ana-Maria Cretu and
                 Oana-Maria Camburu and
                 Yordan Yordanov and
                 Thomas Lukasiewicz},
    booktitle = {The 57th Annual Meeting of the Association for Computational Linguistics (ACL)},
    address   = {Florence, Italy},
    month     = {July},
    year      = {2019}
}

@inproceedings{kocijan19emnlp,
    title     = {WikiCREM: A Large Unsupervised Corpus for Coreference Resolution},
    author    = {Vid Kocijan and
                 Ana-Maria Cretu and
                 Oana-Maria Camburu and
                 Yordan Yordanov and
                 Phil Blunsom and
                 Thomas Lukasiewicz},
    booktitle = {Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
    address   = {Hong Kong},
    month     = {November},
    year      = {2019}
}
