
This project is forked from allenschmaltz/grammar.


grammar

Under development.

A trained model (trained on the NUCLE + Lang-8 data) is available here (in the folder w_n_w_l8_pyrep_s79_t100_rnn750_la2_v50000_brnn_opennmt_b48):

https://drive.google.com/drive/folders/1Dsdp4Pgtfm-_MW5tOnQ26OkJuljnyAL7?usp=sharing

This is for use with the code in https://github.com/allenschmaltz/grammar/tree/master/code/_dev/constrained, which implements constrained decoding (i.e., changes not conforming to the diff tag semantics are not allowed during beam search). An example of decoding the CoNLL dev set appears in example_run.sh.
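Concretely, constrained decoding here means that at each beam-search step, candidate extensions that would break the diff-tag structure (closing a span that was never opened, nesting spans, mismatching open/close tags) are pruned before the beam is expanded. A minimal sketch of such a check, assuming the `<del>`/`<ins>` tag convention from the paper (the helper names `is_allowed` and `prune_beam` are illustrative, not from the repository):

```python
# Sketch of a diff-tag well-formedness check for constrained beam search.
# Tokens that would violate the tag semantics are filtered out before
# the beam is extended with them.

OPEN_TAGS = {"<del>": "</del>", "<ins>": "</ins>"}
CLOSE_TAGS = set(OPEN_TAGS.values())

def is_allowed(prefix, token):
    """Return True if appending `token` to `prefix` keeps tags well-formed."""
    stack = []
    for t in prefix:
        if t in OPEN_TAGS:
            stack.append(OPEN_TAGS[t])
        elif t in CLOSE_TAGS:
            if not stack or stack[-1] != t:
                return False  # the prefix itself is already malformed
            stack.pop()
    if token in OPEN_TAGS:
        return not stack              # disallow nested spans
    if token in CLOSE_TAGS:
        return bool(stack) and stack[-1] == token
    return True                       # ordinary word token is always fine

def prune_beam(prefix, candidates):
    """Keep only candidate extensions that respect the tag semantics."""
    return [tok for tok in candidates if is_allowed(prefix, tok)]
```

In an actual decoder the same effect is typically achieved by masking the scores of disallowed tokens to negative infinity rather than filtering a Python list, but the admissibility condition is the same.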

For reference, the effectiveness on the CoNLL dev/test data is slightly higher than that of the original model used in the EMNLP paper. (This is due to the model and not due to changes in decoding.) Results for unconstrained and constrained decoding are included below (and are not significantly different):

Unconstrained decoding followed by a post-hoc fix of the tags:

  Dev:  Precision: 0.49; Recall: 0.15; F_0.5: 0.34
  Test: Precision: 0.54; Recall: 0.24; F_0.5: 0.43

Constrained decoding:

  Dev:  Precision: 0.49; Recall: 0.15; F_0.5: 0.34
  Test: Precision: 0.53; Recall: 0.24; F_0.5: 0.43
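F_0.5 is the F_beta score with beta = 0.5, which weights precision more heavily than recall (the standard choice for grammatical error correction). The scores above follow from the listed precision/recall values via the usual formula; a small sketch (the function name is illustrative):

```python
def f_beta(precision, recall, beta=0.5):
    """F_beta = (1 + b^2) * P * R / (b^2 * P + R); beta < 1 favors precision."""
    if precision == 0 and recall == 0:
        return 0.0
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)
```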

For historical purposes, the model used in the original paper (for use with the original code) is available at the above link in the folder legacy/word_nucle_w_lang8_v1_srclen79_trglen100_rnn750_la2_v50000_brnn_opennmt_batchsize48.

License (for trained models)

The models linked above are provided solely for research purposes and are provided as-is without warranty of any kind. They were trained with the NUCLE and Lang-8 data (see references in the paper cited below) and usage must conform to the original licenses of those data sources.

Citation/Reference

Allen Schmaltz, Yoon Kim, Alexander Rush, and Stuart Shieber. 2017. Adapting sequence models for sentence correction. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 2807-2813. https://www.aclweb.org/anthology/D17-1298.

