
TENER: Adapting Transformer Encoder for Named Entity Recognition

This is the code for the paper TENER.

TENER (Transformer Encoder for Named Entity Recognition) is a Transformer-based model for the NER task. Compared with the vanilla Transformer, we found that relative position embeddings are quite important for NER. Experiments on English and Chinese NER datasets demonstrate the effectiveness of the model.
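The key change over the vanilla Transformer is using relative, direction-aware sinusoidal position embeddings inside attention, rather than absolute ones. As a rough illustration only (a simplified sketch, not this repository's actual code), sinusoidal embeddings over relative distances can be built as follows:

```python
import math
import numpy as np

def relative_sinusoidal_embeddings(seq_len: int, d_model: int) -> np.ndarray:
    """Sinusoidal embeddings for relative distances -(seq_len-1) .. seq_len-1.

    Since sin is odd and cos is even, the embedding of distance -k differs
    from that of +k only in the sine dimensions, so attention can tell
    direction apart as well as distance.
    """
    distances = np.arange(-(seq_len - 1), seq_len, dtype=np.float64)[:, None]
    # one frequency 1 / 10000^(2i/d_model) per sin/cos pair
    freqs = np.exp(-math.log(10000.0) * np.arange(0, d_model, 2) / d_model)
    emb = np.zeros((2 * seq_len - 1, d_model))
    emb[:, 0::2] = np.sin(distances * freqs)
    emb[:, 1::2] = np.cos(distances * freqs)
    return emb

rel = relative_sinusoidal_embeddings(seq_len=5, d_model=8)
print(rel.shape)  # one row per relative distance: (2*5-1, 8)
```

See the paper for how these embeddings enter the (un-scaled) attention score; the sketch above only shows the embedding table itself.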

Requirements

This project needs the natural language processing Python package fastNLP. You can install it with the following command:

pip install fastNLP

Run the code

(1) Prepare the English dataset.

Conll2003

Your file should look like the following (the first token on a line is the word; the last token is the NER tag):

LONDON NNP B-NP B-LOC
1996-08-30 CD I-NP O

West NNP B-NP B-MISC
Indian NNP I-NP I-MISC
all-rounder NN I-NP O
Phil NNP I-NP B-PER
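As a quick sanity check of your data files, the format above (first column the word, last column the tag, blank lines between sentences) can be parsed with a short sketch like this (a hypothetical helper for illustration, not part of this repo):

```python
def read_conll(path):
    """Read a CoNLL-style file: first column = word, last column = NER tag.

    Returns a list of sentences, each a list of (word, tag) pairs.
    Sentences are separated by blank lines.
    """
    sentences, current = [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:  # a blank line ends the current sentence
                if current:
                    sentences.append(current)
                    current = []
                continue
            tokens = line.split()
            current.append((tokens[0], tokens[-1]))
    if current:  # flush the last sentence if the file lacks a trailing blank line
        sentences.append(current)
    return sentences
```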

OntoNotes

We suggest using the code in OntoNotes-5.0-NER to prepare your data. Alternatively, you can prepare the data in the Conll2003 style and then replace OntoNotesNERPipe with Conll2003NERPipe in the code.

For the English datasets, we use the GloVe 100d pretrained embeddings; fastNLP will download them automatically.

You can run the code as follows (make sure you have changed the data path):

python train_tener_en.py --dataset conll2003

or

python train_tener_en.py --dataset en-ontonotes

Although we tried hard to make our results reproducible, your results may still differ from ours. This is usually because the best dev performance does not correlate well with test performance. Running the experiment several times should help.

For the ELMo version, fastNLP will download the ELMo weights automatically; you just need to change the data path in train_elmo_en.py:

python train_elmo_en.py --dataset en-ontonotes

(2) Prepare the Chinese datasets.

MSRA, OntoNotes4.0, Weibo, Resume

Your data should have only two columns: the first is the character, the second is the tag, like the following:

口 O
腔 O
溃 O
疡 O
加 O
上 O

For the Chinese datasets, you can download the pretrained unigram and bigram embeddings from Baidu Cloud. Download 'gigaword_chn.all.a2b.uni.iter50.vec' and 'gigaword_chn.all.a2b.bi.iter50.vec', then replace the embedding paths in train_tener_cn.py.
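The unigram embedding covers single characters, while the bigram embedding covers adjacent character pairs. A minimal sketch of how bigram features are commonly derived from a character sequence (the `</s>` padding symbol here is an assumption; check the vocabulary of the downloaded vectors):

```python
def char_bigrams(chars, eos="</s>"):
    """Pair each character with its right neighbor; pad the last with eos.

    Note: "</s>" is an assumed end-of-sequence symbol for illustration.
    """
    return [a + b for a, b in zip(chars, chars[1:] + [eos])]

chars = list("口腔溃疡")
print(char_bigrams(chars))  # ['口腔', '腔溃', '溃疡', '疡</s>']
```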

You can run the code with the following command:

python train_tener_cn.py --dataset ontonotes
