
TaCL: Improving BERT Pre-training with Token-aware Contrastive Learning

Authors: Yixuan Su, Fangyu Liu, Zaiqiao Meng, Tian Lan, Lei Shu, Ehsan Shareghi, and Nigel Collier

Code of our paper: TaCL: Improving BERT Pre-training with Token-aware Contrastive Learning

[Tutorial on using Chinese TaCL-BERT for Chinese named entity recognition and Chinese word segmentation]

News:

  • [2022/04/08] TaCL is accepted to NAACL 2022!
  • [2021/11/09] The first version of TaCL is released.

Introduction:

Masked language models (MLMs) such as BERT and RoBERTa have revolutionized the field of Natural Language Understanding in the past few years. However, existing pre-trained MLMs often output an anisotropic distribution of token representations that occupies a narrow subset of the entire representation space. Such token representations are not ideal, especially for tasks that demand discriminative semantic meanings of distinct tokens. In this work, we propose TaCL (Token-aware Contrastive Learning), a novel continual pre-training approach that encourages BERT to learn an isotropic and discriminative distribution of token representations. TaCL is fully unsupervised and requires no additional data. We extensively test our approach on a wide range of English and Chinese benchmarks. The results show that TaCL brings consistent and notable improvements over the original BERT model. Furthermore, we conduct a detailed analysis to reveal the merits and inner workings of our approach.
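To make the objective concrete, below is a minimal, illustrative sketch of a token-level contrastive loss in the spirit of TaCL: each student token representation is pulled toward the frozen teacher's representation at the same position, while the other teacher tokens in the sequence act as negatives. The function name and temperature value are our own choices for illustration; the actual implementation (including masking and padding details) lives in ./pretraining/bert_contrastive.py.

import torch
import torch.nn.functional as F

def token_aware_contrastive_loss(student_h, teacher_h, temperature=0.05):
    # student_h: [batch, seq_len, dim], student outputs on the masked input
    # teacher_h: [batch, seq_len, dim], teacher outputs on the original input
    # For each position, the teacher representation at the same position is the
    # positive; the remaining teacher tokens in the sequence act as negatives.
    student_h = F.normalize(student_h, dim=-1)
    teacher_h = F.normalize(teacher_h, dim=-1)
    # pairwise cosine similarities between student and teacher tokens: [batch, seq, seq]
    sim = torch.matmul(student_h, teacher_h.transpose(1, 2)) / temperature
    # the aligned (diagonal) position is the correct "class" for every student token
    labels = torch.arange(sim.size(1), device=sim.device).expand(sim.size(0), -1)
    return F.cross_entropy(sim.transpose(1, 2), labels)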

Main Results:

We show the comparison between TaCL (base version) and the original BERT (base version).

(1) English benchmark results on SQuAD (Rajpurkar et al., 2018) (dev set) and GLUE (Wang et al., 2019) average score.

| Model | SQuAD 1.1 (EM/F1) | SQuAD 2.0 (EM/F1) | GLUE Average |
|-------|-------------------|-------------------|--------------|
| BERT  | 80.8/88.5         | 73.4/76.8         | 79.6         |
| TaCL  | 81.6/89.0         | 74.4/77.5         | 81.2         |

(2) Chinese benchmark results (test set F1) on four NER tasks (MSRA, OntoNotes, Resume, and Weibo) and three Chinese word segmentation (CWS) tasks (PKU, CityU, and AS).

| Model | MSRA  | OntoNotes | Resume | Weibo | PKU   | CityU | AS    |
|-------|-------|-----------|--------|-------|-------|-------|-------|
| BERT  | 94.95 | 80.14     | 95.53  | 68.20 | 96.50 | 97.60 | 96.50 |
| TaCL  | 95.44 | 82.42     | 96.45  | 69.54 | 96.75 | 98.16 | 96.75 |

Huggingface Models:

| Model Name | Model Address |
|------------|---------------|
| English: cambridgeltl/tacl-bert-base-uncased | https://huggingface.co/cambridgeltl/tacl-bert-base-uncased |
| Chinese: cambridgeltl/tacl-bert-base-chinese | https://huggingface.co/cambridgeltl/tacl-bert-base-chinese |

Example Usage:

import torch
from transformers import AutoModel, AutoTokenizer

# initialize the model and tokenizer
model_name = 'cambridgeltl/tacl-bert-base-uncased'
model = AutoModel.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# create input ids ([CLS] and [SEP] are written into the text, so we tokenize it as-is)
text = '[CLS] clbert is awesome. [SEP]'
tokenized_token_list = tokenizer.tokenize(text)
input_ids = torch.LongTensor(tokenizer.convert_tokens_to_ids(tokenized_token_list)).view(1, -1)

# compute hidden states
representation = model(input_ids).last_hidden_state # [1, seqlen, embed_dim]
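Alternatively, you can let the tokenizer insert the special tokens and build the batch for you:

inputs = tokenizer('clbert is awesome.', return_tensors='pt')  # adds [CLS] and [SEP] automatically
representation = model(**inputs).last_hidden_state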

Tutorial on how to reproduce the results in our paper:

1. Environment Setup:

python version: 3.8
pip3 install -r requirements.txt

2. Train TaCL:

(1) Prepare pre-training data:

Please refer to the details provided in the ./pretraining_data directory.

(2) Train the model:

Please refer to the details provided in the ./pretraining directory.

3. Experiments on English Benchmarks:

Please refer to the details provided in the ./english_benchmark directory.

4. Experiments on Chinese Benchmarks:

(1) Chinese Benchmark Data Preparation:

chmod +x ./download_benchmark_data.sh
./download_benchmark_data.sh

(2) Fine-tuning and Inference:

Please refer to the details provided in the ./chinese_benchmark directory.

5. Replicate Our Analysis Results:

We provide all essential code to replicate the results reported in the analysis section of our paper. The related code and instructions are located in the ./analysis directory. Have fun!

Citation:

If you find our paper and resources useful, please kindly cite our paper:

@article{DBLP:journals/corr/abs-2111-04198,
  author    = {Yixuan Su and
               Fangyu Liu and
               Zaiqiao Meng and
               Tian Lan and
               Lei Shu and
               Ehsan Shareghi and
               Nigel Collier},
  title     = {TaCL: Improving {BERT} Pre-training with Token-aware Contrastive Learning},
  journal   = {CoRR},
  volume    = {abs/2111.04198},
  year      = {2021},
  url       = {https://arxiv.org/abs/2111.04198},
  eprinttype = {arXiv},
  eprint    = {2111.04198},
  timestamp = {Wed, 10 Nov 2021 16:07:30 +0100},
  biburl    = {https://dblp.org/rec/journals/corr/abs-2111-04198.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}

Contact

If you have any questions, feel free to contact me at [email protected].


tacl's Issues

How can I get the embeddings?

Excuse me, how can I extract embeddings for specific sentences using your model? I have used keras-bert, for example, which already provides an extract_embeddings function, and I would like to use your model as an embedding layer in Keras.
I appreciate your help, thanks!
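For reference, one simple way to get sentence embeddings from the Hugging Face checkpoint is to mean-pool the token representations; this is an illustrative recipe, not an official recommendation of the authors:

import torch
from transformers import AutoModel, AutoTokenizer

model_name = 'cambridgeltl/tacl-bert-base-uncased'
model = AutoModel.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

sentences = ['TaCL is a contrastive pre-training method.']
inputs = tokenizer(sentences, padding=True, return_tensors='pt')
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state        # [batch, seq_len, dim]
# mean-pool over non-padding tokens to get one vector per sentence
mask = inputs['attention_mask'].unsqueeze(-1)         # [batch, seq_len, 1]
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # [batch, dim]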

Unable to replicate paper numbers on SQuAD using HF checkpoint

Hello! Thank you for releasing the paper and code for TaCL. I'm running into an issue where I'm unable to replicate the numbers in the paper using the released HF checkpoint cambridgeltl/tacl-bert-base-uncased (to be exact, I did no pre-training on my side). My results on SQuAD, following the exact package versions listed in requirements.txt, using the default HF QA scripts, and with 8 Tesla K80s, are as follows:

Here are my results for SQuAD v1 and v2:
[screenshots of SQuAD v1 and v2 evaluation output]

The EM/F1 (80.8 and 87.96 for V1, and 70.81 and 74.05 for V2) are lower than both the BERT and TaCL numbers in the paper, and I don't think this drop is due to a difference in hardware configurations. I wonder if this could be an issue where the HF repo has changed in the meantime. With the current version of the HF repo (obtained after git clone), maybe TaCL's performance is lower on the QA benchmark? Please advise. Thank you!

Is there a bug with the teacher model?

Hi, @yxuansu.
Thank you for sharing code and data.

https://github.com/yxuansu/TaCL/blob/d92e47cfa3c24d9b674423a01b3e4216a6b62891/pretraining/bert_contrastive.py#L66
https://github.com/yxuansu/TaCL/blob/d92e47cfa3c24d9b674423a01b3e4216a6b62891/pretraining/train.py#L76

In the above code, the teacher model never calls eval(), so the same input may produce different outputs because of dropout. Is there something wrong with my understanding? If not, have you compared results with the teacher model in eval() mode?
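For context, the usual PyTorch pattern for a frozen, deterministic teacher looks like the sketch below; this is added only to clarify the question and is not a claim about what the repository actually does:

import torch
from transformers import AutoModel

teacher = AutoModel.from_pretrained('bert-base-uncased')
teacher.eval()                       # disables dropout, so outputs are deterministic
for p in teacher.parameters():
    p.requires_grad = False          # the teacher receives no gradient updates

input_ids = torch.tensor([[101, 2023, 2003, 1037, 3231, 102]])  # "[CLS] this is a test [SEP]"
with torch.no_grad():
    teacher_h = teacher(input_ids).last_hidden_state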

How can I get the vocab from the model?

Thanks for sharing the code. I'm trying to get the vocab from this model but couldn't. With BERT I used vocab = bert.get_tokenizer().get_vocab(); how can I get it from your model, please?
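For reference, the TaCL checkpoints use a standard BERT tokenizer, so the vocabulary can be read directly from the Hugging Face tokenizer rather than from the model:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('cambridgeltl/tacl-bert-base-uncased')
vocab = tokenizer.get_vocab()   # dict mapping token string -> integer id
print(len(vocab))               # 30522 for the bert-base-uncased vocabulary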

Question about the choice of teacher model

Hi, thank you very much for open-sourcing the code.

We would like to try the method from your paper. We currently have two pre-trained models of our own, a base and a large one. In the paper, both the teacher and the student use the base model; would using the large model as the teacher give better results?

Question about training time

Hi, thanks for open-sourcing your work.
May I ask roughly how long the 150k steps of pre-training took you?

Unable to reproduce

Hi,
Thank you for sharing code and data.
I tried to reproduce your English pre-training experiment; however, the results are not as expected. Could you please provide some support? Thank you!

In GLUE dev evaluations:
1. The original BERT achieves the best results.
2. Our trained TaCL under-performs the released checkpoint by a large margin.

Pre-training environment: 8x 32GB V100 / PyTorch 1.6, CUDA 10.2 / Wikipedia data downloaded and processed with the provided scripts
Fine-tuning on GLUE environment: 1x 2080 Ti / PyTorch 1.6, CUDA 10.2 / Hugging Face default settings

Best

Hello, have you tried the RoBERTa model?

Since you compare your results with SimCSE, which evaluates its method on both BERT and RoBERTa, could you please tell me whether you have tried your method on RoBERTa? In many cases, methods that work on BERT do not improve the performance of RoBERTa.
