
d3's Introduction

D3

The implementation of the ACL 2022 paper

A Model-Agnostic Data Manipulation Method for Persona-based Dialogue Generation

Framework


File structure

  1. The main entrance for training the model is train.py in the root directory. We also provide some example shell scripts for running under different conditions.
  2. The code related to our data manipulation method D3 is under ./data_manipulation; you can obtain augmented data using the code in this directory.
  3. ./attention_experiment contains scripts for the attention experiments (e.g., Appendix C.1 and C.4) in our paper.
  4. ./model contains scripts for all other parts necessary to run the experiments, including models, the optimizer, the data interface, and so on.

Requirements

  1. python==3.7.0
  2. torch==1.5.0
  3. transformers==3.1.0
  4. spacy==2.2.4
  5. fairseq==0.9.0 (I downloaded the source code into the root directory)
  6. sentencepiece==0.1.94

For evaluating the generated responses, you need to install Java 1.8.0 and Perl, as well as the Perl libraries XML::Twig, Sort::Naturally, and String::Util (I use cpanm to install them on Linux).

METEOR is also needed for evaluating the quality of responses; please unzip it and put it under ./metrics/.

We also use BERTScore as a metric in our experiments, so you may need to download a proper BERT model for a successful evaluation; here we use a roberta-large model. To rescale the score, we have put the baseline file under ./bert_score/rescale_baseline/en. If you want to rescale the BERTScore, please add --rescale_with_baseline to the training shell.
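
For reference, here is a minimal sketch of how a rescaled BERTScore can be computed with the bert_score package; the candidate and reference sentences below are placeholders, not outputs from our models.

    # Minimal sketch: rescaled BERTScore with the bert_score package.
    # The sentences below are placeholders, not real model outputs.
    from bert_score import score

    candidates = ["i love hiking with my dog on weekends ."]
    references = ["i enjoy walking my dog in the mountains ."]

    # model_type and rescale_with_baseline mirror the settings described above.
    P, R, F1 = score(
        candidates,
        references,
        model_type="roberta-large",
        lang="en",
        rescale_with_baseline=True,
    )
    print(f"rescaled BERTScore F1: {F1.mean().item():.4f}")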


Run the code

To be honest, applying only step 3 (Data Distillation) already achieves satisfactory performance. Step 4 (Data Diversification) contributes less to the final results and is more complex.

1. Obtain PersonaChat Dataset

Obtain the PersonaChat dataset via ParlAI or our zipped version and put it into the ./datasets directory.

2. Prepare models

First, we have to obtain all the trained models needed for data manipulation in the experiments. Go to ./data_manipulation/prepare_model.

1) NLI model for evaluating persona consistency

You need to download the DialogueNLI dataset and put it under this directory. Also, download the large-size RoBERTa MNLI model and put it under this directory, renaming the folder roberta_mnli/.

Then you can train the NLI model on this dataset using the script train_nli_model.py.

After obtaining the best trained model, rename the file best_model.bin to pytorch_model.bin for later use. We refer to the path containing the trained persona-consistency NLI model as PERSONA_NLI.

We also provide our trained NLI model for download.
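
As an illustration only (not the exact evaluation code in this repo), the trained model under PERSONA_NLI can be loaded and queried with the standard transformers API. The MNLI-style label order assumed below (0=contradiction, 1=neutral, 2=entailment) and the path are assumptions; verify them against train_nli_model.py.

    # Illustrative sketch: scoring persona consistency with the trained NLI model.
    # PERSONA_NLI is a placeholder path; the label order is an assumption.
    import torch
    from transformers import RobertaForSequenceClassification, RobertaTokenizer

    PERSONA_NLI = "./persona_nli"
    tokenizer = RobertaTokenizer.from_pretrained(PERSONA_NLI)
    model = RobertaForSequenceClassification.from_pretrained(PERSONA_NLI).eval()

    persona = "i have two dogs ."
    response = "my dogs keep me busy every morning ."
    inputs = tokenizer(persona, response, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs)[0]
    probs = torch.softmax(logits, dim=-1)
    print("entailment probability:", probs[0, 2].item())  # assumes index 2 = entailment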

2) NLI model for evaluating coherence of dialogue history

Use the same RoBERTa MNLI model as in 1) together with train_coherence_nli.py to train it on the InferConvAI2 dataset, a dialogue NLI dataset designed for evaluating the coherence of dialogue history.

Save the obtained model; we refer to the path containing it as COHERENCE_NLI.

3) BERT and GPT2 model used in data diversification

First, use extract_personas_and_responses.py to extract the persona and response texts into two JSON files.

Download the bert-base-uncased model and the gpt2-small model and put them under directories of your choice. Then use finetune_bert_and_gpt2.py to fine-tune the BERT and GPT2 models on personas.json, obtaining BERTper and GPT2per, and fine-tune GPT2 on responses.json to obtain GPT2res. Edit the code to assign the BERT and GPT2 model paths you just defined.
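
The snippet below is only a sketch of how the fine-tuned models might be loaded, and how BERTper can fill a masked persona token; the actual persona-editing logic lives in the data diversification scripts and may differ. All paths are placeholders.

    # Sketch: loading BERTper / GPT2res and filling a masked persona token.
    # Paths are placeholders; this is not the exact editing procedure.
    import torch
    from transformers import BertForMaskedLM, BertTokenizer, GPT2LMHeadModel

    bert_tokenizer = BertTokenizer.from_pretrained("./bert_per")        # BERTper
    bert_model = BertForMaskedLM.from_pretrained("./bert_per").eval()
    gpt2_res = GPT2LMHeadModel.from_pretrained("./gpt2_res").eval()     # GPT2res, loaded for later use

    text = "i like to play [MASK] with my friends ."
    inputs = bert_tokenizer(text, return_tensors="pt")
    mask_pos = (inputs["input_ids"][0] == bert_tokenizer.mask_token_id).nonzero()[0]
    with torch.no_grad():
        logits = bert_model(**inputs)[0]
    predicted_id = int(logits[0, mask_pos].argmax(-1))
    print(bert_tokenizer.convert_ids_to_tokens(predicted_id))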

4) Back translation model for dialogue history diversification

Go to the ./BT directory.

Download the WMT14 En-Fr corpus and preprocess it with BPE from sentencepiece using preprocess.sh, obtaining sentence.bpe.model.

Train the En-Fr and Fr-En translation models using train_en-fr.sh and train_fr-en.sh under this directory, and average the last 5 checkpoints using average_model.sh. We refer to the obtained model checkpoints as BT_EN-FR and BT_FR-EN.

3. Data Distillation

Go to ./data_augmentation/data_distillation.

Use calculate_entailment.py to obtain the prediction results given by the NLI model under PERSONA_NLI that you obtained before.

Then use get_distilled_dataset.py to obtain the distilled dataset from the logits previously given by the NLI model. We refer to the obtained distilled data file as DISTILL_DATA.
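
Conceptually, the distillation step keeps the samples whose responses entail the attached persona according to the scores computed above. A simplified sketch of that kind of filtering follows; the file names, field layout, and the 0.9 threshold are hypothetical, and the real script's criteria may differ.

    # Simplified sketch of the distillation idea: keep samples whose entailment
    # probability (from calculate_entailment.py / PERSONA_NLI) passes a threshold.
    # File names, sample layout, and the 0.9 threshold are illustrative only.
    import json

    def distill(samples, entail_probs, threshold=0.9):
        return [s for s, p in zip(samples, entail_probs) if p > threshold]

    with open("personachat_samples.json") as f:   # hypothetical file name
        samples = json.load(f)
    with open("entailment_probs.json") as f:      # hypothetical file name
        entail_probs = json.load(f)
    with open("DISTILL_DATA.json", "w") as f:
        json.dump(distill(samples, entail_probs), f)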

4. Data diversification

1) Obtain the Multi-GPT2 model for response alignment under new personas

First, you need to obtain a Multi-GPT2 model trained on the distilled samples. You can use the shell script train_multi_gpt2_distilled.sh in the root directory. Set the training data to DISTILL_DATA according to the definitions in config.py. Note that to train this model, you should use the config.json under multi_gpt2 to replace the original config.json in the initial model weight path.
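
Replacing the config can be done by simply copying the provided file over the one in the weight directory; both paths in this snippet are placeholders.

    # Copy the Multi-GPT2 config over the original config.json in the initial
    # GPT2 weight directory before training; both paths are placeholders.
    import shutil

    MULTI_GPT2_CONFIG = "./multi_gpt2/config.json"   # hypothetical location of the provided config
    INITIAL_WEIGHT_DIR = "./gpt2-small"              # wherever the pretrained GPT2 weights live
    shutil.copy(MULTI_GPT2_CONFIG, f"{INITIAL_WEIGHT_DIR}/config.json")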

2) Augment dialogue history

Then you need to augment the dialogue history. Go to ./BT and use get_bt_input_file.py to transform the distilled data DISTILL_DATA into the format required for back translation. Then use bpe_split.py to preprocess the newly obtained txt file with BPE.
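
bpe_split.py presumably applies the learned sentencepiece model to every line of the back-translation input; a minimal sketch of that kind of preprocessing (input/output file names are placeholders) is:

    # Minimal sketch of BPE preprocessing with the sentence.bpe.model learned above.
    # File names are placeholders; bpe_split.py may differ in its details.
    import sentencepiece as spm

    sp = spm.SentencePieceProcessor()
    sp.load("sentence.bpe.model")

    with open("bt_input.txt") as fin, open("bt_input.bpe.txt", "w") as fout:
        for line in fin:
            pieces = sp.encode_as_pieces(line.strip())
            fout.write(" ".join(pieces) + "\n")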

Using evaluate.sh and evaluate_back.sh, you can translate all utterances into French and then back into English.

Finally, using recover.py, you can recover the txt file into the original distilled data format as a JSON file.

3) Edit personas and align responses

Go to ./data_augmentation/data_diversification. Using generate_new_personas_and_edit_responses.py, you can obtain new personas as well as some samples with edited responses where applicable.

Using inference_multi_gpt2.sh in the root directory, you can get the predicted responses for the remaining samples.

Using get_augmented_scores.py, you can compute the filter score for each new sample.

Using filter_augmented_data.py, you can get the filtered diversified samples along with the distilled ones. Together they form the augmented dataset used as an easy curriculum for training.
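
A rough sketch of this final filtering-and-merging step, assuming one filter score per diversified sample, is shown below; the file names, score format, and the 0.5 threshold are illustrative only.

    # Rough sketch: keep diversified samples whose filter score passes a
    # threshold, then merge them with the distilled data to form the augmented
    # (easy-curriculum) dataset. All names and the 0.5 threshold are illustrative.
    import json

    with open("diversified_samples.json") as f:   # hypothetical file name
        diversified = json.load(f)
    with open("augmented_scores.json") as f:      # hypothetical file name
        scores = json.load(f)
    with open("DISTILL_DATA.json") as f:
        distilled = json.load(f)

    kept = [s for s, sc in zip(diversified, scores) if sc > 0.5]
    with open("./datasets/augmented/train_augmented.json", "w") as f:
        json.dump(distilled + kept, f)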

5. Train model

Put the obtained augmented dataset into ./datasets/augmented/; then you can train the two models using train_seq2seq_D3.sh and train_gpt2_D3.sh.


d3's Issues

'GPT2Config' object has no attribute 'shared_attention'

When I run train_gpt2.sh, I get this error:

Traceback (most recent call last):
  File "train.py", line 500, in <module>
    main()
  File "train.py", line 492, in main
    model, tokenizer = get_model_and_tokenizer(args, trainer_config, logger)
  File "train.py", line 105, in get_model_and_tokenizer
    model = GPT2DoubleHeadsModel.from_pretrained('/home/ma-user/work/baojianzhu/gpt2')
  File "/home/ma-user/anaconda3/envs/d3/lib/python3.7/site-packages/transformers/modeling_utils.py", line 852, in from_pretrained
    model = cls(config, *model_args, **model_kwargs)
  File "/home/ma-user/work/baojianzhu/grounded/D3/model/gpt2_model.py", line 847, in __init__
    self.transformer = GPT2Model(config, sinlge_input=True)
  File "/home/ma-user/work/baojianzhu/grounded/D3/model/gpt2_model.py", line 500, in __init__
    self.h = nn.ModuleList([Block(config.n_ctx, config, scale=True, single_input=sinlge_input) for _ in range(config.n_layer)])
  File "/home/ma-user/work/baojianzhu/grounded/D3/model/gpt2_model.py", line 500, in <listcomp>
    self.h = nn.ModuleList([Block(config.n_ctx, config, scale=True, single_input=sinlge_input) for _ in range(config.n_layer)])
  File "/home/ma-user/work/baojianzhu/grounded/D3/model/gpt2_model.py", line 282, in __init__
    self.shared_attention = config.shared_attention
AttributeError: 'GPT2Config' object has no attribute 'shared_attention'

I checked the transformers version but that didn't solve the issue. Could you give me some clues?

About the metrics.

Does the C-score metric in the paper represent the entail_score in the external_metrics_func function?

Also, is the model used to calculate the entail_score the same as the model used for Persona distillation?

Could you share your model for calculating the C-score?

Furthermore, I did not find the BSf metric in the results; how can I calculate it?

Thanks a lot!

postprocessing file missing

In dataset.py, from .postprocessing import augment_replica.
But I can't find the postprocessing file in this repo.
Could you tell me how to deal with this? Thanks a lot!
