SeqDiffuSeq

Text Diffusion Model with Encoder-Decoder Transformers for Sequence-to-Sequence Generation [NAACL 2024]

Paper: https://arxiv.org/abs/2212.10325

Introduction

This is the official repository for the paper SeqDiffuSeq: Text Diffusion with Encoder-Decoder Transformers.

Parts of our code are adapted from the DiffusionLM and minimaldiffusion repositories.

Environment

Before running our code, set up the environment with the following commands:

conda create -n seqdiffuseq python=3.8
conda install mpi4py
pip install torch==1.10.0+cu111 torchvision==0.11.0+cu111 torchaudio==0.10.0
pip install -r requirements.txt
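
As an optional sanity check (not part of the original instructions), you can confirm that the pinned PyTorch build installed correctly and can see your GPU:

import torch

# Expected: 1.10.0+cu111 with this environment; True on a CUDA machine.
print(torch.__version__)
print(torch.cuda.is_available())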

Preparing the datasets

For the non-translation tasks, we follow DiffuSeq for the dataset settings.

For IWSLT14 and WMT14, we follow the data preprocessing from fairseq; we also provide the processed datasets at this link. (Update 04/13/2023: sorry for the missing WMT14 data; it has now been uploaded. Download it from here.)

Training

To run the code, we use IWSLT14 en-de as an illustrative example:

  1. Prepare the IWSLT14 data under the ./data/iwslt14/ directory;
  2. Learn the BPE tokenizer (a sketch of loading the result follows this list) with:
python ./tokenizer_utils.py train-byte-level iwslt14 10000
  3. Train with:
mkdir ckpts
bash ./train_scripts/iwslt_en_de.sh 0 de en
# for en-to-de translation: bash ./train_scripts/iwslt_en_de.sh 0 en de
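
As an illustration of what the tokenizer step produces, here is a minimal sketch of loading a trained byte-level BPE tokenizer with the HuggingFace tokenizers library. The output file names and location (vocab.json and merges.txt under ./data/iwslt14/) are assumptions for illustration; check tokenizer_utils.py for the actual paths.

from tokenizers import ByteLevelBPETokenizer

# Assumed output paths; tokenizer_utils.py may save them elsewhere.
tokenizer = ByteLevelBPETokenizer(
    "./data/iwslt14/vocab.json",
    "./data/iwslt14/merges.txt",
)
enc = tokenizer.encode("ein Beispielsatz")
print(enc.tokens)
print(enc.ids)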

You may modify the scripts in ./train_scripts for your own training settings.

Inference

After training completes, you can run the following command for inference:

bash ./inference_scrpts/iwslt_inf.sh path-to-ckpts/ema_0.9999_280000.pt path-to-save-results path-to-ckpts/alpha_cumprod_step_260000.npy

The ema_0.9999_280000.pt file contains the model weights, and alpha_cumprod_step_260000.npy is the saved noise schedule. You have to use the most recent .npy schedule file saved before the .pt model weights file, as sketched below.
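
Since weights and schedules are saved at different intervals, pairing them by hand can be error-prone. Below is a minimal sketch (a hypothetical helper, not part of this repo) that selects the latest alpha_cumprod_step_*.npy saved at or before a checkpoint's step; the file-name patterns are taken from the example above.

import glob
import os
import re

def schedule_for_checkpoint(ckpt_path, ckpt_dir):
    # The training step is the trailing integer in e.g. ema_0.9999_280000.pt.
    ckpt_step = int(re.search(r"(\d+)\.pt$", ckpt_path).group(1))
    best = None
    for path in glob.glob(os.path.join(ckpt_dir, "alpha_cumprod_step_*.npy")):
        step = int(re.search(r"step_(\d+)\.npy$", path).group(1))
        # Keep the most recent schedule saved at or before the checkpoint step.
        if step <= ckpt_step and (best is None or step > best[0]):
            best = (step, path)
    return best[1] if best is not None else None

# e.g. returns .../alpha_cumprod_step_260000.npy for the 280000-step checkpoint
print(schedule_for_checkpoint("path-to-ckpts/ema_0.9999_280000.pt", "path-to-ckpts"))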

Other Comments

Note that for all training experiments, we set the maximum number of training steps to 1000000 and the warmup steps to 10000. You do not need to train all the way to the maximum number of steps: for IWSLT14 we use checkpoints from around 300000 training steps, for WMT14 from around 500000 steps, and for the non-translation tasks from around 100000 steps.

You can change the hyperparameter settings for your own experiments; increasing the training batch size or modifying the training schedule may bring some improvements.

Citation

If you find our work and code interesting and useful, please cite:

@article{Yuan2022SeqDiffuSeqTD,
  title={SeqDiffuSeq: Text Diffusion with Encoder-Decoder Transformers},
  author={Hongyi Yuan and Zheng Yuan and Chuanqi Tan and Fei Huang and Songfang Huang},
  journal={ArXiv},
  year={2022},
  volume={abs/2212.10325}
}
