moel's Introduction

MoEL: Mixture of Empathetic Listeners

License: MIT

This is the PyTorch implementation of the paper:

MoEL: Mixture of Empathetic Listeners. Zhaojiang Lin, Andrea Madotto, Jamin Shin, Peng Xu, Pascale Fung. EMNLP 2019 [PDF]

This code has been written using PyTorch >= 0.4.1. If you use any source code or datasets included in this toolkit in your work, please cite the following paper. The BibTeX is listed below:

@article{lin2019moel,
  title={MoEL: Mixture of Empathetic Listeners},
  author={Lin, Zhaojiang and Madotto, Andrea and Shin, Jamin and Xu, Peng and Fung, Pascale},
  journal={arXiv preprint arXiv:1908.07687},
  year={2019}
}

Abstract

Previous research on empathetic dialogue systems has mostly focused on generating responses given certain emotions. However, being empathetic not only requires the ability to generate emotional responses, but more importantly, requires understanding user emotions and replying appropriately. In this paper, we propose a novel end-to-end approach for modeling empathy in dialogue systems: Mixture of Empathetic Listeners (MoEL). Our model first captures the user emotions and outputs an emotion distribution. Based on this, MoEL softly combines the output states of the appropriate listener(s), each of which is optimized to react to certain emotions, and generates an empathetic response. Human evaluations on the empathetic-dialogues dataset confirm that MoEL outperforms the multitask training baseline in terms of empathy, relevance, and fluency. Furthermore, a case study on the responses generated by different listeners shows the high interpretability of our model.

MoEL Architecture:

The proposed model, Mixture of Empathetic Listeners, has an emotion tracker, n empathetic listeners along with a shared listener, and a meta listener that fuses the information from the listeners and produces the empathetic response.
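
The soft combination can be pictured with a minimal sketch (illustrative only: the module names, shapes, and the use of plain linear layers as stand-in listeners are assumptions, not the repo's actual transformer decoders):

import torch
import torch.nn as nn

class ListenerMixture(nn.Module):
    # One stand-in "listener" per emotion plus a shared listener; the emotion
    # tracker's softmax distribution softly weights the listener outputs.
    def __init__(self, n_emotions, hidden_dim):
        super().__init__()
        self.listeners = nn.ModuleList(
            [nn.Linear(hidden_dim, hidden_dim) for _ in range(n_emotions)])
        self.shared_listener = nn.Linear(hidden_dim, hidden_dim)
        self.emotion_tracker = nn.Linear(hidden_dim, n_emotions)

    def forward(self, context_state):
        # context_state: (batch, hidden_dim) summary of the dialogue context
        emotion_logits = self.emotion_tracker(context_state)
        weights = torch.softmax(emotion_logits, dim=-1)       # (batch, n_emotions)
        outputs = torch.stack(
            [listener(context_state) for listener in self.listeners], dim=1)
        mixed = (weights.unsqueeze(-1) * outputs).sum(dim=1)  # soft combination
        # a meta listener would fuse this with the shared listener's output
        return mixed + self.shared_listener(context_state), emotion_logits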

Attention on the Listeners

Visualization of the attention on the listeners: the left side shows the context, followed by the responses generated by MoEL. The heat map illustrates the attention weights over the 32 listeners.
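
A heat map of this kind can be drawn with a few lines of matplotlib (a sketch with placeholder data; the real weights would be collected from the model during decoding):

import numpy as np
import matplotlib.pyplot as plt

attn = np.random.rand(8, 32)  # placeholder: 8 examples x 32 listener weights
fig, ax = plt.subplots(figsize=(10, 3))
im = ax.imshow(attn, aspect="auto", cmap="viridis")
ax.set_xlabel("listener index (0-31)")
ax.set_ylabel("dialogue example")
fig.colorbar(im, ax=ax, label="attention weight")
fig.savefig("listener_attention.png", dpi=150, bbox_inches="tight")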

Dependencies

Check the required packages, or simply run:

❱❱❱ pip install -r requirements.txt

Place the pre-trained GloVe embedding glove.6B.300d.txt inside the folder /vectors/.
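
For reference, the GloVe text format is one token per line followed by its 300 vector components; a minimal loader (a sketch only; the repo builds its own embedding matrix internally) looks like this:

import numpy as np

def load_glove(path="vectors/glove.6B.300d.txt", vocab=None):
    # Each line has the form "<word> <v1> <v2> ... <v300>".
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            word, *values = line.rstrip().split(" ")
            if vocab is None or word in vocab:
                vectors[word] = np.asarray(values, dtype=np.float32)
    return vectors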

Experiment

Quick Result

To skip training, please check generation_result.txt.

Dataset

The dataset (empathetic-dialogue) is preprocessed and stored in npy format: sys_dialog_texts.train.npy, sys_target_texts.train.npy, and sys_emotion_texts.train.npy, which consist of parallel lists of contexts (source), responses (target), and emotion labels (additional labels).
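
The three files can be inspected with numpy to check the parallel alignment (a sketch; allow_pickle=True is an assumption about how the arrays were saved, needed when they hold Python lists rather than plain numeric data):

import numpy as np

contexts = np.load("sys_dialog_texts.train.npy", allow_pickle=True)
targets = np.load("sys_target_texts.train.npy", allow_pickle=True)
emotions = np.load("sys_emotion_texts.train.npy", allow_pickle=True)

assert len(contexts) == len(targets) == len(emotions)
print(contexts[0], targets[0], emotions[0])  # one (context, response, emotion) triple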

Training & Testing

MoEL

❱❱❱ python3 main.py --model experts  --label_smoothing --noam --emb_dim 300 --hidden_dim 300 --hop 1 --heads 2 --topk 5 --cuda --pretrain_emb --softmax --basic_learner --schedule 10000 --save_path save/moel/

Transformer baseline

❱❱❱ python3 main.py --model trs  --label_smoothing --noam --emb_dim 300 --hidden_dim 300 --hop 2 --heads 2 --cuda --pretrain_emb --save_path save/trs/

Multitask Transformer baseline

❱❱❱ python3 main.py --model trs  --label_smoothing --noam --emb_dim 300 --hidden_dim 300 --hop 2 --heads 2 --cuda --pretrain_emb --multitask --save_path save/multi-trs/

moel's People

Contributors

zlinao


moel's Issues

Bug in beam search of Expert

Hi all,

Thank you for open sourcing this repository.
I'm having trouble when decoding an expert model.

In particular, all I did was clone the repo and run the following command:

python3 main.py --model experts  --label_smoothing --noam --emb_dim 300 --hidden_dim 300 --hop 1 --heads 2 --topk 5 --cuda --pretrain_emb --softmax --basic_learner --schedule 10000 --save_path save/moel/

It seems like the model finishes training, and in line 97 of main.py, when trying to run evaluation, I get the following traceback:

Traceback (most recent call last):
  File "main.py", line 97, in <module>
    loss_test, ppl_test, bce_test, acc_test, bleu_score_g, bleu_score_b= evaluate(model, data_loader_tst ,ty="test", max_dec_step=50)
  File "/home/repos/empathetic_systems/MoEL/model/common_layer.py", line 824, in evaluate
    sent_b = t.beam_search(batch, max_dec_step=max_dec_step)
  File "/home/repos/empathetic_systems/MoEL/utils/beam_omt_experts.py", line 273, in beam_search
    active_inst_idx_list = beam_decode_step(inst_dec_beams, len_dec_seq, src_seq, src_enc, inst_idx_to_position_map, n_bm, enc_batch_extend_vocab, extra_zeros, mask_src, encoder_db, mask_transformer_db, DB_ext_vocab_batch)
  File "/home/repos/empathetic_systems/MoEL/utils/beam_omt_experts.py", line 205, in beam_decode_step
    dec_seq = prepare_beam_dec_seq(inst_dec_beams, len_dec_seq)
  File "/home/repos/empathetic_systems/MoEL/utils/beam_omt_experts.py", line 157, in prepare_beam_dec_seq
    dec_partial_seq = [b.get_current_state() for b in inst_dec_beams if not b.done]
  File "/home/repos/empathetic_systems/MoEL/utils/beam_omt_experts.py", line 157, in <listcomp>
    dec_partial_seq = [b.get_current_state() for b in inst_dec_beams if not b.done]
  File "/home/repos/empathetic_systems/MoEL/utils/beam_omt_experts.py", line 33, in get_current_state
    return self.get_tentative_hypothesis()
  File "/home/repos/empathetic_systems/MoEL/utils/beam_omt_experts.py", line 90, in get_tentative_hypothesis
    hyps = [self.get_hypothesis(k) for k in keys]
  File "/home/repos/empathetic_systems/MoEL/utils/beam_omt_experts.py", line 90, in <listcomp>
    hyps = [self.get_hypothesis(k) for k in keys]
  File "/home/repos/empathetic_systems/MoEL/utils/beam_omt_experts.py", line 100, in get_hypothesis
    hyp.append(self.next_ys[j+1][k])
IndexError: tensors used as indices must be long, byte or bool tensors

In particular, I'm seeing that the k value, as well as self.prev_ks, gets changed somewhere along the way and no longer holds indices but instead stores float values.

(Pdb) pp k
tensor(0.0014, device='cuda:0')

(Pdb) self.prev_ks
[tensor([0.0026, 0.0054, 0.0003, 0.0069, 0.0075], device='cuda:0'),
 tensor([1.4338e-03, 1.0014e+00, 2.0014e+00, 7.5908e-04, 1.0008e+00], device='cuda:0')]

Do you have any pointers on how to resolve this?

Thank you in advance.
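
One plausible cause (an inference from the float values above, not verified against this repo): on PyTorch >= 1.5, the / operator performs true division even on integer tensors, so OpenNMT-style index arithmetic inside Beam.advance() silently yields float "indices". A hedged fix, assuming the beam code has that shape, is to switch to floor division:

# Inside Beam.advance(), index arithmetic like
#     prev_k = best_scores_id / num_words   # float tensor on PyTorch >= 1.5
# breaks later tensor indexing. Keep the indices integral instead
# (rounding_mode needs PyTorch >= 1.8; best_scores_id // num_words also works):
prev_k = torch.div(best_scores_id, num_words, rounding_mode="floor")
self.prev_ks.append(prev_k)
self.next_ys.append(best_scores_id - prev_k * num_words)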

Questions about code reproduction

When I ran the code locally, I successfully ran the program according to the README.md, but because of the early-stopping patience, training ended early and entered the testing stage. For both TRS and Multitask TRS this happened at around 16,000 steps. Is this normal?
When I ran MoEL, I finally got this result:

EVAL   Loss     PPL       Accuracy   Bleu_g   Bleu_b
test   3.7540   42.6931   0.37       2.85     2.87

How should I set things up to reproduce the results of the paper?

Thanks

about metric

Hi,

This is very nice work! However, there are some details I'm confused about:

  1. Does the BLEU score mean the average BLEU, as in the original paper "Towards Empathetic Open-domain Conversation Models: a New Benchmark and Dataset"?
  2. Why don't you compare against the baseline results in "Towards Empathetic Open-domain Conversation Models: a New Benchmark and Dataset"?

dataset process

Thanks for your contribution. I am interested in your work.

May I ask for the code used to preprocess EmpatheticDialogues?

Thanks a lot.

Inference

Dear Authors,

Thank you for providing code.

I want to use the trained model to generate responses for new text. Is there a way to run the inference step on new input?
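
The repository does not ship a single-utterance entry point, so any inference script is a guess; a hypothetical sketch (the checkpoint path, batch keys, vocabulary handling, and the decoder_greedy method name are all assumptions) might look like this:

import torch

word2index = {}          # placeholder: the real map comes from the preprocessing step
UNK_idx, PAD_idx = 0, 1  # index conventions assumed

model = torch.load("save/moel/model_best", map_location="cpu")  # path assumed
model.eval()

tokens = "i just got a new puppy and i am so excited".split()
ids = torch.LongTensor([[word2index.get(w, UNK_idx) for w in tokens]])
batch = {"input_batch": ids, "mask_input": ids.eq(PAD_idx).unsqueeze(1)}  # keys assumed
with torch.no_grad():
    print(model.decoder_greedy(batch, max_dec_step=30))  # method name assumed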
