cooelf / DeepUtteranceAggregation

Modeling Multi-turn Conversation with Deep Utterance Aggregation (COLING 2018)

Home Page: https://www.aclweb.org/anthology/C18-1317


deeputteranceaggregation's Introduction

Code and sample data accompanying the paper Modeling Multi-turn Conversation with Deep Utterance Aggregation.

Dataset

We release the E-commerce Dialogue Corpus, comprising a training set, a development set, and a test set for retrieval-based chatbots. The statistics of the E-commerce Conversation Corpus are shown in the following table.

                                      Train    Val      Test
Session-response pairs                1M       10k      10k
Avg. positive responses per session   1        1        1
Min. turns per session                3        3        3
Max. turns per session                10       10       10
Avg. turns per session                5.51     5.48     5.64
Avg. words per utterance              7.02     6.99     7.11

The full corpus can be downloaded from https://drive.google.com/file/d/154J-neBo20ABtSmJDvm7DK0eTuieAuvw/view?usp=sharing.

Data template

label \t conversation utterances (split by \t) \t response
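
As an illustration, here is a minimal parsing sketch for this format (the function and variable names are ours; only the tab-separated layout above comes from the corpus description):

    def parse_line(line):
        """Split one corpus line into (label, context utterances, response)."""
        fields = line.rstrip("\n").split("\t")
        label = fields[0]          # e.g. "1" for a positive, "0" for a negative response (assumed convention)
        utterances = fields[1:-1]  # the conversation context, one utterance per field
        response = fields[-1]      # the candidate response
        return label, utterances, response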

Source Code

We also release our source code to help others reproduce our results.

Instructions

Our code is compatible with Python 2, so in all the commands listed below, python refers to Python 2.

We strongly suggest using conda to manage the virtual environment.

  • Install requirements

    pip install -r requirements.txt

  • Pretrain word embedding

    python train_word2vec.py ./ECD_sample/train embedding

  • Preprocess the data

    python PreProcess.py --train_dataset ./ECD_sample/train --valid_dataset ./ECD_sample/valid --test_dataset ./ECD_sample/test --pretrained_embedding embedding --save_dataset ./ECD_sample/all

  • Train the model

    bash train.sh

Tips

If you encounter CUDA issues, please check your environment. For reference:

Theano 0.9.0
Cuda 8.0
Cudnn 5.1
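
A quick way to check what your own environment provides is a short Python snippet like the following (a minimal sketch; it assumes Theano is already installed and only prints versions and configuration):

    # check_env.py -- print the parts of the environment that matter here
    from __future__ import print_function
    import sys
    import theano

    print("Python:", sys.version.split()[0])   # should be 2.7.x for this code base
    print("Theano:", theano.__version__)       # 0.9.0 is the reference version above
    print("device:", theano.config.device)     # e.g. gpu / cpu
    print("floatX:", theano.config.floatX)     # float32 is expected when training on GPU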

Reference

If you use this code, please cite our paper:

@inproceedings{zhang2018dua,
    title = {Modeling Multi-turn Conversation with Deep Utterance Aggregation},
    author = {Zhang, Zhuosheng and Li, Jiangtong and Zhu, Pengfei and Zhao, Hai},
    booktitle = {Proceedings of the 27th International Conference on Computational Linguistics (COLING 2018)},
    pages = {3740--3752},
    year = {2018}
}


deeputteranceaggregation's Issues

Baseline code

Could you please share the source code you used to reproduce the baseline results on your new dataset?

python PreProcess.py --train_dataset ./ECD_sample/train --valid_dataset ./ECD_sample/valid --test_dataset ./ECD_sample/test --pretrained_embedding embedding --save_dataset ./ECD_sample/all

Traceback (most recent call last):
  File "PreProcess.py", line 206, in <module>
    ParseMultiTurn()
  File "PreProcess.py", line 200, in ParseMultiTurn
    word2vec = WordVecs(args.pretrained_embedding, vocab, True, True)
  File "PreProcess.py", line 114, in __init__
    self.W, self.word_idx_map = self.get_W(word_vecs, k=self.k)
  File "PreProcess.py", line 127, in get_W
    W[i] = word_vecs[word]
ValueError: could not broadcast input array from shape (200) into shape (2582)

What should I do now? Could somebody help me out, please?
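
For reference, an error like this suggests that the embedding dimensionality inferred by PreProcess.py (2582 above) does not match the length of the vectors in the embedding file (200 above). A quick way to inspect the file is sketched below, assuming it is a plain word2vec-style text file with one token followed by its vector per line (the script name is hypothetical):

    # inspect_embedding.py -- sanity-check vector lengths (illustrative only)
    import sys

    # Usage (hypothetical): python inspect_embedding.py embedding
    with open(sys.argv[1]) as f:
        for i, line in enumerate(f):
            if i >= 5:          # the first few lines are enough to see the dimensionality
                break
            parts = line.rstrip().split()
            # a word2vec-style text file may start with a "vocab_size dimension" header line
            print("line %d: first token=%r, remaining fields=%d" % (i, parts[0], len(parts) - 1))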

TypeError: ('An update must have the same type as the original shared variable

ub16c9@ub16c9-gpu:/media/ub16c9/fcd84300-9270-4bbd-896a-5e04e79203b7/ub16_prj/DeepUtteranceAggregation$ bash train.sh
WARNING (theano.tensor.blas): Using NumPy C-API based implementation for BLAS functions.
image shape (10, 2, 50, 50)
filter shape (8, 2, 3, 3)
/usr/local/lib/python2.7/dist-packages/theano/tensor/nnet/conv.py:98: UserWarning: theano.tensor.nnet.conv.conv2d is deprecated. Use theano.tensor.nnet.conv2d instead.
warnings.warn("theano.tensor.nnet.conv.conv2d is deprecated."
/media/ub16c9/fcd84300-9270-4bbd-896a-5e04e79203b7/ub16_prj/DeepUtteranceAggregation/model.py:515: UserWarning: DEPRECATION: the 'ds' parameter is not going to exist anymore as it is going to be replaced by the parameter 'ws'.
self.output =theano.tensor.signal.pool.pool_2d(input=conv_out_tanh, ds=self.poolsize, ignore_border=True,mode="max")
Traceback (most recent call last):
  File "main.py", line 489, in <module>
    val_frequency=args.val_frequency)
  File "main.py", line 408, in main
    givens=dic, on_unused_input='ignore')
  File "/usr/local/lib/python2.7/dist-packages/theano/compile/function.py", line 317, in function
    output_keys=output_keys)
  File "/usr/local/lib/python2.7/dist-packages/theano/compile/pfunc.py", line 449, in pfunc
    no_default_updates=no_default_updates)
  File "/usr/local/lib/python2.7/dist-packages/theano/compile/pfunc.py", line 208, in rebuild_collect_shared
    raise TypeError(err_msg, err_sug)
TypeError: ('An update must have the same type as the original shared variable (shared_var=<TensorType(float32, matrix)>, shared_var.type=TensorType(float32, matrix), update_val=Elemwise{add,no_inplace}.0, update_val.type=TensorType(float64, matrix)).', 'If the difference is related to the broadcast pattern, you can call the tensor.unbroadcast(var, axis_to_unbroadcast[, ...]) function to remove broadcastable dimensions.')
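This error means that a float32 shared variable is being updated with a float64 expression. A common workaround in Theano code, sketched below rather than taken from this repository, is to cast each update back to its shared variable's dtype before compiling the function; running with floatX set to float32 (for example via the THEANO_FLAGS environment variable) is another frequent remedy.

    # cast_updates.py -- illustrative sketch, not the repository's fix
    import theano.tensor as T

    def cast_updates(updates):
        """Cast each (shared_variable, update_expression) pair so the update
        matches the shared variable's dtype (e.g. float32 on GPU)."""
        return [(var, T.cast(expr, var.dtype)) for var, expr in updates]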

Question about "Turns-aware Aggregation"

It is a bit strange that you concatenate the representation of each utterance/response and the last utterance along the last axis (https://github.com/cooelf/DeepUtteranceAggregation/blob/master/main.py#L327).
Can you give some explanation?

P.S.
If I did not misunderstand your idea, you concatenate the representation of each utterance/response (S_j = [h_{j,1}, h_{j,2}, ..., h_{j,n}]) and the last utterance (S_t = [h_{t,1}, h_{t,2}, ..., h_{t,n}]), so what we get is [[h_{j,1}, h_{t,1}], [h_{j,2}, h_{t,2}], ..., [h_{j,n}, h_{t,n}]].
That is, the representations at the same timestep are concatenated into one vector. I do not think this is reasonable, since the fact that two words from two distinct sentences share a timestep carries little meaning.
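
For reference, the two possible readings of this concatenation differ only in shape; a small numpy sketch of the behaviour described above (not code from the repository):

    import numpy as np

    n, d = 50, 200                # n words per utterance, d-dimensional representations
    S_j = np.random.rand(n, d)    # representation of utterance j
    S_t = np.random.rand(n, d)    # representation of the last utterance

    feature_concat = np.concatenate([S_j, S_t], axis=-1)  # [h_{j,k}; h_{t,k}] at each timestep
    time_concat = np.concatenate([S_j, S_t], axis=0)      # the two utterances stacked in time
    assert feature_concat.shape == (50, 400)
    assert time_concat.shape == (100, 200)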

Training speed

Hi!

Could you please tell me what training speed you get with this code?

Question about Turns-aware Aggregation Encoding

Hello, I am a graduate student at Chongqing University. While reading your paper I ran into a question about the Turns-aware Aggregation Encoding part, specifically about how the message is concatenated with the other utterances. Here is the question:
Suppose a sentence has shape 50×200 before concatenation (50 words, 200-dimensional word vectors). Does it become 100×200 after concatenation? That is, does an utterance go from 50 words to 100 words after the concatenation?
I have never been sure whether the concatenation changes the shape in this way, and I hope you can clarify. Thank you!

Original Taobao data

The Taobao data is tokenized. Do you have the original, untokenized data? If so, could you please share it with me, for example via a URL or by e-mail? My e-mail is [email protected]. Thank you very much.

About the pretrained word2vec model you used.

Hello, thank you for sharing the E-commerce dataset!
However, regarding the pretrained word2vec model, I could only find a script in this repository, which seems insufficient for reproducing the results of your paper. Could you share the word2vec model you used, or tell me how you pretrained the word embedding model (the settings, the hyperparameters, the training dataset, etc.)?
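
The exact pretraining configuration is not documented on this page; only the train_word2vec.py script is shipped. For orientation, a typical gensim-based pretraining run looks roughly like the sketch below, where every hyperparameter (including the 200-dimensional vector size suggested by the traceback in the first issue) is an assumption rather than the authors' confirmed setting, and an older gensim API (Python 2 era, with the size parameter) is assumed:

    # pretrain_w2v_sketch.py -- illustrative only, not the repository's train_word2vec.py
    from gensim.models import Word2Vec
    from gensim.models.word2vec import LineSentence

    # LineSentence splits each line on whitespace; the real script may first strip
    # the label column and tabs from the ECD training file.
    sentences = LineSentence("./ECD_sample/train")

    model = Word2Vec(sentences, size=200, window=5, min_count=1, sg=1, workers=4)
    model.wv.save_word2vec_format("embedding", binary=False)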
