
tc-bot's Introduction

End-to-End Task-Completion Neural Dialogue Systems

An implementation of the papers
End-to-End Task-Completion Neural Dialogue Systems and A User Simulator for Task-Completion Dialogues.


This document describes how to run the simulation and different dialogue agents (rule-based, command line, reinforcement learning). More instructions to plug in your customized agents or user simulators are in the Recipe section of the paper.


Data

All the data is under ./src/deep_dialog/data

  • Movie Knowledge Bases
    movie_kb.1k.p --- 94% success rate (for user_goals_first_turn_template_subsets.v1.p)
    movie_kb.v2.p --- 36% success rate (for user_goals_first_turn_template_subsets.v1.p)

  • User Goals
    user_goals_first_turn_template.v2.p --- user goals extracted from the first user turn
    user_goals_first_turn_template.part.movie.v1.p --- a subset of user goals [please use this one; the upper-bound success rate on movie_kb.1k.p is 0.9765]

  • NLG Rule Template
    dia_act_nl_pairs.v6.json --- predefined NLG rule templates for both the user simulator and the agent.

  • Dialog Act Intent
    dia_acts.txt

  • Dialog Act Slot
    slot_set.txt

Parameter

Basic setting

--agt: the agent id
--usr: the user (simulator) id
--max_turn: maximum number of turns per dialogue
--episodes: number of dialogues to run
--slot_err_prob: slot-level error probability
--slot_err_mode: which slot error mode to use
--intent_err_prob: intent-level error probability

Data setting

--movie_kb_path: the path to the movie KB (agent side)
--goal_file_path: the path to the user goal file (user simulator side)

Model setting

--dqn_hidden_size: hidden layer size for the RL (DQN) agent
--batch_size: batch size for DQN training
--simulation_epoch_size: how many dialogues to simulate in one epoch
--warm_start: use the rule policy to fill the experience replay buffer at the beginning
--warm_start_epochs: how many dialogues to run during warm start

Display setting

--run_mode: 0 for display mode (NL); 1 for debug mode (Dia_Act); 2 for debug mode (Dia_Act and NL); 3 or above for no display (i.e. training)
--act_level: 0 for the user simulator at the Dia_Act level; 1 for the user simulator at the NL level
--auto_suggest: 0 for no auto-suggestion; 1 for auto-suggestion
--cmd_input_mode: 0 for NL input; 1 for Dia_Act input (AgentCmd only)

Others

--write_model_dir: the directory to write the models
--trained_model_path: the path of a trained RL agent model; load it for prediction.

--learning_phase: train/test/all (default: all). You can split the user goal set into train and test sets, or leave it unsplit (all). Note that we introduce some randomness at the first sampled user action, so even for the same user goal, the generated dialogue may differ.
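The split itself is straightforward. A minimal sketch (the helper name and the train fraction are illustrative assumptions, not the repo's API):

```python
import random

def split_goals(goals, train_frac=0.8, seed=0):
    # Hypothetical helper mirroring --learning_phase: shuffle the user
    # goals once with a fixed seed, then carve off train/test subsets.
    rng = random.Random(seed)
    shuffled = list(goals)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]
```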

Running Dialogue Agents

Rule Agent

python run.py --agt 5 --usr 1 --max_turn 40 \
	      --episodes 150 \
	      --movie_kb_path ./deep_dialog/data/movie_kb.1k.p \
	      --goal_file_path ./deep_dialog/data/user_goals_first_turn_template.part.movie.v1.p \
	      --intent_err_prob 0.00 \
	      --slot_err_prob 0.00 \
	      --act_level 0

Cmd Agent

NL Input

python run.py --agt 0 --usr 1 --max_turn 40 \
	      --episodes 150 \
	      --movie_kb_path ./deep_dialog/data/movie_kb.1k.p \
	      --goal_file_path ./deep_dialog/data/user_goals_first_turn_template.part.movie.v1.p \
	      --intent_err_prob 0.00 \
	      --slot_err_prob 0.00 \
	      --act_level 0 \
	      --run_mode 0 \
	      --cmd_input_mode 0

Dia_Act Input

python run.py --agt 0 --usr 1 --max_turn 40 \
	      --episodes 150 \
	      --movie_kb_path ./deep_dialog/data/movie_kb.1k.p \
	      --goal_file_path ./deep_dialog/data/user_goals_first_turn_template.part.movie.v1.p \
	      --intent_err_prob 0.00 \
	      --slot_err_prob 0.00 \
	      --act_level 0 \
	      --run_mode 0 \
	      --cmd_input_mode 1

End2End RL Agent

Train End2End RL Agent without NLU and NLG (with simulated noise in NLU)

python run.py --agt 9 --usr 1 --max_turn 40 \
	      --movie_kb_path ./deep_dialog/data/movie_kb.1k.p \
	      --dqn_hidden_size 80 \
	      --experience_replay_pool_size 1000 \
	      --episodes 500 \
	      --simulation_epoch_size 100 \
	      --write_model_dir ./deep_dialog/checkpoints/rl_agent/ \
	      --run_mode 3 \
	      --act_level 0 \
	      --slot_err_prob 0.00 \
	      --intent_err_prob 0.00 \
	      --batch_size 16 \
	      --goal_file_path ./deep_dialog/data/user_goals_first_turn_template.part.movie.v1.p \
	      --warm_start 1 \
	      --warm_start_epochs 120

Train End2End RL Agent with NLU and NLG

python run.py --agt 9 --usr 1 --max_turn 40 \
	      --movie_kb_path ./deep_dialog/data/movie_kb.1k.p \
	      --dqn_hidden_size 80 \
	      --experience_replay_pool_size 1000 \
	      --episodes 500 \
	      --simulation_epoch_size 100 \
	      --write_model_dir ./deep_dialog/checkpoints/rl_agent/ \
	      --run_mode 3 \
	      --act_level 1 \
	      --slot_err_prob 0.00 \
	      --intent_err_prob 0.00 \
	      --batch_size 16 \
	      --goal_file_path ./deep_dialog/data/user_goals_first_turn_template.part.movie.v1.p \
	      --warm_start 1 \
	      --warm_start_epochs 120

Test RL Agent with N dialogues:

python run.py --agt 9 --usr 1 --max_turn 40 \
	      --movie_kb_path ./deep_dialog/data/movie_kb.1k.p \
	      --dqn_hidden_size 80 \
	      --experience_replay_pool_size 1000 \
	      --episodes 300 \
	      --simulation_epoch_size 100 \
	      --write_model_dir ./deep_dialog/checkpoints/rl_agent/ \
	      --slot_err_prob 0.00 \
	      --intent_err_prob 0.00 \
	      --batch_size 16 \
	      --goal_file_path ./deep_dialog/data/user_goals_first_turn_template.part.movie.v1.p \
	      --trained_model_path ./deep_dialog/checkpoints/rl_agent/noe2e/agt_9_478_500_0.98000.p \
	      --run_mode 3

Evaluation

To evaluate the performance of agents, three metrics are available: success rate, average reward, and average turns. Here we show the learning curve for success rate.

  1. Plot the learning curve:
     python draw_learning_curve.py --result_file ./deep_dialog/checkpoints/rl_agent/noe2e/agt_9_performance_records.json
  2. Pull out the numbers and draw the curves in Excel.
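If you prefer to pull the numbers out programmatically rather than in Excel, here is a sketch. It assumes the records file maps a "success_rate" key to an epoch-to-value dict; check your agt_9_performance_records.json for the actual layout before relying on this:

```python
import json

def success_curve(records):
    # records: the parsed performance-records JSON. We assume (not
    # guaranteed) it contains a "success_rate" entry mapping epoch
    # numbers (as strings) to success-rate values.
    points = records['success_rate']
    # Sort numerically, not lexically, so epoch 10 comes after epoch 2.
    return [(int(e), points[e]) for e in sorted(points, key=int)]

# Usage (path from the command above):
# with open('./deep_dialog/checkpoints/rl_agent/noe2e/agt_9_performance_records.json') as f:
#     curve = success_curve(json.load(f))
```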

Reference

Main papers to be cited

@inproceedings{li2017end,
  title={End-to-End Task-Completion Neural Dialogue Systems},
  author={Li, Xuijun and Chen, Yun-Nung and Li, Lihong and Gao, Jianfeng and Celikyilmaz, Asli},
  booktitle={Proceedings of The 8th International Joint Conference on Natural Language Processing},
  year={2017}
}

@article{li2016user,
  title={A User Simulator for Task-Completion Dialogues},
  author={Li, Xiujun and Lipton, Zachary C and Dhingra, Bhuwan and Li, Lihong and Gao, Jianfeng and Chen, Yun-Nung},
  journal={arXiv preprint arXiv:1612.05688},
  year={2016}
}

tc-bot's People

Contributors

shangyusu, xiul-msr, xjli, yvchen


tc-bot's Issues

Following `Train End2End RL Agent with NLU and NLG` in the README results in errors

As the title says; the error messages follow:

➜  src git:(master) python run.py --agt 9 --usr 1 --max_turn 40 \
             --movie_kb_path ./deep_dialog/data/movie_kb.1k.p \
             --dqn_hidden_size 80 \
             --experience_replay_pool_size 1000 \
             --episodes 500 \
             --simulation_epoch_size 100 \
             --write_model_dir ./deep_dialog/checkpoints/rl_agent/ \
             --run_mode 3 \
             --act_level 1 \
             --slot_err_prob 0.00 \
             --intent_err_prob 0.00 \
             --batch_size 16 \
             --goal_file_path ./deep_dialog/data/user_goals_first_turn_template.part.movie.v1.p \
             --warm_start 1 \
             --warm_start_epochs 120
Dialog Parameters:
{
  "simulation_epoch_size": 100,
  "slot_err_mode": 0,
  "diaact_nl_pairs": "./deep_dialog/data/dia_act_nl_pairs.v6.json",
  "save_check_point": 10,
  "episodes": 500,
  "predict_mode": false,
  "cmd_input_mode": 0,
  "goal_file_path": "./deep_dialog/data/user_goals_first_turn_template.part.movie.v1.p",
  "max_turn": 40,
  "experience_replay_pool_size": 1000,
  "write_model_dir": "./deep_dialog/checkpoints/rl_agent/",
  "usr": 1,
  "auto_suggest": 0,
  "run_mode": 3,
  "trained_model_path": null,
  "success_rate_threshold": 0.3,
  "nlu_model_path": "./deep_dialog/models/nlu/lstm_[1468447442.91]_39_80_0.921.p",
  "epsilon": 0,
  "batch_size": 16,
  "nlg_model_path": "./deep_dialog/models/nlg/lstm_tanh_relu_[1468202263.38]_2_0.610.p",
  "act_set": "./deep_dialog/data/dia_acts.txt",
  "movie_kb_path": "./deep_dialog/data/movie_kb.1k.p",
  "slot_err_prob": 0.0,
  "warm_start": 1,
  "warm_start_epochs": 120,
  "dict_path": "./deep_dialog/data/dicts.v3.p",
  "intent_err_prob": 0.0,
  "slot_set": "./deep_dialog/data/slot_set.txt",
  "act_level": 1,
  "dqn_hidden_size": 80,
  "agt": 9,
  "gamma": 0.9
}
warm_start starting ...
warm_start simulation episode 0: Fail
Traceback (most recent call last):
  File "run.py", line 400, in <module>
    run_episodes(num_episodes, status)
  File "run.py", line 341, in run_episodes
    warm_start_simulation()
  File "run.py", line 313, in warm_start_simulation
    episode_over, reward = dialog_manager.next_turn()
  File "/Users/samsu/Desktop/github/UserSimulator/src/deep_dialog/dialog_system/dialog_manager.py", line 76, in next_turn
    self.agent.register_experience_replay_tuple(self.state, self.agent_action, self.reward, self.state_tracker.get_state_for_agent(), self.episode_over)
  File "/Users/samsu/Desktop/github/UserSimulator/src/deep_dialog/agents/agent_dqn.py", line 217, in register_experience_replay_tuple
    state_tplus1_rep = self.prepare_state_representation(s_tplus1)
  File "/Users/samsu/Desktop/github/UserSimulator/src/deep_dialog/agents/agent_dqn.py", line 95, in prepare_state_representation
    user_act_rep[0,self.act_set[user_action['diaact']]] = 1.0
KeyError: 'UNK'

Any help would be appreciated.
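One possible workaround (a sketch, not the maintainers' fix): guard the act-set lookup so unknown intents from the NLU fall back to a dedicated UNK index instead of raising KeyError. The helper below is hypothetical; it assumes `act_set` is the intent-to-index dict loaded from dia_acts.txt:

```python
def act_index(act_set, diaact, unk_key='UNK'):
    # act_set is assumed to map intent names (from dia_acts.txt) to
    # integer indices. When the NLU emits an intent missing from the
    # set (here, 'UNK'), substitute a fallback index rather than
    # letting prepare_state_representation crash.
    if diaact not in act_set:
        diaact = unk_key if unk_key in act_set else sorted(act_set)[0]
    return act_set[diaact]
```

Alternatively, adding an `UNK` line to dia_acts.txt may avoid the crash at the source, though this has not been confirmed against the repo.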

Can't find the code for UserSimulator

Hello, where is the code for the paper A User Simulator for Task-Completion Dialogues? I can only find the rule-based
simulator in the usersim folder. Thank you!

No RealUser

There is no RealUser module. How can we communicate with the trained agent?

No success after warm start (solved, my problem; sorry)

Hi, firstly thanks for the paper and the code.

I'm trying to understand all the code, but when I run for example for End2End RL agent :
python run.py --agt 9 --usr 1 --max_turn 40 --movie_kb_path ./deep_dialog/data/movie_kb.1k.p --dqn_hidden_size 80 --experience_replay_pool_size 1000 --episodes 500 --simulation_epoch_size 100 --write_model_dir ./deep_dialog/checkpoints/rl_agent/ --run_mode 3 --act_level 0 --slot_err_prob 0.00 --intent_err_prob 0.00 --batch_size 16 --goal_file_path ./deep_dialog/data/user_goals_first_turn_template.part.movie.v1.p --warm_start 1 --warm_start_epochs 120

I don't get any success after the warm start; it says:

simulation success rate 0.0, ave reward -60.0, ave turns 42.0 cur bellman err 11.8294, experience replay pool 1850
Simulation success rate 0.0, Ave reward -60.0, Ave turns 42.0, Best success rate 0

Could you help me with this? Am I running something wrong? Thanks!

How can a pickle file be used in a Java service?

Hi, thank you for your great work!
I just trained a model with TC-Bot. If I want to use that pickle model in Java, how can I convert it?
Or is there a TensorFlow version of your project?
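A common alternative to converting the pickle is to keep the model in Python and expose it over HTTP so the Java service can call it. A minimal Python 3 sketch using only the standard library; `respond_to` is a hypothetical stand-in for whatever prediction call your loaded agent actually exposes:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class AgentHandler(BaseHTTPRequestHandler):
    # Set this to your unpickled agent before serving. The agent is
    # assumed to expose a respond_to(state) -> action method; adapt the
    # call to the real TC-Bot agent API.
    agent = None

    def do_POST(self):
        length = int(self.headers['Content-Length'])
        request = json.loads(self.rfile.read(length))
        reply = {'action': self.agent.respond_to(request['state'])}
        body = json.dumps(reply).encode('utf-8')
        self.send_response(200)
        self.send_header('Content-Type', 'application/json')
        self.end_headers()
        self.wfile.write(body)

# Usage sketch:
# AgentHandler.agent = my_loaded_agent
# HTTPServer(('127.0.0.1', 8080), AgentHandler).serve_forever()
```

The Java side then only needs an ordinary JSON-over-HTTP client; no pickle compatibility is required.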

from agent import Agent

Hi,

Thank you for publishing your code, but I have some questions:

  1. Your Python version is 2.x, right?
  2. What is 'agent'? How can I use it in Python 3.6?

RealUser not defined

Just a small bug, maybe you forgot to remove those lines, or you're currently working on another User?
On line 184 of run.py you create a RealUser object which is not imported/implemented.

if usr == 0:
    user_sim = RealUser(movie_dictionary, act_set, slot_set, goal_set, usersim_params)

You also wrote in the parameters help:
parser.add_argument('--usr', dest='usr', default=0, type=int, help='Select a user simulator. 0 is a Frozen user simulator.')
What is a Frozen user simulator?

Thanks!

Python3 support

Will we ever get Python 3 support for TC-Bot? It looks very promising and I'm interested in using it.

Fix to "ValueError: could not convert string to float:"

When I try to run the code with the Rule Agent's params, I got this:
ValueError: could not convert string to float:
My solution is changing the file-open mode from "rb" to "r", and it works for me.
Two places need the fix:

  • line 50 in nlu.py

  • line 138 in nlg.py

I know some people may not encounter the error and can enjoy the code as-is.
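An alternative that keeps binary mode (a sketch; the helper name is invented, and the root cause is assumed to be stray carriage returns in the line-oriented protocol-0 pickle, e.g. after a Windows checkout): normalize line endings before unpickling.

```python
import pickle

def load_ascii_pickle(path):
    # Protocol-0 pickles are line-oriented ASCII. If '\r\n' endings
    # crept into the checkpoint file, float parsing can break when the
    # file is read in binary mode; normalizing the endings is a
    # portable alternative to switching open() from 'rb' to 'r'.
    with open(path, 'rb') as f:
        data = f.read().replace(b'\r\n', b'\n')
    return pickle.loads(data)
```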

About data

In your paper, you said: "The raw conversational data were collected via Amazon Mechanical Turk, with annotations provided by domain experts. In total, we have labeled 280 dialogues." But I can't find any labeled data in this code. Did you use any labeled data in this experiment?
