
Conversational agent for (person A - person B - person A) dialogue triples, based on a seq2seq model with global attention and an anti-language model

License: GNU General Public License v3.0


Natural Language Understanding 2017

Project 2: Beyond seq2seq Dialog Systems

This repository contains the submission for the student project in the "Natural Language Understanding" course at ETH Zürich, spring 2017. The goal of the project was to train a conversational agent using sequence-to-sequence recurrent neural networks and to improve on the baseline seq2seq model. The extensions are global attention and an anti-language model.

Authors

Project Description

Can be found here.

Project Report

Can be found here.

Requirements

  • Python 3.x
  • tensorflow 1.1
  • tqdm
  • gensim
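
Assuming a standard pip-based environment, the dependencies can be installed along these lines (the exact TensorFlow pin is illustrative):

pip install tensorflow==1.1.0 tqdm gensim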

Training the model

The model can be trained using main.py. python main.py -h shows which parameters we expect:

main.py -n <num_cores> -x <experiment>
num_cores = Number of cores requested from the cluster. Set to -1 to leave unset.
experiment = experiment setup that should be executed, e.g. 'baseline' or 'attention'
tag = optional tag or name to distinguish runs, e.g. 'bidirect3layers'
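
As an illustration, a training run of the attention setup requesting four cluster cores (the core count is arbitrary) could be launched with:

python main.py -n 4 -x attention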

We expect the training data triples and the Cornell dataset to be in the ./data/ subdirectory under their original filenames.

Detailed configuration parameters may be changed in config.py.

Model checkpoints, summaries and the configuration are regularly stored in the ./logs/ folder (and its subdirectories).

Predicting words from the model

Predictions can be generated with predict.py. python predict.py -h shows which parameters we expect:

predict.py -n <num_cores> -x <experiment> -o <output file> -c <checkpoint>
num_cores = Number of cores requested from the cluster. Set to -1 to leave unset.
experiment = experiment setup that should be executed, e.g. 'baseline'
checkpoint = Path to the checkpoint to load parameters from, e.g. './logs/baseline-ep4-500'
output = where to write the prediction outputs, e.g. './predictions.out'
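
For example, generating predictions from the baseline checkpoint and writing them to ./predictions.out (the paths are the illustrative values from the parameter list above):

python predict.py -n 4 -x baseline -c ./logs/baseline-ep4-500 -o ./predictions.out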

We predict sentence tuples from the conversation triples in ./data/Validation_Shuffled_Dataset.txt. The predictions are not printed on screen but written to disk (see the --output parameter).

Calculating perplexities from the model

Perplexities can be computed with perplexity.py. python perplexity.py -h shows which parameters we expect:

perplexity.py -n <num_cores> -x <experiment> -i <input file> -c <checkpoint>
num_cores = Number of cores requested from the cluster. Set to -1 to leave unset.
experiment = experiment setup that should be executed, e.g. 'baseline'
input = which dialogs to predict from, e.g. './Dialog_Triples.txt'
checkpoint = Path to the checkpoint to load parameters from, e.g. './logs/baseline-ep4-500'
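
For example, computing perplexities over a triple file with the baseline checkpoint (again using the illustrative paths from the parameter list above):

python perplexity.py -n 4 -x baseline -i ./Dialog_Triples.txt -c ./logs/baseline-ep4-500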

We also provide the ./run-test.sh script, which prints perplexities using our provided checkpoint final_checkpoint.ckpt.

Attributions
