
pytorch-seq2seq-intent-parsing's Introduction

PyTorch Seq2Seq Intent Parsing

Reframing intent parsing as a human-to-machine translation task. A work-in-progress successor to torch-seq2seq-intent-parsing.

The command language

This is a simple command language developed for Maia, the "home assistant" living in my apartment. She's designed as a collection of microservices covering lights (Hue), switches (WeMo), and information such as weather and market prices.

A command consists of a "service", a "method", and some number of arguments.

lights setState office_light on
switches getState teapot
weather getWeather "San Francisco"
price getPrice TSLA

These can be represented with variable placeholders:

lights setState $device $state
switches getState $device
weather getWeather $location
price getPrice $symbol
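
A minimal sketch (not from this repo) of how the commands and their placeholder templates could be represented in Python; the parse_command helper and the COMMAND_TEMPLATES dict are hypothetical names used only for illustration:

import shlex

# Placeholder templates keyed by (service, method)
COMMAND_TEMPLATES = {
    ("lights", "setState"): ["$device", "$state"],
    ("switches", "getState"): ["$device"],
    ("weather", "getWeather"): ["$location"],
    ("price", "getPrice"): ["$symbol"],
}

def parse_command(line):
    # Split into (service, method, args); shlex keeps quoted args like "San Francisco" together
    service, method, *args = shlex.split(line)
    return service, method, args

print(parse_command('weather getWeather "San Francisco"'))
# -> ('weather', 'getWeather', ['San Francisco'])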

We can imagine a bunch of human sentences that would map to a single command:

"Turn the office light on."
"Please turn on the light in the office."
"Maia could you set the office light on, thank you."

These sentences could similarly be represented with placeholders.
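
As a hypothetical sketch of what training pairs could look like (the exact data format here is illustrative, not the project's actual one), each pair maps a sentence to its command, first with concrete values and then with placeholders:

# Several surface forms map to the same concrete command
concrete_pairs = [
    ("Turn the office light on.", "lights setState office_light on"),
    ("Please turn on the light in the office.", "lights setState office_light on"),
    ("Maia could you set the office light on, thank you.", "lights setState office_light on"),
]

# The same mapping expressed with variable placeholders on both sides
templated_pairs = [
    ("Turn the $device $state.", "lights setState $device $state"),
]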

TODO: Specific vs. freeform variables

A shortcoming of the approach so far is that the model has to learn translations of specific values, for example mapping every spoken device name to its equivalent device_name token. If we added a "basement light", the model would have no basement_light in its output vocabulary unless it was re-trained.

The bigger the potential input space, the more obvious the problem: consider the getWeather command, where the model would need to be trained on every possible location we might ask about. Worse yet, consider a playMusic command that could take any song or artist name...

This can be solved with a technique that I have implemented in Torch here. The training pairs have "variable placeholders" in the output translation, which the model generates during an initial pass. Then the network fills in the values of these placeholders with an additional pass over the input.
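
A toy sketch of that two-pass idea (purely illustrative; the stand-in functions below fake what the seq2seq model and the placeholder-filling pass would actually do):

def decode_with_placeholders(sentence, seq2seq_decode, fill_placeholder):
    tokens = sentence.lower().split()
    # First pass: generate the command with placeholders still in it
    command = seq2seq_decode(tokens)  # e.g. ["weather", "getWeather", "$location"]
    # Second pass: fill each placeholder by looking back at the input
    filled = [fill_placeholder(tok, tokens) if tok.startswith("$") else tok
              for tok in command]
    return " ".join(filled)

# Toy stand-ins so the sketch runs on its own; the real model would use the
# encoder-decoder for the first pass and another pass over the input
# (e.g. attention/copying) to resolve each placeholder.
def seq2seq_decode(tokens):
    return ["weather", "getWeather", "$location"]

def fill_placeholder(placeholder, tokens):
    return tokens[-1].strip("?.")

print(decode_with_placeholders("What's the weather in Tokyo?", seq2seq_decode, fill_placeholder))
# -> weather getWeather tokyo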

pytorch-seq2seq-intent-parsing's People

Contributors

spro


pytorch-seq2seq-intent-parsing's Issues

Index 1 is out of range

Hi Sean,
Thanks for making this open source! I have been working on a similar implementation in tensorflow, and wanted to port to pytorch. To this end, I was checking out your project to try to learn how to do it, and I came upon the following error right away. Do you have an idea of what's going on here?
I am obviously very new to pytorch, so even basic suggestions could be helpful.
(Also note that you have to use torchtext 0.1.1 for this to work.)

output lang = DictionaryLang(size = 41)
  0%|          | 0/1193514 [00:00<?, ?it/s]Loading word vectors from /Users/thomas/Downloads/glove.twitter.27B/glove.twitter.27B.100d.txt
100%|██████████| 1193514/1193514 [03:48<00:00, 5215.09it/s]
input lang = GloVeLang(size = 100)
Training for 200 epochs...
Traceback (most recent call last):
  File "/Applications/PyCharm.app/Contents/helpers/pydev/pydevd.py", line 1596, in <module>
    globals = debugger.run(setup['file'], None, None, is_module)
  File "/Applications/PyCharm.app/Contents/helpers/pydev/pydevd.py", line 1023, in run
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "/Users/thomas/projects/apollo/torchtest/pytorch-seq2seq-intent-parsing/train.py", line 83, in <module>
    loss = train(input_variable, target_variable)
  File "/Users/thomas/projects/apollo/torchtest/pytorch-seq2seq-intent-parsing/train.py", line 48, in train
    decoder_output, decoder_hidden, decoder_attention = decoder(decoder_input, decoder_hidden, encoder_outputs)
  File "/Library/Python/2.7/site-packages/torch/nn/modules/module.py", line 224, in __call__
    result = self.forward(*input, **kwargs)
  File "/Users/thomas/projects/apollo/torchtest/pytorch-seq2seq-intent-parsing/model.py", line 59, in forward
    rnn_output, hidden = self.gru(word_embedded, last_hidden)
  File "/Library/Python/2.7/site-packages/torch/nn/modules/module.py", line 224, in __call__
    result = self.forward(*input, **kwargs)
  File "/Library/Python/2.7/site-packages/torch/nn/modules/rnn.py", line 162, in forward
    output, hidden = func(input, self.all_weights, hx)
  File "/Library/Python/2.7/site-packages/torch/nn/_functions/rnn.py", line 351, in forward
    return func(input, *fargs, **fkwargs)
  File "/Library/Python/2.7/site-packages/torch/nn/_functions/rnn.py", line 244, in forward
    nexth, output = func(input, hidden, weight)
  File "/Library/Python/2.7/site-packages/torch/nn/_functions/rnn.py", line 84, in forward
    hy, output = inner(input, hidden[l], weight[l])
  File "/Library/Python/2.7/site-packages/torch/autograd/variable.py", line 76, in __getitem__
    return Index.apply(self, key)
  File "/Library/Python/2.7/site-packages/torch/autograd/_functions/tensor.py", line 16, in forward
    result = i.index(ctx.index)
IndexError: index 1 is out of range for dimension 0 (of size 1)

Freeform variables

Hello Sean,

First of all, thank you so much for open-sourcing your code in torch7/pytorch. I have a simple question for you: when do you plan to port the freeform variables capability?

Thanks again

intent and slot joint detection

hey spro,

Thanks for sharing so many PyTorch tutorials for NLP; the code is clean and really helps.

I am wondering whether there is a plan for you to release joint intent and slot detection in this project?
If it's possible, that would be great.

Have a good day. :)
