rguthrie3 / deeplearningfornlpinpytorch

An IPython Notebook tutorial on deep learning for natural language processing, including structure prediction.

License: MIT License

Jupyter Notebook 100.00%
nlp pytorch deep-learning tutorial lstm neural-network

deeplearningfornlpinpytorch's Introduction

Table of Contents:

  1. Introduction to Torch's Tensor Library
  2. Computation Graphs and Automatic Differentiation
  3. Deep Learning Building Blocks: Affine maps, non-linearities, and objectives
  4. Optimization and Training
  5. Creating Network Components in Pytorch
     • Example: Logistic Regression Bag-of-Words text classifier
  6. Word Embeddings: Encoding Lexical Semantics
     • Example: N-Gram Language Modeling
     • Exercise: Continuous Bag-of-Words for learning word embeddings
  7. Sequence modeling and Long-Short Term Memory Networks
     • Example: An LSTM for Part-of-Speech Tagging
     • Exercise: Augmenting the LSTM tagger with character-level features
  8. Advanced: Dynamic Toolkits, Dynamic Programming, and the BiLSTM-CRF
     • Example: Bi-LSTM Conditional Random Field for named-entity recognition
     • Exercise: A new loss function for discriminative tagging

What is this tutorial?

I am writing this tutorial because, although there are plenty of other tutorials out there, they all seem to have one of three problems:

  • They have a lot of content on computer vision and conv nets, which is irrelevant for most NLP (although conv nets have been applied in cool ways to NLP problems).
  • Pytorch is brand new, so many deep learning for NLP tutorials target older frameworks, and usually not dynamic frameworks like Pytorch, which have a totally different flavor.
  • The examples don't move beyond RNN language models to show the awesome stuff you can do with linguistic structure prediction. I think this is a problem, because Pytorch's dynamic graphs make structure prediction one of its biggest strengths.

Specifically, I am writing this tutorial for a Natural Language Processing class at Georgia Tech, to ease students into a problem set I wrote for the class on deep transition parsing. The problem set uses some advanced techniques; the intention of this tutorial is to cover the basics, so that students can focus on the more challenging aspects of the problem set. The aim is to start with the basics and move up to linguistic structure prediction, which I feel is almost completely absent from other Pytorch tutorials. The general deep learning basics get short expositions; topics that are more NLP-specific get more in-depth discussion, although I have deferred to other sources where a full description would have meant reinventing the wheel and taking up too much space.

Dependency Parsing Problem Set

As mentioned above, here is the problem set that goes through implementing a high-performing dependency parser in Pytorch. I wanted to add a link here since it might be useful, provided you ignore the things that were specific to the class. A few notes:

  • There is a lot of code, so the beginning of the problem set was mainly to get people familiar with the way my code represented the relevant data, and the interfaces you need to use. The rest of the problem set is actually implementing components for the parser. Since we hadn't done deep learning in the class before, I tried to provide an enormous amount of comments and hints when writing it.
  • There is a unit test for every deliverable, which you can run with nosetests.
  • Since we use this problem set in the class, please don't publicly post solutions.
  • The same repo has some notes that include a section on shift-reduce dependency parsing, if you are looking for a written source to complement the problem set.
  • The link above might not work if it is taken down at the start of a new semester.

References:

  • I learned a lot about deep structure prediction at EMNLP 2016 from this tutorial on Dynet, given by Chris Dyer and Graham Neubig of CMU and Yoav Goldberg of Bar Ilan University. Dynet is a great package, especially if you want to use C++ and avoid dynamic typing. The final BiLSTM CRF exercise and the character-level features exercise are things I learned from this tutorial.
  • A great book on structure prediction is Linguistic Structure Prediction by Noah Smith. It doesn't use deep learning, but that is ok.
  • The best deep learning book I am aware of is Deep Learning, which is by some major contributors to the field and very comprehensive, although there is not an NLP focus. It is free online, but worth having on your shelf.

Exercises:

There are a few exercises in the tutorial, which are either to implement a popular model (CBOW) or to augment one of my models. The character-level features exercise in particular is non-trivial, but very useful (I can't quote the exact numbers, but I have run the experiment before, and the character-level features usually increase accuracy by 2-3%). Since they aren't simple exercises, I will soon implement them myself and add them to the repo.
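Until those solutions land, here is a minimal sketch of the CBOW idea, assuming a fixed-size context window; the class name, dimensions, and usage below are illustrative, not the repo's eventual solution:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CBOW(nn.Module):
    """Predict the center word from the sum of its context word embeddings."""

    def __init__(self, vocab_size, embedding_dim):
        super(CBOW, self).__init__()
        self.embeddings = nn.Embedding(vocab_size, embedding_dim)
        self.linear = nn.Linear(embedding_dim, vocab_size)

    def forward(self, context_idxs):
        # context_idxs: LongTensor of word indices for the surrounding words.
        embeds = self.embeddings(context_idxs).sum(dim=0, keepdim=True)
        return F.log_softmax(self.linear(embeds), dim=1)

# Example usage with a toy vocabulary of 10 words and four context words.
model = CBOW(vocab_size=10, embedding_dim=5)
log_probs = model(torch.tensor([1, 2, 4, 5]))
print(log_probs.shape)  # torch.Size([1, 10])
```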

Suggestions:

Please open a GitHub issue if you find any mistakes or think there is a particular model that would be useful to add.

deeplearningfornlpinpytorch's People

Contributors

rguthrie3


deeplearningfornlpinpytorch's Issues

.creator deprecated

The .creator attribute of the autograd.Variable class appears to have been renamed to .grad_fn in the newest release of Pytorch. Thanks for the great tutorial.
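For reference, a minimal snippet of the renamed attribute under a newer PyTorch release (the old spelling is shown in the comment):

```python
import torch
from torch.autograd import Variable

x = Variable(torch.ones(2, 2), requires_grad=True)
y = x + 2
# Older releases exposed the function that created y as y.creator;
# newer releases call it y.grad_fn.
print(y.grad_fn)
```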

NGram and CBOW implementation

I have a question regarding your implementation of the N-gram model (and, accordingly, of CBOW, which I adapted from it). According to the code presented, you are creating a two-layer perceptron whose second linear layer outputs a tensor of dimensions (out_dim, vocab_size). That is fine until you try to train the embeddings on a real corpus with a vocabulary of, say, 200 thousand tokens, which clogs CUDA RAM for good. I also cannot see how it makes sense to train the embeddings separately for each batch of several texts. Could you explain?
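For context, the model under discussion is roughly of this shape; this is a sketch from memory, so the hidden size and exact names may differ from the notebook:

```python
import torch.nn as nn
import torch.nn.functional as F

class NGramLanguageModeler(nn.Module):
    def __init__(self, vocab_size, embedding_dim, context_size):
        super(NGramLanguageModeler, self).__init__()
        self.embeddings = nn.Embedding(vocab_size, embedding_dim)
        self.linear1 = nn.Linear(context_size * embedding_dim, 128)
        # The second linear layer projects back onto the whole vocabulary,
        # which is the vocab_size-sized output the issue is asking about.
        self.linear2 = nn.Linear(128, vocab_size)

    def forward(self, inputs):
        embeds = self.embeddings(inputs).view((1, -1))
        out = F.relu(self.linear1(embeds))
        return F.log_softmax(self.linear2(out), dim=1)
```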

LSTM postagging example

I think you should also pass the hidden state and cell state, otherwise the forward method complains:
lstm_out, self.hidden = self.lstm(embeds.view(len(sentence), 1, -1), self.hidden)
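Spelled out in context, the tagger's forward pass would then look roughly like this; the class below is a self-contained sketch that follows the tutorial's structure, but attribute names and sizes may not match the notebook exactly:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LSTMTagger(nn.Module):
    def __init__(self, embedding_dim, hidden_dim, vocab_size, tagset_size):
        super(LSTMTagger, self).__init__()
        self.hidden_dim = hidden_dim
        self.word_embeddings = nn.Embedding(vocab_size, embedding_dim)
        self.lstm = nn.LSTM(embedding_dim, hidden_dim)
        self.hidden2tag = nn.Linear(hidden_dim, tagset_size)
        self.hidden = self.init_hidden()

    def init_hidden(self):
        # Shape (num_layers * num_directions, batch, hidden_dim) for h_0 and c_0.
        return (torch.zeros(1, 1, self.hidden_dim),
                torch.zeros(1, 1, self.hidden_dim))

    def forward(self, sentence):
        embeds = self.word_embeddings(sentence)
        # Pass the hidden and cell state explicitly, as suggested above.
        lstm_out, self.hidden = self.lstm(
            embeds.view(len(sentence), 1, -1), self.hidden)
        tag_space = self.hidden2tag(lstm_out.view(len(sentence), -1))
        return F.log_softmax(tag_space, dim=1)
```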

Official pytorch tutorials

Hi,

We've included your tutorial in the official PyTorch tutorials here.
It'll be useful for the community to have all the awesome tutorials in one place. I also think that HTML documentation is more approachable than a notebook.

The current page is almost exactly the same as your notebook. I've just raised a PR to break the tutorial into a few parts to make each bite-sized. Would you approve of such a reorganisation? Your feedback would be valuable.

You can contact me at [email protected]

Sasank.

Batch Data Loading and Processing

Any plan to convert some of the code base (like the LSTM/NER) to allow for minibatch processing?
Currently, all of them take one instance at a time. Any pointers in that regard would be useful as well.
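For anyone looking for a starting point, a common approach in newer PyTorch versions (not something this repo currently does) is to pad sentences to a common length and use the packed-sequence utilities; a minimal sketch:

```python
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pad_sequence, pack_padded_sequence, pad_packed_sequence

# Toy batch: three "sentences" of word indices with different lengths.
sentences = [torch.tensor([1, 2, 3, 4]), torch.tensor([5, 6]), torch.tensor([7, 8, 9])]
lengths = torch.tensor([len(s) for s in sentences])

embedding = nn.Embedding(10, 8)
lstm = nn.LSTM(8, 16, batch_first=True)

padded = pad_sequence(sentences, batch_first=True)          # (batch, max_len)
packed = pack_padded_sequence(embedding(padded), lengths,
                              batch_first=True, enforce_sorted=False)
packed_out, _ = lstm(packed)
out, _ = pad_packed_sequence(packed_out, batch_first=True)  # (batch, max_len, hidden)
print(out.shape)                                            # torch.Size([3, 4, 16])
```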

Bi-LSTM+CRF error

Hello, I tried to run the BiLSTM_CRF example. I added one example to training_data:
"""
training_data = [(
"the wall reported apple corporation made money".split(),
"B I O B I O O".split()
), (
"georgia tech is a university in georgia".split(),
"B I O O O O B".split()
), ("China".split(), 'B'.split())
]
"""
And then, when I run
"""
precheck_sent = prepare_sequence(training_data[2][0], word_to_ix)
print model(precheck_sent)
"""
I get an output of 3, which means START_TAG.

Then I tried to change the code of the function _viterbi_decode.
I changed
"""
terminal_var = forward_var + self.transitions[self.tag_to_ix[STOP_TAG]]
best_tag_id = argmax(terminal_var)
path_score = terminal_var[0][best_tag_id]
best_path = [best_tag_id]
"""
to
"""
terminal_var = forward_var + self.transitions[self.tag_to_ix[STOP_TAG]]
temp = terminal_var.data
temp[0][self.tag_to_ix[STOP_TAG]] = -10000
temp[0][self.tag_to_ix[START_TAG]] = -10000
temp = torch.autograd.Variable(temp)
best_tag_id = argmax(temp)
path_score = temp[0][best_tag_id]
best_path = [best_tag_id]
"""
Then I get the correct output, 1, which means 'B'.

I wonder why it produces a completely wrong answer, START_TAG. Thank you!
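One thing worth checking in a BiLSTM-CRF like this: a common way to rule out START_TAG (and STOP_TAG) predictions is to constrain the transition matrix at initialization rather than patching the decoder. Assuming the notebook's transitions and tag_to_ix names, that would be something along these lines inside the model's __init__:

```python
# transitions[i, j] is the score of transitioning *to* tag i *from* tag j,
# so make transitions into START_TAG and out of STOP_TAG effectively impossible.
self.transitions.data[tag_to_ix[START_TAG], :] = -10000
self.transitions.data[:, tag_to_ix[STOP_TAG]] = -10000
```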

Code format

I realise that some of the code breaks PEP 8. Mind if I fix it? Blame my OCD; reading the code makes me a little anxious.

Fails on 2nd last cell

model = BiLSTM_CRF(len(word_to_ix), tag_to_ix, EMBEDDING_DIM, HIDDEN_DIM)
optimizer = optim.SGD(model.parameters(), lr=0.01, weight_decay=1e-4)

Kindly fix

Jayanta/Kolkata/India

TypeError Traceback (most recent call last)
in ()
----> 1 model = BiLSTM_CRF(len(word_to_ix), tag_to_ix, EMBEDDING_DIM, HIDDEN_DIM)
2 optimizer = optim.SGD(model.parameters(), lr=0.01, weight_decay=1e-4)

in __init__(self, vocab_size, tag_to_ix, embedding_dim, hidden_dim)
27
28 self.word_embeds = nn.Embedding(vocab_size, embedding_dim)
---> 29 self.lstm = nn.LSTM(embedding_dim, hidden_dim/2, num_layers=1, bidirectional=True)
30
31 # Maps the output of the LSTM into tag space.

~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/rnn.py in __init__(self, *args, **kwargs)
370
371 def __init__(self, *args, **kwargs):
--> 372 super(LSTM, self).__init__('LSTM', *args, **kwargs)
373
374

~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/rnn.py in __init__(self, mode, input_size, hidden_size, num_layers, bias, batch_first, dropout, bidirectional)
37 layer_input_size = input_size if layer == 0 else hidden_size * num_directions
38
---> 39 w_ih = Parameter(torch.Tensor(gate_size, layer_input_size))
40 w_hh = Parameter(torch.Tensor(gate_size, hidden_size))
41 b_ih = Parameter(torch.Tensor(gate_size))

TypeError: torch.FloatTensor constructor received an invalid combination of arguments - got (float, int), but expected one of:

  • no arguments
  • (int ...)
    didn't match because some of the arguments have invalid types: (float, int)
  • (torch.FloatTensor viewed_tensor)
  • (torch.Size size)
  • (torch.FloatStorage data)
  • (Sequence data)

Check predictions before training

precheck_sent = prepare_sequence(training_data[0][0], word_to_ix)

precheck_tags = torch.LongTensor([ tag_to_ix[t] for t in training_data[0][1] ])

print(model(precheck_sent))
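The traceback points at the usual Python 3 true-division issue: hidden_dim / 2 produces a float, and nn.LSTM needs an integer hidden size. Assuming that is the cause here, the fix would be integer division in the line the traceback highlights:

```python
# hidden_dim / 2 is a float under Python 3; use // so nn.LSTM gets an int.
self.lstm = nn.LSTM(embedding_dim, hidden_dim // 2,
                    num_layers=1, bidirectional=True)
```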

Solution Timeline

Hi, it is very nice of you to write such a great tutorial. I found it through the official PyTorch tutorials, and I think it can help a lot of people. Do you have any timeline for posting solutions to the exercises? I find the exercises useful, but I am not sure whether my answers are right since NLP is not my area. It would be extremely nice if you could give simple but instructive solutions.

Thank you!

Pytorch tutorials exercises

Hi Robert,

I followed your Pytorch tutorials for NLP and implemented two of the exercises you proposed:
CBOW, and enriching the POS tagger's word embeddings with character-level features.

I would appreciate it if you could review or comment on my implementation, and perhaps discuss it.

Here is the repo's link: https://github.com/MokaddemMouna/Pytorch

Thanks.

I think you meant...

"Matrices and vectors are special cases of torch.Tensors, where their dimension is 1 and 2 respectively." I think you meant 2 and 1 respectively. Thanks

Python 2 to 3 Please

The very wonderful notebook is in Python 2

It is mostly a matter of the parentheses in print.

Can you please convert it to Python 3, and for those who prefer Python 2, add:
from __future__ import unicode_literals, print_function, division

Regards
Jayanta/Kolkata/India
