
bentrevett / pytorch-seq2seq


Tutorials on implementing a few sequence-to-sequence (seq2seq) models with PyTorch and TorchText.

License: MIT License

Jupyter Notebook 100.00%
pytorch seq2seq sequence-to-sequence tutorial rnn gru lstm torchtext pytorch-tutorial pytorch-implmention

pytorch-seq2seq's Introduction

PyTorch Seq2Seq

This repo contains tutorials covering understanding and implementing sequence-to-sequence (seq2seq) models using PyTorch, with Python 3.9. Specifically, we'll train models to translate from German to English.

If you find any mistakes or disagree with any of the explanations, please do not hesitate to submit an issue. I welcome any feedback, positive or negative!

Getting Started

Install the required dependencies with: pip install -r requirements.txt --upgrade.

We'll also make use of spaCy to tokenize our data, which requires installing both the English and German models with:

python -m spacy download en_core_web_sm
python -m spacy download de_core_news_sm
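
Once the models are installed, they can be loaded and used for tokenization; a minimal sketch (the example sentence is just for illustration):

import spacy

en_nlp = spacy.load("en_core_web_sm")
de_nlp = spacy.load("de_core_news_sm")

# tokenize a German sentence into a list of token strings
tokens = [token.text for token in de_nlp.tokenizer("Zwei junge Männer sind im Freien.")]
print(tokens)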

Tutorials

Legacy Tutorials

Previous versions of these tutorials used features from the torchtext library which are no longer available. These are stored in the legacy directory.

References

Here are some things I looked at while making these tutorials. Some of it may be out of date.

pytorch-seq2seq's People

Contributors

azadyasar, bentrevett, hawkeoni, helges, m-e-r-c-u-r-y, rakeshchada, statquest


pytorch-seq2seq's Issues

Attention mechanism

I see that there are many types of attention; for example, from the AllenNLP library we can import:
LinearAttention, CosineAttention, BilinearAttention, DotProductAttention, BahdanauAttention.
What type of attention do you use in the examples? I could probably find out by studying each of the above and comparing them with the attention implementations in the examples, but it would help if you could provide some information about your choices. Thanks a lot.

Inference in transformer seq2seq (ex. 6)

Hi!

I was working on the code of the last tutorial (the transformer), and I have some doubts about how to do the inference correctly.

def translate_sentence(model, tokenized_sentence):
    model.eval()
    # wrap the tokens with start/end-of-sequence markers
    tokenized_sentence = ['<sos>'] + [t for t in tokenized_sentence] + ['<eos>']
    # convert tokens to indices using the source vocabulary
    numericalized = [SRC.vocab.stoi[t] for t in tokenized_sentence]
    tensor = torch.LongTensor(numericalized).unsqueeze(1).to(device)
    translation_tensor_logits = model(tensor, None)  # this call raises the error
    translation_tensor = torch.argmax(translation_tensor_logits.squeeze(1), 1)
    # map predicted indices back to target tokens, dropping the first token
    translation = [TRG.vocab.itos[t] for t in translation_tensor]
    translation = translation[1:]
    return translation

I guess it is something like this, but it gives an error in the call to
model(tensor, None),
as it expects a target tensor as the second parameter.

Using the src tensor twice,
model(tensor, tensor),
returns an answer in the correct format, but I'm not sure about it and the output doesn't look very good (though that is probably because I only trained for 2 epochs and used small hyperparameters).

Can anyone help with this? Thanks!
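
For what it's worth, a minimal greedy-decoding sketch (an assumption, not the tutorial's own code): it feeds the tokens generated so far back in as the target at every step, so no ground-truth target is needed. It assumes the notebook's SRC, TRG, model and device globals, batch-first tensors as in the training loop (model(src, trg[:, :-1])), and that the model returns only the logits:

def translate_sentence_greedy(model, tokenized_sentence, max_len=50):
    model.eval()
    tokens = ['<sos>'] + [t for t in tokenized_sentence] + ['<eos>']
    src_indexes = [SRC.vocab.stoi[t] for t in tokens]
    src_tensor = torch.LongTensor(src_indexes).unsqueeze(0).to(device)       # [1, src len]
    trg_indexes = [TRG.vocab.stoi['<sos>']]
    for _ in range(max_len):
        trg_tensor = torch.LongTensor(trg_indexes).unsqueeze(0).to(device)   # [1, generated len]
        with torch.no_grad():
            output = model(src_tensor, trg_tensor)                            # [1, generated len, output dim]
        pred_token = output[:, -1, :].argmax(dim=-1).item()                   # most likely next token
        trg_indexes.append(pred_token)
        if pred_token == TRG.vocab.stoi['<eos>']:
            break
    return [TRG.vocab.itos[i] for i in trg_indexes[1:]]                       # drop the leading <sos>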

length of target in inference

Hello, thanks for your writing.
I have a question related to the length of the target sentence in your fourth notebook:
trg = torch.zeros((20, src.shape[1]), dtype=torch.long).fill_(self.sos_idx).to(src.device)

Why can the length be fixed to 20 in this case?

forward function in encoder

Thank you for your nice tutorial.

I have some questions.

As I understand it, the encoder should take both the input and the previous hidden state, but your encoder's forward method only takes the input data.
Could you explain this?
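
For context, a small self-contained sketch (not from the tutorial) of why no explicit hidden state needs to be passed: PyTorch's recurrent modules loop over the time dimension internally and default the initial hidden state to zeros when none is given:

import torch
import torch.nn as nn

rnn = nn.GRU(input_size=8, hidden_size=16)
src = torch.randn(5, 2, 8)                          # [src len, batch size, input size]

out_a, hidden_a = rnn(src)                          # no initial hidden state supplied
out_b, hidden_b = rnn(src, torch.zeros(1, 2, 16))   # explicit zero initial state
assert torch.allclose(out_a, out_b)                 # identical results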

found a small typo

Hi, bentrevett.
I really love your tutorials. These days I am carefully working through them to improve my NLP knowledge 👍

By the way, in tutorial 3 - Neural Machine Translation by Jointly Learning to Align and Translate, Seq2Seq section, I think there is a small typo.

This seq2seq encapsulator is similar to the last two. The only difference is that the encoder returns both the final hidden state (which is the final hidden state from both the forward and backward encoder RNNs passed through a linear layer) to be used as the initial hidden state for the encoder,

I think the word "encoder" should be replaced by "decoder", since the final hidden state of the encoder is used to initialize the decoder! Also, "Briefly going over all of the decoding steps:" is redundant, because the next sentence is also "Briefly going over all of the steps:".

Thanks for sharing your tutorial again.

Question on Tutorial 3 (rnn output explanation)

Hi,
the tutorial gives the following interpretation/explanation for the output of the RNN:
You can think of the third axis as being the forward and backward hidden states stacked on top of each other
But here I would like to ask for some help. I would understand this as a concatenation rather than as stacking. Are we saying the same thing in different words?
To me, stacking would be a kind of nesting, for example if we had [src sent len, batch size, hid dim, num directions] instead of [src sent len, batch size, hid dim * num directions]. Could you elaborate?
Thanks!
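
To make the wording concrete, a small sketch (shapes and names assumed) showing that the bidirectional output of shape [src len, batch size, hid dim * 2] can equivalently be viewed as [src len, batch size, 2, hid dim], i.e. the forward and backward states are concatenated along the feature axis and can be recovered by slicing:

import torch
import torch.nn as nn

src_len, batch_size, emb_dim, hid_dim = 7, 3, 8, 16
rnn = nn.GRU(emb_dim, hid_dim, bidirectional=True)
embedded = torch.randn(src_len, batch_size, emb_dim)

outputs, hidden = rnn(embedded)                               # [src len, batch, hid dim * 2]
per_direction = outputs.view(src_len, batch_size, 2, hid_dim)

# first hid_dim features are the forward states, last hid_dim the backward states
assert torch.equal(per_direction[:, :, 0, :], outputs[:, :, :hid_dim])
assert torch.equal(per_direction[:, :, 1, :], outputs[:, :, hid_dim:])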

multilayer decoder

For lesson 3, is it possible to have a tutorial for a multi-layer decoder? I am confused about how to pass the hidden and cell states of a multi-layer LSTM encoder into a multi-layer LSTM decoder.
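
A hedged sketch of one possible wiring (an assumption, not the tutorial's code): if the encoder and decoder share n_layers and hid_dim, the stacked per-layer hidden and cell states returned by the encoder LSTM can be passed straight in as the decoder's initial states:

import torch
import torch.nn as nn

n_layers, emb_dim, hid_dim, batch_size = 2, 8, 16, 3
enc_rnn = nn.LSTM(emb_dim, hid_dim, num_layers=n_layers)
dec_rnn = nn.LSTM(emb_dim, hid_dim, num_layers=n_layers)

src_emb = torch.randn(7, batch_size, emb_dim)      # [src len, batch, emb dim]
_, (hidden, cell) = enc_rnn(src_emb)               # hidden/cell: [n layers, batch, hid dim]

trg_emb = torch.randn(1, batch_size, emb_dim)      # one decoding step
output, (hidden, cell) = dec_rnn(trg_emb, (hidden, cell))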

Why is the attention a function of the encoder's output?

In the 3rd Jupyter notebook, the attention layer is a function of the encoder's output and the decoder's hidden state. This is despite the fact that both the reference article and the text explanation in the same notebook state that the attention is a function of both the encoder's and the decoder's hidden states:

Next up is the attention layer. This will take in the previous hidden state of the decoder, $s_{t-1}$, and all of the stacked forward and backward hidden states from the encoder, $H$.

Is it OK to use the encoder's output instead of its hidden state? If so, why not use the decoder's output instead of its hidden state?

Discussion about reversing the order of the source sentence

Hi, again thanks for the nice tutorial.

I ran the experiment both with the order of the source sentence reversed, as in your tutorial, and with the original order. It turned out that the original order gave better results on my data (I used the Japanese-English translation data at http://cs.stanford.edu/~rpryzant/jesc/).

Do you have any idea or intuition behind reversing the order of the sentence? I read the original paper and it seems to relate to the problem of 'minimal time lag', but I am not sure why it doesn't work on my data.

Thanks.

tutorial 1, equations for decoder

Thank you very much for this epic notebook. It helped me understand a lot of things, and I am still working through it. I have a question.
The equations for the decoder are:
$$\begin{align*}
(s_t^1, c_t^1) &= \text{DecoderLSTM}^1(y_t, (s_{t-1}^1, c_{t-1}^1)) \\
(s_t^2, c_t^2) &= \text{DecoderLSTM}^2(s_t^1, (s_{t-1}^2, c_{t-1}^2))
\end{align*}$$
but we say later on that: "This produces an output (hidden state from the top layer of the RNN), a new hidden state (one for each layer, stacked on top of each other) and a new cell state (also one per layer, stacked on top of each other)".
So, for the second decoder layer, I would expect $c_t^1$ to also be part of its input:
$s_t^1, c_t^1, (s_{t-1}^2, c_{t-1}^2)$

lesson 2 decoder impl output to linear

# feed the embedding + context and the hidden state to the RNN
output, hidden = self.rnn(emb_con, hidden)

Shouldn't the line below concatenate (embedding + output + context) instead of (embedding + hidden + context)? The hidden vector calculated by the RNN at time step t will be fed to the RNN at t+1 anyway.

output = torch.cat((embedded.squeeze(0), hidden.squeeze(0), context.squeeze(0)), dim=1)

prediction = self.out(output)

Convolutional Seq2Seq

In the attention-calculation step of the decoder:

#attention = [batch size, trg sent len, src sent len]
attended_encoding = torch.matmul(attention, (encoder_conved + encoder_combined))
#attended_encoding = [batch size, trg sent len, emd dim]

I think it should be

attended_encoding = torch.matmul(attention, encoder_combined)

since encoder_combined already contains the encoder embeddings plus the conved vectors (the encoder's last layer output), as mentioned in the paper:

The conditional input $c_i^l$ to the current decoder layer is a weighted sum of the encoder outputs as well as the input element embeddings $e_j$.

Also, according to my understanding of the figure in the paper, in the decoder step we should apply the residual connection before the attention, but in your code the residual connection is applied after the attention.

Please help me understand if I am missing something.

Lesson 3 (Attention Implementation)

Thanks for the great tutorials, this has really helped me understand lot of things in detail.

In the Encoder you put the hidden state through a fully connected layer with a tanh activation. Is this the expected behaviour? In the paper, this is done to obtain the initial hidden state of the Decoder, but if you do it in the Encoder, you end up calculating the energy in the attention layer in a different way than in the paper. I might be wrong; any elaboration here would be really helpful.

Thanks in advance
AB

CUDA out of memory

Hi, I implemented packed padded sequences, masking and inference, but I ran into CUDA out-of-memory problems. My dataset has about 50,000 sentences and I have already lowered the batch size to 1, but I still get a RuntimeError for running out of memory. Could you suggest some solutions? Thanks.

Multi Layer Decoder

Hi,
Really good tutorial!
I had one doubt though: can you please explain how you would construct a multi-layer decoder instead of a single-layer one? How would you pass the initial hidden state, and how would you incorporate attention in all the layers of the decoder?

Thank You

How to generate new unseen translations

Hi,

Once again, thanks for the tutorial. Very helpful.
I have a question about how to get an actual translation once you have finished training the model.
For example, if I want to input a new, unseen German phrase like "das ist das Ergebnis meines Modells.", how can I get "This is the result of my model."?
The Seq2Seq model always takes the real translation as input (src and trg), so what can I do if I don't have any target? Do I have to create a random one that won't be used in the evaluation?
And then, how do you eventually turn the output tensor back into words? I don't see any index-to-word dictionary that reverses the vocabulary built by src.build_vocab.

Thanks a lot for your answer.

lesson 3 attention

For the attention, why can't we just use a single linear layer for v? Does nn.Parameter make any difference?

lesson 1, calculating the loss between trg and output when training

Thanks for your tutorial, it's really helpful!
But I think I found something wrong in lesson 1, when calculating the loss between trg and output during training:
because you add <sos> and <eos> to trg, I think you should calculate the loss between the data in the two blue boxes in the picture, so it should be trg[1:] and output[:-1], right?
[screenshot attached]
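
For reference, a sketch of one consistent alignment (which, as far as I can tell, matches the notebook's train function): output[0] is the all-zeros placeholder and trg[0] is <sos>, so both tensors are sliced from index 1 before computing the loss:

output = model(src, trg)                 # [trg len, batch size, output dim]
output_dim = output.shape[-1]

# drop position 0 from both: output[0] is never filled in, trg[0] is <sos>
loss = criterion(output[1:].view(-1, output_dim), trg[1:].view(-1))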

notebook 5, padding in conv

In the encoder, we include padding inside the definition of each nn.Conv1d:
self.convs = nn.ModuleList([nn.Conv1d(in_channels=hid_dim, out_channels=2 * hid_dim, kernel_size=kernel_size, padding=(kernel_size - 1) // 2) for _ in range(n_layers)])
but in the decoder, we don't:
self.convs = nn.ModuleList([nn.Conv1d(hid_dim, 2 * hid_dim, kernel_size) for _ in range(n_layers)])
and we work around this with the following code:
# need to pad so the decoder can't "cheat"
padding = torch.zeros(conv_input.shape[0], conv_input.shape[1], self.kernel_size - 1).fill_(self.pad_idx).to(device)
padded_conv_input = torch.cat((padding, conv_input), dim=2)
Why do we treat these two cases differently?
Thanks again!
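
To illustrate the difference, a small sketch with assumed shapes: the encoder's symmetric padding keeps the output length equal to the input length, while the decoder pads only on the left, so the filter producing position t never covers tokens after t:

import torch
import torch.nn as nn

batch_size, hid_dim, trg_len, kernel_size, pad_value = 2, 8, 5, 3, 0.0
conv = nn.Conv1d(hid_dim, 2 * hid_dim, kernel_size)
conv_input = torch.randn(batch_size, hid_dim, trg_len)

# left-only ("causal") padding of kernel_size - 1 columns
left_pad = torch.full((batch_size, hid_dim, kernel_size - 1), pad_value)
padded = torch.cat((left_pad, conv_input), dim=2)    # [batch, hid dim, trg len + k - 1]

conved = conv(padded)                                # [batch, 2 * hid dim, trg len]
assert conved.shape[-1] == trg_len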

No way to generate output from models

The code of the Seq2Seq models depends on the presence of a target and does not work when the target argument to Seq2Seq.forward is None. This makes the code unsuitable for use at inference time.

The code could be modified to work without a target, but that requires modifying the Seq2Seq class and providing additional arguments, such as the maximum target sequence length and the SOS token.

Without such modifications, it is simply impossible to use the code to produce actual output when applying the seq2seq models.

Do we have to append <sos> and <eos> token to source sentence?

Hi, Ben Trevett,
I'm currently training a seq2seq-with-attention model based on your code.

And I got curious: do we really need the <sos> and <eos> tokens for the source sentence?
For the target sentence, they are absolutely needed.

But I think that, for the source sentence, they are just useless tokens which prevent the model from learning the input tokens well.
I'd like to hear your opinion!

Thanks in advance

Why GRU instead of LSTM when adding attention?

Hi Ben! First of all thank you for this well explained and incremental seq2seq tutorials, they have been really useful.

I was wondering why you used LSTMs in the first lesson, and then GRU layers when adding attention to the architecture. Is it just the computational efficiency of GRU cells, so that the tutorial runs faster, or is there some other problem with adding attention to LSTM layers?

If LSTM seq2seq models can be implemented with attention, should the attention be calculated using both the hidden and cell states of the LSTM encoder?

Gorka

No tracking of other metrics than loss/perplexity

This one is a more conceptual issue: when developing a seq2seq model (or neural network models in general), one should not rely only on the loss to judge the quality of the model (unless it's a regression task).

In these examples, using more appropriate metrics during evaluation (for example accuracy, if aiming for exact translation, BLEU score, or Levenshtein distance from the target) would be more illustrative and would show whether actual learning occurred.
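
For what it's worth, a small sketch of computing corpus-level BLEU with torchtext (assuming a version that still provides torchtext.data.metrics.bleu_score); the tokens here are made up for illustration:

from torchtext.data.metrics import bleu_score

# each candidate is a list of predicted tokens; each entry in references is a
# list of one or more reference translations, themselves token lists
candidates = [['two', 'men', 'are', 'talking', 'outside', '.']]
references = [[['two', 'men', 'are', 'talking', 'outside', '.'], ['two', 'men', 'talk', 'outside', '.']]]

print(f'BLEU = {bleu_score(candidates, references):.4f}')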

All Hidden States of the Encoder

Can you please explain how you got all the hidden states of the encoder? As far as I can tell, PyTorch's LSTM only returns the last hidden state (the context vector). My understanding is that you have just repeated the context vector seq_len times in the Attention class (correct me if I'm wrong):

hidden = hidden.unsqueeze(1).repeat(1, src_len, 1)
encoder_outputs = encoder_outputs.permute(1, 0, 2)

In PyTorch, the only solution I could find to get all the hidden states of the encoder is to apply LSTMCell seq_len times, but that's not very efficient. Is there any other way to get all the hidden states of the encoder?

Thank You.
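
For context, a small self-contained sketch (not from the tutorial): nn.LSTM already returns the hidden state of every time step as its first return value, while the second return value holds only the final step, so no per-step LSTMCell loop is needed; the repeat in the quoted code broadcasts a single decoder-side hidden state so it can be compared against every encoder position, not the encoder's final state:

import torch
import torch.nn as nn

rnn = nn.LSTM(input_size=8, hidden_size=16)
src = torch.randn(5, 2, 8)                 # [src len, batch size, emb dim]

outputs, (hidden, cell) = rnn(src)
print(outputs.shape)   # torch.Size([5, 2, 16]) -> one hidden state per source position
print(hidden.shape)    # torch.Size([1, 2, 16]) -> only the final hidden state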

How to do a single prediction?

Hey, I am new to using deep learning frameworks. Does anyone know how to use the model (for example from the 1st notebook) to make a single prediction?

arguments are located on different GPUs

Thank you for your helpful tutorial.
I tried to run the Seq2Seq model from the first tutorial on multiple GPUs by adding the following lines to the code:

if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model, device_ids=[0, 1], dim=1)

But I got the error "arguments are located on different GPUs", which is caused by the fully connected layer in the Decoder class.
Did you ever run this model on multiple GPUs, or do you think this is a PyTorch bug?

About seq2seq tutorial 2

In the forward method of the Seq2Seq class, why doesn't the encoder part have a loop?

To my understanding, the encoder cell should take an initial hidden state (usually all zeros) and the first input, and output a new hidden state for encoding the next word. It should be a loop over the source length, not counting the end-of-sequence token.

Thanks.

tutorial 4 [typo]

To use this model for inference, we simply pass a target sentence, trg, of None.
I think it should be "or None".

outputs[0] is the zeros tensor initialized by torch.zeros()

I have a question regarding the first element of the outputs (the prediction tensors).

In the Seq2Seq(nn.Module) class, outputs is initialized by calling outputs = torch.zeros(max_len, batch_size, trg_vocab_size).to(self.device). In the for loop right after the initialization, we start from index 1 of outputs and put the predicted tensor for each time step into outputs[t].

I am not sure whether I have explained it well, but shouldn't outputs[0] always correspond to the start token? I am asking because I have tried training the model and I get very weird output at outputs[0].

Question about the attention

Hello

I have a question about your attention code.

In the decoder code, you concatenate the embedded input and the context vector.
Is this how the original paper did it?
If not, could you explain why you concatenated them?

    rnn_input = torch.cat((embedded, context), dim=2)
    
    #rnn_input = [1, batch size, (enc hid dim * 2) + emb dim]
        
    output, hidden = self.rnn(rnn_input, hidden.unsqueeze(0))

Used seq2seq as autoencoder, first token not predicted right

Hi! Thank you for this repo! I am using it to create an autoencoder (i.e., mapping the input sequence to the same output sequence for sentences). However, the first output token is always predicted as the same thing: the first token in the dictionary. Has anyone run into this problem using this code, or does anyone have tips on how to debug it? I appreciate it!

How to add some functionalities?

Hi. I would like to add some features I have read about for beam search, like coverage penalty and length normalization, to your code, but I don't know where to start. Can you please help?

On the output layer and use of LSTM in the decoder (Lesson 1)

Hi,

First, Thank you for these clear and illustrative notebooks, and I really appreciate your efforts.

I have two questions regarding the decoder in Lesson 1:

  1. The final linear layer of the decoder projects the output of the LSTM, output = [sent len, batch size, hid dim * n directions], to a vocabulary-sized prediction, prediction = [batch size, output dim]. However, don't we need a softmax layer to normalize the probabilities over the whole vocabulary and choose the max as the predicted word? Why can we directly take the max after the linear layer? (See the sketch after these questions.)

  2. In the forward method of the decoder LSTM, the input is self.rnn(embedded, (hidden, cell)), where embedded = [1, batch size, emb dim] is a sequence of length 1. I understand that the forward pass of the decoder must be performed one time step at a time (as seen in the loop in the forward method of the Seq2Seq class). My question is: would there be any difference if we used PyTorch's LSTMCell class in the decoder? I am pretty sure they would do the same job.
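
Regarding question 1, a small check (not from the tutorial) that softmax is monotonic, so taking the argmax of the raw logits gives the same prediction as taking it after a softmax; the normalization is only needed for the loss, and nn.CrossEntropyLoss applies log-softmax internally:

import torch

logits = torch.randn(4, 10)                               # [batch size, output dim]
assert torch.equal(logits.argmax(dim=1),
                   torch.softmax(logits, dim=1).argmax(dim=1))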

Best,

Convolutional Sequence to Sequence

In both the encoder and the decoder there are linear layers that convert emb_dim to hid_dim and vice versa. There are similar linear layers for the attention too.

Can you please explain a little bit about why they are used? Basically, the intuition behind them.

Predicting on One Sentence whose Target is Not Known using Convolutional Seq2Seq

Suppose we are translating from French to English.

So in an RNN-based encoder-decoder model, we initialize the trg sentence as a string of tokens.

Can you please explain how this would be done in a convolution-based encoder-decoder model?

In an RNN-based model we feed the token predicted at time step t-1 to step t of the decoder to get the token at t. I am a little confused about how this is done in a convolution-based decoder.

RuntimeError: Invalid device, must be cuda device

python3 seq2seq-nn.py
Number of training examples: 29000
Number of validation examples: 1014
Number of testing examples: 1000
{'src': ['.', 'büsche', 'vieler', 'nähe', 'der', 'in', 'freien', 'im', 'sind', 'männer', 'weiße', 'junge', 'zwei'], 'trg': ['two', 'young', ',', 'white', 'males', 'are', 'outside', 'near', 'many', 'bushes', '.']}
Unique tokens in source (de) vocabulary: 7853
Unique tokens in target (en) vocabulary: 5893
The model has 13,898,501 trainable parameters
Traceback (most recent call last):
File "seq2seq-nn.py", line 304, in
train_loss = train(model, train_iterator, optimizer, criterion, CLIP)
File "seq2seq-nn.py", line 222, in train
for i, batch in eiterator:
File "/home/vivien/.local/lib/python3.6/site-packages/torchtext/data/iterator.py", line 180, in iter
self.train)
File "/home/vivien/.local/lib/python3.6/site-packages/torchtext/data/batch.py", line 22, in init
setattr(self, name, field.process(batch, device=device, train=train))
File "/home/vivien/.local/lib/python3.6/site-packages/torchtext/data/field.py", line 187, in process
tensor = self.numericalize(padded, device=device, train=train)
File "/home/vivien/.local/lib/python3.6/site-packages/torchtext/data/field.py", line 316, in numericalize
arr = arr.cuda(device)
RuntimeError: Invalid device, must be cuda device

notebook 5, pos_embeddings, magic number 100

In the encoder and decoder, we define pos_embedding using size 100:
self.pos_embedding = nn.Embedding(100, emb_dim)
Why do we choose 100? Is it the maximum length of a sentence across all batches?
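
For context, a small sketch (the 100 is treated here as an assumed cap on sentence length, not something derived from the data): the positional embedding table has one row per position index, so any position index of 100 or more would be out of range:

import torch
import torch.nn as nn

max_length, emb_dim = 100, 32
pos_embedding = nn.Embedding(max_length, emb_dim)

pos = torch.arange(0, 100).unsqueeze(0)     # position indices 0..99
print(pos_embedding(pos).shape)             # torch.Size([1, 100, 32])
# pos_embedding(torch.tensor([100])) would raise an IndexError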

Decoding in conv seq2seq

In convolutional sequence-to-sequence learning, the decoder input should be masked or the convolution kernel should be causal; the model must not see future information when doing incremental decoding.

I had problems dealing with teacher forcing in conv seq2seq, so I came to this repo for help. Will a full inference function be implemented in the future? Thanks!

forward function in encoder

Thank you for your nice tutorial.

I just have one question.

As I understand it, the encoder should take both the input and the previous hidden state, but your encoder only takes the input data.

Could you explain this?

Why is training on the WMT'14 dataset so slow?

I am trying to train on the WMT'14 dataset with a 2080 Ti. It can only run with batch size = 2, otherwise it goes out of memory, and training is too slow, about 20 hours per epoch. I don't know how to deal with this problem; could you give me some advice? Thanks.

Notation in creating the context vector

Hi, thanks for your writing.

In the Neural Machine Translation by Jointly Learning to Align and Translate notebook, while creating the context vector Z you said the equation is:
[screenshot of the equation]

Previously, for stacking, you used the following:

[screenshot of the equation]

Do you mean taking the final forward hidden state (h_T, computed after seeing input x_T) and the final backward hidden state (h_T, computed after seeing input x_1),
or the final forward hidden state (h_T, computed after seeing input x_T) and the initial backward hidden state (h_1, computed after seeing input x_T)?

notebook 6, type error

While running notebook 6, I am getting the following error. Please help me resolve it.

Error:

RuntimeError Traceback (most recent call last)
in ()
13 start_time = time.time()
14
---> 15 train_loss = train(model, train_iterator, optimizer, criterion, CLIP)
16 valid_loss = evaluate(model, valid_iterator, criterion)
17

in train(model, iterator, optimizer, criterion, clip)
14 print ("----")
15 print (src.type(), trg[:,:-1].type())
---> 16 output = model(src, trg[:,:-1])
17 print (output.type)
18 print ("+++++")

/opt/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
489 result = self._slow_forward(*input, **kwargs)
490 else:
--> 491 result = self.forward(*input, **kwargs)
492 for hook in self._forward_hooks.values():
493 hook_result = hook(self, input, result)

in forward(self, src, trg)
32 src_mask, trg_mask = self.make_masks(src, trg)
33
---> 34 enc_src = self.encoder(src, src_mask)
35
36 #enc_src = [batch size, src sent len, hid dim]

/opt/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
489 result = self._slow_forward(*input, **kwargs)
490 else:
--> 491 result = self.forward(*input, **kwargs)
492 for hook in self._forward_hooks.values():
493 hook_result = hook(self, input, result)

in forward(self, src, src_mask)
31 pos = torch.arange(0, src.shape[1]).unsqueeze(0).repeat(src.shape[0], 1).to(self.device)
32
---> 33 src = self.do((self.tok_embedding(src) * self.scale) + self.pos_embedding(pos))
34
35 #src = [batch size, src sent len, hid dim]

/opt/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
489 result = self._slow_forward(*input, **kwargs)
490 else:
--> 491 result = self.forward(*input, **kwargs)
492 for hook in self._forward_hooks.values():
493 hook_result = hook(self, input, result)

/opt/anaconda3/lib/python3.6/site-packages/torch/nn/modules/sparse.py in forward(self, input)
106 return F.embedding(
107 input, self.weight, self.padding_idx, self.max_norm,
--> 108 self.norm_type, self.scale_grad_by_freq, self.sparse)
109
110 def extra_repr(self):

/opt/anaconda3/lib/python3.6/site-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
1074 with torch.no_grad():
1075 torch.embedding_renorm_(weight, input, max_norm, norm_type)
-> 1076 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
1077
1078

RuntimeError: Expected tensor for argument #1 'indices' to have scalar type Long; but got CUDAFloatTensor instead (while checking arguments for embedding)
