Comments (2)
My only intuition is the same as the paper's: reversing the order of the input sentence reduces the path length of information between the beginnings of the two sentences.
Note that tutorials 2 and 3 do not reverse their input sentences, potentially because their improved architectures/models are able to cope with this longer path of information. That is just a guess, though, and I don't think I tried those models with reversed input sentences.
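For reference, the reversal itself is just a matter of flipping the token list inside the tokenizer, similar to what tutorial 1 does (a minimal sketch; `str.split` stands in here for the spaCy tokenizer used in the tutorials):

```python
def tokenize_reversed(text):
    # Tokenize (str.split stands in for the spaCy tokenizer used in the
    # tutorials) and reverse the token order, as in Sutskever et al. (2014).
    return text.split()[::-1]

tokens = tokenize_reversed("ich liebe dich")  # -> ["dich", "liebe", "ich"]
```

Only the source side is reversed; the target sentence is left in its original order.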
I don't believe I've seen a study on which languages benefit from reversing input sentences; it's definitely something that would be worth researching.
Have you tried comparing the reversed and original results across a few random seeds to make sure you consistently get better results for the original order?
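One way to do that comparison is to wrap training in a small multi-seed loop and look at the mean and spread of the metric. This is only a sketch: `train_and_evaluate` is a hypothetical function that seeds everything (Python, NumPy, PyTorch) and returns a scalar metric such as test BLEU.

```python
import random
from statistics import mean, stdev

def compare_over_seeds(train_and_evaluate, seeds=(0, 1, 2, 3, 4)):
    # train_and_evaluate(seed) is a hypothetical callable: it should seed
    # all RNGs (random, numpy, torch), train the model, and return a
    # scalar metric such as test-set BLEU.
    scores = [train_and_evaluate(seed) for seed in seeds]
    return mean(scores), stdev(scores)

# Stand-in metric function for demonstration only:
fake_run = lambda seed: 25.0 + random.Random(seed).uniform(-1, 1)
m, s = compare_over_seeds(fake_run)
```

Running this once with the original order and once with reversed inputs gives two (mean, stdev) pairs that are much more trustworthy than a single-seed comparison.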
from pytorch-seq2seq.
I just realised that in my data (Japanese-English), the last word of the source sentence is the verb, which carries quite important information. That's probably why reversing the order doesn't really work here: it moves that important information even further away.
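That effect can be illustrated by counting encoder steps between the source verb and the start of decoding (a toy example with hypothetical romanised tokens; in a verb-final language the verb sits at the end, so reversal pushes it to the far side of the encoder):

```python
# Toy Japanese sentence: "watashi wa ringo o tabeta" ("I ate an apple").
# The verb "tabeta" comes last, right next to where decoding starts.
src = ["watashi", "wa", "ringo", "o", "tabeta"]

def steps_to_decoder(tokens, word):
    # Rough number of encoder steps from `word` to the decoder's first step.
    return len(tokens) - tokens.index(word)

original_dist = steps_to_decoder(src, "tabeta")        # verb last  -> 1 step
reversed_dist = steps_to_decoder(src[::-1], "tabeta")  # verb first -> 5 steps
```

So for verb-final source languages, reversal lengthens rather than shortens the path for the most informative token.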
Have you tried comparing the reversed and original results across a few random seeds to make sure you consistently get better results for the original order?
Will do!
Thanks again!