Comments (6)
Hey! Sorry, I'm not sure I understand: dropout should be inactive with model.eval(), no?
Hi @ArthurZucker!
Yes, this is correct. The point I was making is that during training, the dropout applied to the attention weights also rescales the surviving weights by 1 / (1 - p), the inverse of the keep probability.
For a standard layer, I understand this keeps the output at a similar magnitude between training and evaluation.
In an attention layer, however, it causes the attention weights to no longer sum to 1.
During inference, by contrast, dropout is a no-op, so the attention weights do sum to 1, and I think this train/test discrepancy can cause trouble.
It is as if the network is always running inference on slightly out-of-distribution samples.
Not sure if I explained it better now!
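In code, this is what I mean (a minimal repro; the shapes and the dropout probability are just illustrative):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy attention weights for one query over 8 keys.
scores = torch.randn(8)
attn = F.softmax(scores, dim=-1)
print(attn.sum())  # tensor(1.), the softmax output sums to 1

# Training mode: dropout zeroes some weights and rescales the survivors
# by 1 / (1 - p), so the "probabilities" no longer sum to 1.
print(F.dropout(attn, p=0.1, training=True).sum())  # != 1 in general

# Eval mode: dropout is a no-op, so the weights sum to 1 again.
print(F.dropout(attn, p=0.1, training=False).sum())  # tensor(1.)
```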
That's for sure, but all models are trained that way 😄
I had never thought about this, but then dropout in general would be bad for inference. Feel free to run some benchmarks, I'm curious!
Yes, I have also noticed this problem and put together a notebook to demonstrate what is happening: https://colab.research.google.com/drive/10f5pqC4XO5grmP1soT-Yh12-JOFg_i3w?usp=sharing
Because of dropout's behaviour in training, it scales up the softmax outputs, so the probabilities sum to less than or more than 1.0, not exactly 1.0, during training. At test/inference time this behaviour is not seen, because dropout becomes a no-op during inference and all probabilities add up to 1.0. I think this might be a problem, but I haven't seen it addressed systematically anywhere. It is not new and has been discussed before: pytorch/pytorch#42929
The way I'd solve this is to apply dropout before running softmax, so that after softmax the probabilities still add up to 1.0, as sketched below.
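Here is a minimal sketch of that idea (hypothetical code, not what transformers currently does; note that a dropped score becomes 0, not -inf, so it still contributes exp(0) = 1 to the softmax rather than being masked out):

```python
import torch
import torch.nn.functional as F

def attn_probs_dropout_before_softmax(scores, p, training):
    # Drop (and rescale) raw attention scores *before* the softmax...
    scores = F.dropout(scores, p=p, training=training)
    # ...so the normalized weights sum to 1 in both train and eval mode.
    return F.softmax(scores, dim=-1)

probs = attn_probs_dropout_before_softmax(torch.randn(8), p=0.1, training=True)
print(probs.sum())  # tensor(1.) whether training is True or False
```

A variant would be to set the dropped scores to -inf instead of 0, which removes those keys from the distribution entirely; either way the output still sums to 1.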
TBH, if you want, you can open a PR to see whether this improves the performance of, say, Llama3 on MMLU! That would be relevant for deciding whether or not this has potential impact!
> TBH, if you want, you can open a PR to see whether this improves the performance of, say, Llama3 on MMLU! That would be relevant for deciding whether or not this has potential impact!
Just changing the code and running inference probably won't help, and will most likely make things worse, since the model was trained in a specific way and inference should keep to it. In my mind, the only (?) way to actually test this theory is to train two models and compare them on specific benchmarks. I lack the GPU resources to do so, though. @ArthurZucker, if you have some resources, I'm happy to send a PR and you could help me validate this?