Comments (8)
cc @ydshieh
from transformers.
I can no longer reproduce this after restarting my notebook kernel. I suppose the PyTorch detection relies on some system environment state that requires a notebook kernel restart to pick up. Feel free to close this.
It seems that `_torch_available` is a global variable, set at initialization time: https://github.com/huggingface/transformers/blob/02300273e220932a449a47ebbe453e7789be454b/src/transformers/utils/import_utils.py#L180C1-L180C17
So it won't be re-evaluated in the same notebook kernel after it has been evaluated once (until I restart my kernel). One improvement I can think of is to re-evaluate this variable in `is_torch_available()`, but I don't know whether that would break other things.
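For context, a minimal sketch of this initialization-time pattern (the names follow the ones mentioned above; the actual transformers implementation may differ in detail):

```python
import importlib.util

# Evaluated once, when the module is first imported. Re-running the
# import statement in the same kernel does not re-execute this line,
# because Python caches the already-imported module in sys.modules.
_torch_available = importlib.util.find_spec("torch") is not None

def is_torch_available() -> bool:
    # Returns the cached result; never re-checks the environment.
    return _torch_available
```

This is why installing torch after the first `import transformers` has no effect until the kernel restarts: the flag was frozen at import time.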
> So it won't be re-evaluated in the same notebook kernel after it has been evaluated once (until I restart my kernel).

Does this mean that torch is installed in the notebook after importing transformers?
In general, for notebooks (well, I usually use Google Colab), if there are new installations, it is recommended to restart the kernel.
> re-evaluate this variable in `is_torch_available()`

We make use of `_torch_available` (and similar ideas for many of the other `is_xxx_available` checks) because we try to avoid re-evaluation, which might slow things down a lot.

I am open to a PR that improves things (the mentioned issue) while keeping things fast (regarding what I mentioned above).
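One way to reconcile the two goals (a hypothetical sketch, not the transformers implementation) is to keep the cached flag but re-check the environment only when the cached answer is negative, so the common case, where the package is already present, stays a single variable read:

```python
import importlib.util

_torch_available = importlib.util.find_spec("torch") is not None

def is_torch_available() -> bool:
    global _torch_available
    if not _torch_available:
        # Only a negative result is re-checked: if torch was pip-installed
        # after this module was imported, the next call picks it up.
        # invalidate_caches() lets the path finders see packages installed
        # within the same interpreter session.
        importlib.invalidate_caches()
        _torch_available = importlib.util.find_spec("torch") is not None
    return _torch_available
```

A positive result is still cached for the rest of the session, so the hot path is unchanged; the extra cost is paid only while the package is genuinely missing.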
@amyeroberts torch is installed in the notebook kernel (`!pip install ...`) and I re-executed the block to import transformers multiple times, to no avail.
> torch is installed in the notebook kernel (`!pip install ...`) and I re-executed the block to import transformers multiple times, to no avail.
@dannikay Just to be clear, this means that transformers was already imported, torch installed, then the cell to import transformers re-executed?
As @ydshieh notes above, we have a cache, which means `is_torch_available` is executed once and its result stored within a Python session. This helps speed things up within the transformers library: we have lots of `is_xxx_available` flags which enable us to safely guard for different framework and modality usage, e.g. PyTorch vs TensorFlow.
If you or anyone else wants to submit a PR which would make this more dynamic whilst maintaining speed, we'd be very happy to review!
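As an illustration of the guarding pattern described above (the names `describe_backends` and `require_backend` here are hypothetical helpers, not transformers APIs):

```python
import importlib.util

# Availability flags computed once per session, as in the cached design.
_torch_available = importlib.util.find_spec("torch") is not None
_tf_available = importlib.util.find_spec("tensorflow") is not None

def describe_backends() -> dict:
    # Cheap dictionary lookups at call time; no repeated filesystem scans.
    return {"pytorch": _torch_available, "tensorflow": _tf_available}

def require_backend(name: str) -> None:
    # Guard run before entering a framework-specific code path.
    available = describe_backends()
    if not available.get(name, False):
        raise ImportError(f"{name} is required but not installed")
```

The trade-off discussed in this thread is exactly here: because the flags are computed once, the guards are essentially free, but they cannot notice a package installed later in the same session.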
@amyeroberts correct.