Comments (5)
@M-Fannilla this one isn't related to the model checkpoint, but rather to the packages installed in your environment. It looks like flash-attn wasn't installed properly or has a dependency conflict. Try uninstalling flash-attn and loading the model again (see the sketch below).
If that doesn't work, feel free to open a new issue :)
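A minimal sketch of that workaround, assuming the llava-hf/llava-1.5-7b-hf checkpoint (substitute your own model id): after pip uninstall -y flash-attn, load the model with SDPA attention so flash-attn is never imported.

from transformers import LlavaForConditionalGeneration, AutoProcessor

model_id = "llava-hf/llava-1.5-7b-hf"  # assumed checkpoint; replace with your own
processor = AutoProcessor.from_pretrained(model_id)
# Request SDPA explicitly so transformers does not try to import flash-attn
model = LlavaForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype="auto",
    attn_implementation="sdpa",
)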
@M-Fannilla hi, I just updated the llava weights on the hub, which caused the error. I will revert the changes soon.
@zucchini-nlp Great, Thanks!
@M-Fannilla should be working now, closing the issue as resolved!
There is a new issue:
Failed to import transformers.models.llama.modeling_llama because of the following error (look up to see its traceback):
/usr/local/lib/python3.10/dist-packages/flash_attn_2_cuda.cpython-310-x86_64-linux-gnu.so: undefined symbol: _ZN2at4_ops5zeros4callEN3c108ArrayRefINS2_6SymIntEEENS2_8optionalINS2_10ScalarTypeEEENS6_INS2_6LayoutEEENS6_INS2_6DeviceEEENS6_IbEE
I did not have this one before.
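That undefined symbol usually means the flash-attn binary was compiled against a different torch version than the one currently installed, so the quickest check is whether flash_attn imports at all. A small diagnostic sketch (the pip commands in the comments assume a pip-managed environment):

import torch

print("torch:", torch.__version__)  # flash-attn must be built against this exact torch
try:
    import flash_attn  # noqa: F401
    print("flash-attn imports cleanly:", flash_attn.__version__)
except ImportError as err:
    # An "undefined symbol" ImportError here means the compiled extension
    # does not match the installed torch ABI.
    print("flash-attn is broken:", err)
    # Rebuild it against the current torch:
    #   pip uninstall -y flash-attn
    #   pip install flash-attn --no-build-isolation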
Related Issues (20)
- xpu device is not used running pipeline(device_map="auto") HOT 3
- Allow additional keyword args to be passed to optuna hyperparameter search
- Is there any way to update the parameters of embedding model? HOT 2
- A bug that may cause device inconsistency HOT 4
- Gemma 2 Inference with BF16 fails HOT 5
- cuda device is wrongly requested instead of xpu running pipeline(device_map="auto", max_memory={0: 1.0e+10}) HOT 5
- Incorrect Whisper long-form decoding timestamps HOT 4
- Very different output depending on whether an attention mask is passed when using caching HOT 3
- `last_hidden_state` has a different shape than `hidden_states[-1]` in the output of `SeamlessM4Tv2SpeechEncoder` if adapter layers are present HOT 6
- [GroundingDino] - GroundingDinoProcessor kwargs is Broken HOT 2
- Flash Attention with Gemma 2 HOT 11
- FX tracer doesn't work when requesting non-default input argument HOT 2
- Keep Tuple of past key values as an option HOT 9
- How to manually stop the LLM output? HOT 2
- Pipeline's "num_return_sequences" greater than 1 causes a runtime error with Gemma-2-9B. HOT 6
- WavLM returns empty hidden states when loaded directly to GPU HOT 1
- "TypeError: Object of type device is not JSON serializable" when saving the model on TPU HOT 6
- Add Depth Anything v2 metric depth HOT 6
- `attention_mask` must be in the same device as model? HOT 1
- `Gemma2Model` not returning cache HOT 8