Comments (12)
Hi @humanely, thanks for raising this issue!
Indeed, the issue here is that the script was written to use the tokenizer and image processor separately, so the two objects are never tied together into a processor.
I've opened a PR to address this - #30720
@humanely If you have a saved tokenizer and image processor, you can load them, create a processor from the pair, and then save that out. The processor can then be loaded using the normal API:

```python
from transformers import AutoProcessor, AutoImageProcessor, AutoTokenizer, CLIPProcessor

tokenizer = AutoTokenizer.from_pretrained(training_args.output_dir)
image_processor = AutoImageProcessor.from_pretrained(training_args.output_dir)
processor = CLIPProcessor(tokenizer=tokenizer, image_processor=image_processor)

# Save out the processor
processor.save_pretrained(training_args.output_dir)

# Now you have a processor you can load
new_processor = AutoProcessor.from_pretrained(training_args.output_dir)
```
> Is there a reason for keeping CLIP based off Slow and not Fast?

I'm not sure where this assumption is coming from; a fast CLIP tokenizer, CLIPTokenizerFast, exists.
Thanks @amyeroberts.
Is there an alternative for now? I tried loading legacy tokenizers (with merges and vocab files), which works fine. But loading tokenizer.json (the fast version) doesn't. The error is:

```
ValueError: The `backend_tokenizer` provided does not match the expected format. The CLIP tokenizer
has been heavily modified from transformers version 4.17.0. You need to convert the tokenizer you
are using to be compatible with this version. The easiest way to do so is
`CLIPTokenizerFast.from_pretrained("path_to_local_folder_or_hub_repo", from_slow=True)`. If you want
to use your existing tokenizer, you will have to revert to a version prior to 4.17.0 of transformers.
```
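The conversion the error suggests can be done once and the result saved, so that later loads pick up the regenerated fast files. A minimal sketch, assuming the folder already contains the slow vocab.json and merges.txt:

```python
from transformers import CLIPTokenizerFast

# Rebuild the fast tokenizer from the slow (vocab.json + merges.txt) files,
# then save it so future loads use the converted tokenizer.json.
tokenizer = CLIPTokenizerFast.from_pretrained("path_to_local_folder_or_hub_repo", from_slow=True)
tokenizer.save_pretrained("path_to_local_folder_or_hub_repo")
```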
I get this error with this approach:

```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/prabhatkr/miniforge-pypy3/envs/clip/lib/python3.12/site-packages/transformers/models/clip/processing_clip.py", line 59, in __init__
    super().__init__(image_processor, tokenizer)
  File "/home/prabhatkr/miniforge-pypy3/envs/clip/lib/python3.12/site-packages/transformers/processing_utils.py", line 96, in __init__
    raise ValueError(
ValueError: Received a PreTrainedTokenizerFast for argument tokenizer, but a ('CLIPTokenizer', 'CLIPTokenizerFast') was expected.
```
I also tried with GPT2Tokenizer, and a similar error occurred:

```python
>>> from transformers import GPT2Tokenizer
>>> tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
>>> loc = "/Users/home/clip-model"
>>> from transformers import AutoProcessor, AutoImageProcessor, AutoTokenizer, CLIPProcessor
>>> image_processor = AutoImageProcessor.from_pretrained(loc)
>>> processor = CLIPProcessor(tokenizer=tokenizer, image_processor=image_processor)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/opt/homebrew/anaconda3/lib/python3.11/site-packages/transformers/models/clip/processing_clip.py", line 59, in __init__
    super().__init__(image_processor, tokenizer)
  File "/opt/homebrew/anaconda3/lib/python3.11/site-packages/transformers/processing_utils.py", line 96, in __init__
    raise ValueError(
ValueError: Received a GPT2Tokenizer for argument tokenizer, but a ('CLIPTokenizer', 'CLIPTokenizerFast') was expected.
```
This worked, but I am not sure if it is valid: huggingface/tokenizers#521

Basically, load the tokenizer JSON and save it in the legacy format:

```python
from tokenizers import Tokenizer

output_dir = "legacy_tokenizer"  # placeholder output folder

tokenizer = Tokenizer.from_file("byte-level-bpe.tokenizer.json")
tokenizer.model.save(output_dir)
```
But this only gives vocab.json and merges.txt. To get the two further files, special_tokens_map.json and tokenizer_config.json: load the tokenizer JSON in AutoTokenizer, save to a separate directory, and take only the two necessary files from there, as sketched below.
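A minimal sketch of that second step (directory names are placeholders; PreTrainedTokenizerFast is used here to wrap the raw tokenizer.json directly):

```python
from transformers import PreTrainedTokenizerFast

# Wrap the raw tokenizer.json in a fast tokenizer object...
tok = PreTrainedTokenizerFast(tokenizer_file="byte-level-bpe.tokenizer.json")
# ...and save it out; this writes tokenizer_config.json and
# special_tokens_map.json alongside tokenizer.json.
tok.save_pretrained("tmp_tokenizer_dir")
```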
Is this the correct approach in your opinion, @amyeroberts?

Also, I would suggest upgrading the CLIPTokenizer class to use the fast, new type of tokenizers out of the box.
Some more legacy issues exist in the Processor:

```python
>>> inputs = processor(text=["cat", "dog"], images=image, return_tensors="pt", padding=True)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/opt/homebrew/anaconda3/lib/python3.11/site-packages/transformers/models/clip/processing_clip.py", line 106, in __call__
    encoding = self.tokenizer(text, return_tensors=return_tensors, **tokenizer_kwargs)
  File "/opt/homebrew/anaconda3/lib/python3.11/site-packages/transformers/tokenization_utils_base.py", line 2858, in __call__
    encodings = self._call_one(text=text, text_pair=text_pair, **all_kwargs)
  File "/opt/homebrew/anaconda3/lib/python3.11/site-packages/transformers/tokenization_utils_base.py", line 2944, in _call_one
    return self.batch_encode_plus(
  File "/opt/homebrew/anaconda3/lib/python3.11/site-packages/transformers/tokenization_utils_base.py", line 3135, in batch_encode_plus
    return self._batch_encode_plus(
  File "/opt/homebrew/anaconda3/lib/python3.11/site-packages/transformers/tokenization_utils_fast.py", line 504, in _batch_encode_plus
    encodings = self._tokenizer.encode_batch(
Exception: Unk token `<|endoftext|>` not found in the vocabulary
```

Whereas the tokenizer does define an unknown token:

```json
"added_tokens": [
  {
    "id": 0,
    "content": "[UNK]",
    "single_word": false,
    "lstrip": false,
    "rstrip": false,
    "normalized": false,
    "special": true
  },
  ...
```

Legacy tokenizers have `<|endoftext|>` in their vocabulary, but the new type does not.
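One possible workaround, since the CLIP tokenizer classes accept their special tokens as keyword arguments: override the expected `<|endoftext|>` unknown token with the `[UNK]` token this vocabulary actually contains. A sketch (the path is a placeholder, and whether the override reaches the backend BPE model may depend on the transformers version):

```python
from transformers import CLIPTokenizerFast

# Override CLIP's default unk token (<|endoftext|>) with the token
# that is actually present in this vocabulary.
tokenizer = CLIPTokenizerFast.from_pretrained(
    "path_to_tokenizer_folder",
    unk_token="[UNK]",
)
```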
> I get this error with this approach.

@humanely Ah, yes, I was assuming you were using a CLIPTokenizer. If you want to bundle together a tokenizer and an image processor which aren't for CLIP, you can't bundle them into a CLIPProcessor, as it assumes CLIP objects are being used.

I closed my previous PR as I realised I was making the assumption that the script was just for CLIP. However, looking at the script's README.md, I see the idea is that any vision and language models can be combined, i.e. any tokenizer and image processor could be used.
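If it helps, transformers has VisionTextDualEncoderProcessor, which bundles any tokenizer with any image processor. A minimal sketch, with example checkpoints:

```python
from transformers import (
    AutoImageProcessor,
    AutoTokenizer,
    VisionTextDualEncoderProcessor,
)

# Any tokenizer paired with any image processor; these checkpoints are examples.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
processor = VisionTextDualEncoderProcessor(image_processor=image_processor, tokenizer=tokenizer)

processor.save_pretrained("my-dual-encoder")
```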
@humanely There are a few different questions here. Without knowing exactly what you're trying to do, and without a full reproducer, it's hard to help debug or give advice. Just from what I gather in the comments:

> Basically, load the tokenizer json and save as legacy.

Why save it as legacy?

> Load the tokenizer json in Autotokenizer and save in a separate directory. Only use the necessary 2 files.

Yes, the recommended way to save and load tokenizers is using AutoTokenizer with the save_pretrained and from_pretrained methods. I'm not sure what you mean by "save in a separate directory" (separate from what?) and "only use the necessary 2 files".
> Also, I would suggest to upgrade the CLIPTokenizer class to use the Fast and new type of tokenizers out of the box.

The CLIPTokenizer class is for the slow tokenizer; CLIPTokenizerFast is for the fast tokenizer. AutoTokenizer will correctly load the fast tokenizer by default if it exists and can be loaded in the environment; otherwise it will load the slow tokenizer.
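For example, with a standard CLIP checkpoint:

```python
from transformers import AutoTokenizer

# Loads CLIPTokenizerFast by default when the fast tokenizer is available...
fast_tok = AutoTokenizer.from_pretrained("openai/clip-vit-base-patch32")
print(type(fast_tok).__name__)  # CLIPTokenizerFast

# ...and the slow tokenizer when explicitly requested.
slow_tok = AutoTokenizer.from_pretrained("openai/clip-vit-base-patch32", use_fast=False)
print(type(slow_tok).__name__)  # CLIPTokenizer
```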
> Legacy tokenizers have the <|endoftext|>, but not in new types.

Which "legacy" tokenizers are we talking about here? Whether a tokenizer has `<|endoftext|>` will depend on its vocabulary. Some tokenizers will have it, some will not; this is a modelling decision.
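One quick way to check a given tokenizer, sketched with AutoTokenizer:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("openai/clip-vit-base-patch32")
# Is the token actually in the vocabulary, and which special
# tokens does the tokenizer register?
print("<|endoftext|>" in tok.get_vocab())  # True for CLIP
print(tok.special_tokens_map)
```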
My bad. I read the docs and got the impression that CLIPTokenizer was generic and covered both. Is there a reason for keeping CLIP based off the slow tokenizer and not the fast one?
Thanks
Is there any documentation on how to train a fast CLIP tokenizer?

I built one which works fine as a pretrained fast tokenizer, but it fails to work as CLIP. I have attached the tokenizer file; it is for the Sanskrit language.

```python
>>> t = PreTrainedTokenizerFast(tokenizer_file="sa-bpe-tokenizer-v1.4.json")
>>> t.decode(t.encode("एकः बालकः धावति"))
'एकः बालकः धावति'
```

Firstly, I am unable to load this tokenizer as CLIP. Even if I generate the vocab and merges files and load it as CLIP, the encoder only generates UNK tokens:

```python
>>> # Save vocab and merges in the sa-bpe-tokenizer-v1.4 folder first.
>>> c = CLIPTokenizerFast.from_pretrained("sa-bpe-tokenizer-v1.4", from_slow=True)
>>> c.decode(c.encode("एकः बालकः धावति"))
'<|startoftext|>ए�[UNK]�[UNK]�[UNK]�[UNK]ल�[UNK]�[UNK]�[UNK]�[UNK]व�[UNK]�[UNK]<|endoftext|>'
```

This is not a CLIPTokenizer issue; the process of getting the vocab.json and merges files from the attached tokenizer JSON seems to be wrong. Can someone help convert this fast tokenizer into a compatible CLIP tokenizer? Or is there a way to build the CLIP tokenizer from scratch?

TIA
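One way to build a CLIP-compatible tokenizer from scratch is to train it with CLIP's conventions in the first place. A rough, untested sketch with the tokenizers library (the corpus file, output folder, and vocab size are placeholders, and the Whitespace pre-tokenizer is a simplification of CLIP's actual pre-tokenization):

```python
import os

from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.pre_tokenizers import Whitespace
from tokenizers.trainers import BpeTrainer

# CLIP's BPE marks word endings with a "</w>" suffix, so train with the
# same convention and the same special tokens.
tokenizer = Tokenizer(BPE(unk_token="<|endoftext|>", end_of_word_suffix="</w>"))
tokenizer.pre_tokenizer = Whitespace()

trainer = BpeTrainer(
    vocab_size=49408,  # CLIP's original vocab size; adjust for the corpus
    special_tokens=["<|startoftext|>", "<|endoftext|>"],
    end_of_word_suffix="</w>",
)
tokenizer.train(files=["sanskrit_corpus.txt"], trainer=trainer)  # placeholder corpus

# Export vocab.json and merges.txt for the slow CLIPTokenizer; the folder can
# then be loaded with CLIPTokenizerFast.from_pretrained(folder, from_slow=True).
os.makedirs("sa-clip-tokenizer", exist_ok=True)
tokenizer.model.save("sa-clip-tokenizer")
```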