Comments (6)
It seems that the checkpoint was saved from a DDP-wrapped module (so every key carries a `module.` prefix), but you tried to load it into a plain encoder.
This can be the solution:
```python
import torch

ckpt = torch.load(load_path, map_location=torch.device('cpu'))
pretrained_dict = ckpt['encoder']

# -- load the encoder weights, dropping the "module." prefix that DDP adds
for k, v in pretrained_dict.items():
    encoder.state_dict()[k[len("module."):]].copy_(v)
```
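Equivalently (a small sketch, with the same assumptions about `load_path` and `encoder` as above), you can strip the `module.` prefix from every key up front and load in one call, which also verifies that no keys are missing or mismatched:

```python
# Build a new state dict with the DDP "module." prefix removed from each key.
state = {k[len("module."):]: v for k, v in pretrained_dict.items()}
encoder.load_state_dict(state)  # strict=True by default: mismatched keys raise
```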
Thank you so much @CUN-bjy, it worked!
However, I couldn't classify the images due to limited computing resources.
Thanks again for your help!
Hello everyone.
Any insights as to how one can take a checkpoint/pretrained model and use it for some downstream task? As in, load the already trained weights into a model, freeze them and use this to train a classifier for another dataset (e.g. CIFAR 10).
Also, what is the complete answer to the question posed above? For example, where does the `encoder` variable come from? A complete code snippet would be of great help.
I've figured the steps for loading the checkpoint are the following:
- Take the state_dict
- Initialize the corresponding ViT (e.g. ViT-H) with the `init_model` function from `src.helper.py`
- Initialize an optimizer with `init_opt`
- Then? Which parts of the I-JEPA architecture are needed to utilize the embeddings in some other task as described earlier? (See the sketch after this comment.)
This is for research purposes by myself, an undergrad.
Thank you in advance!
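For reference, here is a minimal sketch of those steps end to end. It assumes `init_model` returns `(encoder, predictor)` as in `src.helper.py`, that the checkpoint stores the encoder weights under the `'encoder'` key with a DDP `module.` prefix (as in the first comment), and that the model/patch/crop settings match the config the checkpoint was pretrained with; the path is a placeholder:

```python
import torch
from src.helper import init_model  # assumes the ijepa repo root is on PYTHONPATH

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Rebuild the backbone with the same config the checkpoint was trained with
# (e.g. ViT-H/14 at 224px; adjust model_name/patch_size/crop_size to yours).
encoder, predictor = init_model(
    device=device,
    model_name='vit_huge',
    patch_size=14,
    crop_size=224,
)

# Load the pretrained weights, stripping the DDP "module." prefix.
ckpt = torch.load('path/to/checkpoint.pth.tar', map_location='cpu')
state = {k[len('module.'):]: v for k, v in ckpt['encoder'].items()}
encoder.load_state_dict(state)

# For downstream use you only need the encoder: freeze it and train a small
# head on its output features. The predictor and init_opt only matter if you
# want to resume the pretraining itself.
for p in encoder.parameters():
    p.requires_grad = False
encoder.eval()
```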
> Hello everyone. Any insights as to how one can take a checkpoint/pretrained model and use it for some downstream task? […]
It would be great to have a solution to this if someone has managed to get it working!
> Hello everyone. Any insights as to how one can take a checkpoint/pretrained model and use it for some downstream task? […]
You can take the pretrained target encoder and finetune it on your custom dataset. But full finetuning would be costly, as you can see from the size of the encoder: it has 32 blocks, and ViT-based models need a lot of data to be tuned for the task at hand. The GPU requirements are also higher. One possibility would be training an MLP (1 layer, 2 layers, ... N layers) on top of the frozen encoder for the task of interest.
Possible downstream tasks would be image similarity, classification, etc. Feature extraction is the main component; you can use it anywhere!
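A minimal sketch of that MLP-on-top idea (assumptions: the encoder returns a `(batch, num_patches, embed_dim)` tensor of patch tokens, which are mean-pooled before the head; `embed_dim=1280` corresponds to ViT-H, and `num_classes=10` would fit CIFAR-10):

```python
import torch
import torch.nn as nn

class FrozenEncoderClassifier(nn.Module):
    """Frozen pretrained encoder + small trainable MLP head."""

    def __init__(self, encoder, embed_dim=1280, num_classes=10):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():  # freeze the backbone
            p.requires_grad = False
        self.head = nn.Sequential(
            nn.Linear(embed_dim, 512),
            nn.GELU(),
            nn.Linear(512, num_classes),
        )

    def forward(self, x):
        with torch.no_grad():                 # no gradients through the backbone
            tokens = self.encoder(x)          # (batch, num_patches, embed_dim)
        return self.head(tokens.mean(dim=1))  # average-pool the patch tokens

# Only the head is trained:
# model = FrozenEncoderClassifier(encoder).to(device)
# opt = torch.optim.AdamW(model.head.parameters(), lr=1e-3)
```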
> Hello everyone. Any insights as to how one can take a checkpoint/pretrained model and use it for some downstream task? […]
I have developed fine-tuning code for I-JEPA here, heavily based on the ViT-MAE code, in order to reproduce the experiments conducted here. Right now it seems to work, as the loss is decreasing, but I'm not managing to get much reduction in the test error, so I am currently investigating that. If you need help, contact me on Discord (at falsomoralista) or something.