Comments (16)
Thanks for your interest. The configs are located here: https://github.com/ZhangYuanhan-AI/NOAH/tree/main/experiments/VPT
from noah.
Thanks for the quick reply, I will look into it.
Enjoy
😆 I have a question: it seems that VPT has different optimal hyperparameters for different datasets, but the folder you provided does not contain anything dataset-specific. I was wondering whether your project can reproduce the results in VPT, or only the results in NOAH?
Thanks again.
We can only achieve on-par performance with VPT and reproduce the results in NOAH.
I understand. So if I want to achieve on-par performance with VPT, I should follow only the hyperparameters in SUPERNET? Since the default value of --mode is super, the SEARCH_SPACE and RETRAIN hyperparameters won't have any effect, right?
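For reference, the behavior discussed above can be sketched with a minimal argument parser. The flag name --mode and the value super come from this thread; the parser itself is a hypothetical simplification, not the actual NOAH entry point.

```python
import argparse

# Hypothetical sketch: a --mode flag that defaults to "super", so any
# SEARCH_SPACE / RETRAIN settings are only consulted when the mode is
# changed explicitly. Not the real NOAH CLI.
parser = argparse.ArgumentParser()
parser.add_argument("--mode", default="super",
                    choices=["super", "search", "retrain"])

args = parser.parse_args([])  # no --mode passed on the command line
print(args.mode)  # -> super
```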
exactly
Is it possible to run your code without Slurm?
By the way, your config is not dataset-specific; is that not necessary?
Thanks, I will give it a try.
- Since VPT currently does not report its optimal learning rate and weight decay, and only reports its optimal prompt length (Table 13), we use that optimal prompt length to reproduce its performance on VTAB and achieve on-par performance, though we still report VPT's original results in this table (slightly better than our reproduction).
- To exactly reproduce VPT's results, you could run their exhaustive hyperparameter search (Table 7 in VPT). However, that search requires huge computing resources (240 experiments for a single dataset), so we could not fully follow their protocol for all experiments; we hope you understand. If you want to do so, just modify the parameters in the config file.
- Note that all results in NOAH use the same experimental setting (same learning rate and weight decay) for a fair comparison. Therefore, even though we do not run the exhaustive hyperparameter search, we still think our results are reasonable and convincing. What do you think?
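To make the cost concrete, here is a back-of-the-envelope sketch of how an exhaustive grid reaches 240 runs per dataset. The specific value lists below are hypothetical placeholders, not VPT's actual Table 7 search space.

```python
from itertools import product

# Hypothetical grid for illustration only (not VPT's real Table 7 values):
# 10 learning rates x 4 weight decays x 6 prompt lengths = 240 runs,
# all for a single dataset.
learning_rates = [50, 25, 10, 5, 2.5, 1, 0.5, 0.25, 0.1, 0.05]
weight_decays = [0.01, 0.001, 0.0001, 0.0]
prompt_lengths = [1, 5, 10, 50, 100, 200]

grid = list(product(learning_rates, weight_decays, prompt_lengths))
print(len(grid))  # -> 240
```

Multiplied across the 19 VTAB datasets, this is why the full search quickly becomes impractical.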
@zhaoedf
Thanks a lot!
I recently conducted a grid search as VPT proposed, and you are right that it is really time-consuming.
So when I learned that you achieved on-par performance with VPT, I thought you might have conducted a grid search or something similar to find the optimal hyperparameters.
Since you have not, maybe I will continue my grid search while also using your code to get results.
Would you mind if I post my grid search results here in this issue? It seems you are one of the few repos that have reproduced VPT, and our discussion might help others.
Sure! Maybe it is better if we discuss the grid search results through e-mail first and post the final results after we double-check them. Otherwise, this post might become too long for others to find the most useful information. @zhaoedf This is my e-mail: [email protected] Feel free to chat.
ok!
Does get_vtab1k.py simply generate .txt files under a TensorFlow environment, after which it is easy to use your lib.dataset to load the dataset from those txt files and the original images under a PyTorch environment?
Would you mind sending me these txt files? I am in mainland China, so accessing the Google API is not that convenient. Thanks.
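If the split files follow the common one-sample-per-line convention, loading them in a plain PyTorch environment could look like the sketch below. The "<relative image path> <integer label>" line format is an assumption, and the actual files written by get_vtab1k.py may differ.

```python
# Hedged sketch: assumes each line of a VTAB-1k split .txt file is
# "<relative image path> <integer label>". This format is an assumption;
# verify it against the files that get_vtab1k.py actually writes.
def read_split(txt_path):
    samples = []
    with open(txt_path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            # rsplit keeps spaces inside the image path intact
            path, label = line.rsplit(" ", 1)
            samples.append((path, int(label)))
    return samples
```

Each (path, label) pair could then be resolved against the image root directory and wrapped in a standard Dataset class.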