Comments (8)
Thanks for the reminder, and sorry for the confusion.
We have revised this in the latest arXiv version.
Specifically, the Adapter is 0.16M parameters, LoRA is 0.29M, VPT is 0.64M, and NOAH is 0.43M.
For a fair comparison, we will add the VTAB performance of the 4×8-dim Adapter in the next version. Please stay tuned.
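The Adapter and LoRA counts above can be sanity-checked with a quick back-of-the-envelope calculation for ViT-B/16 (12 layers, hidden dim 768). This is a minimal sketch under assumptions not stated in the thread: the Adapter is a bottleneck of dim 8 with down/up projections and biases, and LoRA applies rank-8 A/B matrices to the query and value projections without biases.

```python
# Back-of-the-envelope trainable-parameter counts for ViT-B/16.
# Assumptions (not from the thread): Adapter = bottleneck dim 8 with
# biases; LoRA = rank-8 A/B matrices on the query and value projections.
d, layers = 768, 12  # ViT-B/16 hidden dim and depth

adapter_dim = 8
adapter_params = layers * ((d * adapter_dim + adapter_dim)   # down-projection + bias
                           + (adapter_dim * d + d))          # up-projection + bias

lora_rank = 8
lora_params = layers * 2 * (d * lora_rank + lora_rank * d)   # A and B, for q and v

print(f"Adapter: {adapter_params / 1e6:.2f}M")  # -> 0.16M
print(f"LoRA:    {lora_params / 1e6:.2f}M")     # -> 0.29M
```

Both match the figures quoted above; the VPT and NOAH counts depend on prompt lengths and searched configurations, so they are not reproduced here.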
Thanks a lot for your reply. BTW, it seems that the `lib` folder has not been uploaded yet, which causes an error when running the scripts.
Yes, Shibo.
We are currently cleaning up the code in the `lib` folder. We will upload it this week; please stay tuned.
Hi, thanks again for sharing the code.
I successfully reproduced most of the VTAB-1K results in the paper, except for Retinopathy.
I ran the following commands:
```shell
DATASET=diabetic_retinopathy

# adapter
python supernet_train_prompt.py --data-path=../vtab-1k/${DATASET} --data-set=${DATASET} --cfg=./experiments/Adapter/ViT-B_prompt_adapter_8.yaml --resume=../ViT-B_16.npz --output_dir=./saves/${DATASET}_lr-0.001_wd-0.0001_adapter --batch-size=64 --lr=0.001 --epochs=100 --is_adapter --weight-decay=0.0001 --no_aug --mixup=0 --cutmix=0 --direct_resize --smoothing=0 --launcher="none"

# lora
python supernet_train_prompt.py --data-path=../vtab-1k/${DATASET} --data-set=${DATASET} --cfg=./experiments/LoRA/ViT-B_prompt_lora_8.yaml --resume=../ViT-B_16.npz --output_dir=./saves/${DATASET}_lr-0.001_wd-0.0001_lora --batch-size=64 --lr=0.001 --epochs=100 --is_LoRA --weight-decay=0.0001 --no_aug --mixup=0 --cutmix=0 --direct_resize --smoothing=0 --launcher="none"

# noah
python supernet_train_prompt.py --data-path=../vtab-1k/${DATASET} --data-set=${DATASET} --cfg=experiments/NOAH/subnet/VTAB/ViT-B_prompt_${DATASET}.yaml --resume=../ViT-B_16.npz --output_dir=saves/${DATASET}_supernet_lr-0.0005_wd-0.0001/retrain_0.001_wd-0.0001 --batch-size=64 --mode=retrain --epochs=100 --lr=0.001 --weight-decay=0.0001 --no_aug --direct_resize --mixup=0 --cutmix=0 --smoothing=0 --launcher="none"
```
and got:

```
{"train_lr": 1.0976769428005575e-05, "train_loss": 0.053501952129105725, "test_loss": 1.5837794360286461, "test_acc1": 71.1038200165646, "test_acc5": 100.0, "epoch": 99, "n_parameters": 160613}
{"train_lr": 1.0976769428005575e-05, "train_loss": 0.02753125037997961, "test_loss": 2.0185347567061465, "test_acc1": 67.27443168466701, "test_acc5": 100.0, "epoch": 99, "n_parameters": 298757}
{"train_lr": 1.0976769428005575e-05, "train_loss": 0.042937366478145125, "test_loss": 1.6862158768191309, "test_acc1": 69.75392547600269, "test_acc5": 100.0, "epoch": 99, "n_parameters": 8462261}
```
Did I do something wrong?
Hi Shibo,
Is the reported test_acc1 the best accuracy across epochs, or the accuracy of the final checkpoint?
The final checkpoint.
Ok, we report the best accuracy.
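For anyone reproducing the numbers, this distinction (best epoch vs. final checkpoint) accounts for small gaps like the Retinopathy result above. A minimal sketch of tracking the best accuracy across epochs; `train_one_epoch` and `evaluate` are hypothetical stand-ins, not functions from the NOAH codebase:

```python
# Sketch of reporting best-epoch accuracy instead of the final checkpoint.
# train_one_epoch(epoch) and evaluate(epoch) are hypothetical stand-ins
# for the training/evaluation steps in supernet_train_prompt.py.
def run(epochs, train_one_epoch, evaluate):
    best_acc1, best_epoch = 0.0, -1
    last_acc1 = 0.0
    for epoch in range(epochs):
        train_one_epoch(epoch)
        last_acc1 = evaluate(epoch)  # top-1 test accuracy for this epoch
        if last_acc1 > best_acc1:
            best_acc1, best_epoch = last_acc1, epoch
    # Paper-style reporting uses best_acc1; the final log line shows last_acc1.
    return best_acc1, best_epoch, last_acc1
```

Whether best-epoch selection is legitimate depends on whether the evaluation set doubles as a model-selection set; that caveat is worth keeping in mind when comparing against the final-checkpoint numbers above.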
Thank you.