
openssl-simcore's Issues

Training time cost

Hi, thanks for sharing this impressive work! Could you tell me which specific GPU you used in the experiments, e.g. a 3090 or a V100? And how long does it take to finish the pretraining process on X and on OS+X?

Results problem on aircraft dataset

Hi, I have read your paper carefully and it is good work. I am currently trying to replicate your results, but I have run into some problems:
Following your experimental settings, I first ran SSL training on Aircraft (train) with the SimCLR method, then tested its performance on X (Aircraft test) using run_selfup.sh. However, the results on X (Aircraft) are only [20.55, 28.32, 90.25] (Acc@1, Acc@5, Train Acc).
There is a clear difference between this result and the paper.
Could you suggest what might be causing this gap?
The specific training parameters I used are as follows:

TAG="aircraft_vanilla"
DATA="aircraft"

CUDA_VISIBLE_DEVICES=0,1,2,3 python train_selfsup.py --tag $TAG \
    --no_sampling \
    --model resnet50 \
    --batch_size 512 \
    --precision \
    --dataset $DATA \
    --data_folder ./data \
    --method simclr \
    --epochs 5000 \
    --cosine \
    --learning_rate 1e-1 \
    --weight_decay 1e-4

Dataset split in SSL and evaluation

Hello, thanks for the great work!

Just wanted to check something:

  1. In the appendix it's said that "we incorporate the train and validation set of the original benchmarks [...] to enlarge the number of training samples".
  2. In the paper itself, Table 2 shows the size of the training set.

So when doing SSL and then evaluating just on Aircraft, for example (the 46.56% value), do you use all 10k samples for SSL, and does Table 2 report the accuracy on that training set after training the linear classifier? Thanks in advance!

About the result of pretraining directly on target fine-grained datasets

Hi, I have read your paper and it is great work, but I faced some problems when pretraining on fine-grained datasets. I chose BYOL with ResNet-50 on the Cars dataset, and set batch size 2048, 1000 epochs, a cosine LR schedule with lr=1e-1, and weight decay 1e-4. The loss has decayed to 0.1, but the linear evaluation result (with a fixed backbone) is far below 50%. I am confused and would like to know what the suitable settings are, or whether I have missed some experimental details.
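For reference, a sketch of the run described above, assuming the same `train_selfsup.py` interface as in the SimCLR script earlier in this thread (all flags below appear in that script; the `--method byol` value and the tag name are assumptions, not confirmed by the repo):

```shell
# Hypothetical command reproducing the BYOL settings described in this issue;
# flag names mirror the SimCLR example above, values match the issue text.
TAG="cars_byol"
DATA="cars"

CUDA_VISIBLE_DEVICES=0,1,2,3 python train_selfsup.py --tag $TAG \
    --no_sampling \
    --model resnet50 \
    --batch_size 2048 \
    --precision \
    --dataset $DATA \
    --data_folder ./data \
    --method byol \
    --epochs 1000 \
    --cosine \
    --learning_rate 1e-1 \
    --weight_decay 1e-4
```

If this matches what you ran, the gap may lie in BYOL-specific hyperparameters (e.g. momentum of the target network) rather than these shared flags.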
