
moco-v3's Issues

MOCO V3 vit_small error: object has no attribute "num_tokens"

When I attempt to pre-train moco v3's vit_small model, I run into the following bug:

    raise AttributeError("'{}' object has no attribute '{}'".format(
    AttributeError: 'VisionTransformerMoCo' object has no attribute 'num_tokens'

After changing the assertion at vits.py line 66 to

    assert self.num_prefix_tokens == 1, 'Assuming one and only one token, [cls]'

I no longer see the bug. It seems the base class timm.models.vision_transformer has an attribute named num_prefix_tokens but not num_tokens, which is why vit_small errors out at the line above.
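
For reference, a version-tolerant variant of that line (a sketch, assuming the assertion is the only place the attribute is read; newer timm releases renamed num_tokens to num_prefix_tokens):

    # Sketch: accept both the old and the new timm attribute name.
    num_prefix = getattr(self, 'num_prefix_tokens', getattr(self, 'num_tokens', None))
    assert num_prefix == 1, 'Assuming one and only one token, [cls]'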

The command I used to run the code is:
python main_moco.py \
  -a vit_small -b 1024 \
  --optimizer=adamw --lr=1.5e-4 --weight-decay=.1 \
  --epochs=400 --warmup-epochs=40 \
  --stop-grad-conv1 --moco-m-cos --moco-t=.2 \
  --dist-url 'tcp://localhost:8080' \
  --multiprocessing-distributed --world-size 1 --rank 0 \
  /data/

Please let me know if this is an accurate fix, or if I missed something. Thanks in advance!

KNN curve code

Thanks @endernewton for your work! I was wondering if you could kindly share your KNN classifier curve code somewhere, either here or in some other repo/gist?

Thanks again!
Kashif
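
For anyone who needs a starting point in the meantime, a minimal weighted k-NN probe over features pre-extracted from the frozen encoder might look like this (a sketch, not the authors' code; k=20 and t=0.07 are illustrative values):

    import torch
    import torch.nn.functional as F

    def knn_accuracy(train_feats, train_labels, val_feats, val_labels, k=20, t=0.07):
        # Cosine-similarity weighted k-NN vote over frozen encoder features.
        train_feats = F.normalize(train_feats, dim=1)
        val_feats = F.normalize(val_feats, dim=1)
        sims = val_feats @ train_feats.t()              # (N_val, N_train)
        topk_sims, topk_idx = sims.topk(k, dim=1)
        topk_labels = train_labels[topk_idx]            # (N_val, k)
        weights = (topk_sims / t).exp()                 # temperature-weighted votes
        num_classes = int(train_labels.max()) + 1
        votes = torch.zeros(val_feats.size(0), num_classes, device=val_feats.device)
        votes.scatter_add_(1, topk_labels, weights)
        return (votes.argmax(dim=1) == val_labels).float().mean().item()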

Training with multi-crop

Thank you for your great work! I'm wondering if MoCo v3 could be further improved by the multi-crop trick. Is there any recommended configuration? Thank you very much!

Question about batch size

Hi,

For ResNet-50, the training batch size is 4096. However, I cannot afford to train with such a large batch size. Can I expect results similar to batch size 4096 if I train with a batch size of 512 or 256?
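
Note that main_moco.py rescales the learning rate by the linear scaling rule, so a smaller batch automatically gets a proportionally smaller effective learning rate. A sketch, using the ResNet-50 default base lr of 0.6 as an example:

    # Linear lr scaling rule applied in main_moco.py (sketch):
    args.lr = args.lr * args.batch_size / 256   # 0.6 -> 9.6 at 4096, 1.2 at 512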

About Linear Probe Accuracy of ResNet-50

I ran the code using the parameters specified in CONFIG.md for ResNet-50 (1000-epoch pre-training) and then trained a linear probe. All the parameters were kept the same as in CONFIG.md. However, after linear probing, my accuracy reaches only 74.36%, not 74.6%. I am not sure what I might be missing here.

Could you help me out?

Additionally, I evaluated the provided official checkpoint, and it does achieve 74.6%.

ViT-Base fine-tuned checkpoints

Hey,
Thank you for providing the code and the checkpoints. I may have missed it, but I couldn't find checkpoints for the fine-tuned ViT-Base model. Could you please provide them?

Thanks,
Eliahu

Fine-tuning vs Linear probing

Hi,

I am wondering why there is a significant performance gap between fine-tuning and linear probing. Additionally, why is fine-tuning not used for the ResNet model?

Thank you in advance!

Tensorflow version

Thank you for open-sourcing the PyTorch implementation. I wonder if the original TensorFlow implementation has been released, for the purpose of training on TPUs.

Question about linear probe

Hi, I see in your linear probing code that validation accuracy is also monitored during training. Which val acc did you report: the best one, or the one obtained at the last epoch? Thank you.

Does this implementation support non-distributed training?

I found that if I don't use distributed training, i.e. set --multiprocessing-distributed=False and use a single GPU, there seem to be no problems in main_moco.py with

   torch.cuda.set_device(args.gpu)
   model = model.cuda(args.gpu)

However, this error occurred when training started

    AssertionError: Default process group is not initialized

This error can be traced back to

File "~/moco-v3/moco/builder.py", line 68, in contrastive_loss
k = concat_all_gather(k)

and

File "~/moco-v3/moco/builder.py", line 178, in concat_all_gather
for _ in range(torch.distributed.get_world_size())]

The error is caused by the computation of contrastive_loss, which still relies on torch.distributed. So I wonder whether non-distributed training is unsupported even when --multiprocessing-distributed=False is set.
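
One possible workaround (a sketch, not part of the repo): let concat_all_gather in moco/builder.py fall back to a no-op when no process group is initialized. Note that contrastive_loss would also need its torch.distributed.get_rank() call guarded, as indicated in the trailing comment:

    import torch

    @torch.no_grad()
    def concat_all_gather(tensor):
        # Fall back to the local tensor when not running distributed,
        # so single-GPU, non-distributed runs can proceed.
        if not (torch.distributed.is_available() and torch.distributed.is_initialized()):
            return tensor
        tensors_gather = [torch.ones_like(tensor)
                          for _ in range(torch.distributed.get_world_size())]
        torch.distributed.all_gather(tensors_gather, tensor, async_op=False)
        return torch.cat(tensors_gather, dim=0)

    # contrastive_loss would need a similar guard for its rank lookup, e.g.:
    # rank = torch.distributed.get_rank() if torch.distributed.is_initialized() else 0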

How many TPUs?

Hi,
In the MoCo v3 paper there is a section about computation time. It says that for ViT-B, 100 ImageNet epochs take 2.1 hours. It is not clear whether that means 512 TPU devices or 512 TPU cores. To be precise, there are two types of TPUs available on Google Cloud: v2-[32,512] and v3-[32-2048]. Which one was used in the experiment, and how many of each instance?

Queue

Is the queue no longer updated here?

What are the parameters used for linear classification in the ResNet-50 experiment?

Specifically, what are the parameters for

python main_lincls.py \
  -a [architecture] --lr [learning rate] \
  --dist-url 'tcp://localhost:10001' \
  --multiprocessing-distributed --world-size 1 --rank 0 \
  --pretrained [your checkpoint path]/[your checkpoint file].pth.tar \
  [your imagenet-folder with train and val folders]

in the ResNet-50 experiment?

How does the loss converge during training?

During training, I find that the training loss is not monotonically decreasing; is that expected? Does the loss value reflect how training is going? If not, what should the sample-matching accuracy be when pre-training finishes?

How to fine-tune?

Hello, I would like to ask how to fine-tune this on my own dataset; it seems that the provided pre-trained weights are missing some content.
Thanks!

    main_moco.py, line 247, in main_worker
      optimizer.load_state_dict(checkpoint['optimizer'])
    optimizer.py, line 137, in load_state_dict
      saved_groups = state_dict['param_groups']
    TypeError: 'NoneType' object is not subscriptable

About the learning rate for ResNet-50

I ran into an issue training ResNet-50 with MoCo v3. Under the distributed training setting with 16 V100 GPUs (one GPU per process, batch size 4096), the training loss is about 27.2 at the 100th epoch. When I lower the learning rate to 1.5e-4 (the default is 0.6), the loss decreases more reasonably and reaches 27.0 at the 100th epoch. Could you please verify whether this is reasonable?
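
For scale, a hedged sanity check (assuming the ResNet-50 default temperature of 1.0 and the symmetrized, 2τ-scaled loss in moco/builder.py):

    import math
    # Chance-level loss for a random encoder: two symmetrized views, each
    # contributing CE(logits/tau) * (2*tau), with chance CE = ln(batch size).
    tau, batch = 1.0, 4096
    chance_loss = 2 * (2 * tau) * math.log(batch)
    print(round(chance_loss, 1))   # ~33.3

So a loss around 27 is below chance level, though whether it is low enough is hard to judge from the number alone.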

Question about the temperature of the loss in MoCo v3

I see that the loss in MoCo v2 looks like:

    loss = nn.CrossEntropyLoss()(logits / self.T, labels)

but the loss in MoCo v3 looks like:

    loss = nn.CrossEntropyLoss()(logits / self.T, labels) * (2 * self.T)

I don't know why the loss should be multiplied by 2 * temperature; it really confuses me. I'd be grateful if anyone could clear this up.
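
One common reading (a hedged explanation, not confirmed by the authors here): a constant factor on the loss does not change the optimum, only the gradient scale. Dividing the logits by τ multiplies the cross-entropy gradient by 1/τ,

    d/dz_j CE(z/τ, y) = (softmax(z/τ)_j − 1[j = y]) / τ

so scaling the loss back up by τ roughly cancels that factor, keeping the gradient magnitude (and hence reasonable learning rates) less sensitive to the choice of temperature; the remaining factor 2 is just a constant rescaling.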

The linear-probe acc1 of ViT-Tiny on ImageNet is bad

Hi, thanks for your great work. I found a problem in our experiment:
first, I pre-train a ViT-Tiny on ImageNet with MoCo v3;
second, I fine-tune the ViT-Tiny on ImageNet, training only a classifier (linear probe).

The top-1 accuracy is only 32%. Is that right? Does anyone have MoCo v3 results for ViT-Tiny on ImageNet?

Any hyperparameter suggestions for other model architectures?

I noticed that this repository only provides results and experiment settings for ResNet-50 and the ViT series of models.

And when I try to reproduce the results, I find that the final linear probing accuracy is very sensitive to the hyperparameters, such as learning rate, optimizer, augmentations, etc.

Are there any suggestions for training MoCo v3 on other models, such as EfficientNet, ResNet-101, etc.? And how should the hyperparameters be adjusted for different model architectures?

Transfer learning performance of MoCo v3 on more challenging downstream dense prediction tasks.

Thanks for your great work!

I believe a goal of un-/self-supervised learning is to learn transferable feature representations. I notice that MoCo v3 conducts a study on some smaller image classification datasets such as CIFAR-10/-100, and the performance is quite impressive.

But it seems that the performance of modern neural nets on these image classification datasets is somewhat saturated. I believe the community is more interested in the more challenging downstream dense prediction tasks such as object detection and scene parsing. Task-specific decoder layers such as DETR (for object detection) and SETR (for semantic segmentation or scene parsing) can be used almost out of the box. Is there a plan to study the transfer learning performance of MoCo v3 on downstream dense prediction tasks in the future?

BUG

In main_moco.py, the code

    optimizer.load_state_dict(checkpoint['optimizer'])
    scaler.load_state_dict(checkpoint['scaler'])

reports an error.

That means the 'optimizer' and 'scaler' entries are missing from the ViT-Small pre-trained checkpoint files.
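
A possible workaround (a sketch, assuming the released weights carry only the model state; model, optimizer, scaler, and pretrained_path stand in for the corresponding objects in main_moco.py):

    import torch

    checkpoint = torch.load(pretrained_path, map_location='cpu')
    model.load_state_dict(checkpoint['state_dict'])
    # Guard the optional entries so loading release weights doesn't crash.
    if checkpoint.get('optimizer') is not None:
        optimizer.load_state_dict(checkpoint['optimizer'])
    if checkpoint.get('scaler') is not None:
        scaler.load_state_dict(checkpoint['scaler'])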

Hi, an error occurred in torch.multiprocessing.spawn

    [libprotobuf FATAL google/protobuf/stubs/common.cc:87] This program was compiled against version 3.9.2 of the Protocol Buffer runtime library, which is not compatible with the installed version (3.17.3). Contact the program author for an update. If you compiled the program yourself, make sure that your headers are from the same version of Protocol Buffers as your link-time library. (Version verification failed in "bazel-out/k8-opt/bin/tensorflow/core/framework/tensor_shape.pb.cc".)
    terminate called after throwing an instance of 'google::protobuf::FatalException'
      what(): This program was compiled against version 3.9.2 of the Protocol Buffer runtime library, which is not compatible with the installed version (3.17.3). Contact the program author for an update. If you compiled the program yourself, make sure that your headers are from the same version of Protocol Buffers as your link-time library. (Version verification failed in "bazel-out/k8-opt/bin/tensorflow/core/framework/tensor_shape.pb.cc".)
    Traceback (most recent call last):
      File "train.py", line 413, in <module>
        main()
      File "train.py", line 140, in main
        mp.spawn(main_worker, nprocs=ngpus_per_node, args=(ngpus_per_node, args))
      File "/home/wxq/conda/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 230, in spawn
        return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
      File "/home/wxq/conda/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 188, in start_processes
        while not context.join():
      File "/home/wxq/conda/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 130, in join
        raise ProcessExitedException(
    torch.multiprocessing.spawn.ProcessExitedException: process 0 terminated with signal SIGABRT

I don't know what error has occurred and am asking for help, thank you.
My device: 4 NVIDIA 1080 Ti GPUs
CUDA version: 11.0
