fafa-dl / awesome-backbones
Integrates deep learning models for image classification | a project for backbone learning, comparison, and custom modification
Can this be changed in your model, and how exactly? By changing topk=(1, 5) to topk=(1, 2) in the head? That doesn't work either.
This library is great; I used it a few days ago to finish a small task. I added an extra head on top of ResNet and made some modifications there (including some intermediate modules). As far as I can tell, the code only supports replacing the head, not adding one; maybe I missed something, but this part may need improvement. Also, the single-test part only predicts on a single image; it might be worth supporting a whole batch, and splitting out the val_pipeline part so the data can be preprocessed in advance. These are my impressions from using it; happy to discuss, and thanks again for the framework.
Loading resnet18-f37072fd.pth
The model and loaded state dict do not match exactly
I got an error while using the pretrained model; how can I solve it? Thanks.
Traceback (most recent call last):
File "tools/train1.py", line 178, in
main()
File "tools/train1.py", line 172, in main
train(model,runner, lr_update_func, device, epoch, data_cfg.get('train').get('epoches'), meta)
File "/hy-tmp/Awe/utils/train_utils.py", line 215, in train
losses = model(images,targets=targets,return_loss=True)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/hy-tmp/Awe/models/build.py", line 125, in forward
return self.forward_train(x,**kwargs)
File "/hy-tmp/Awe/models/build.py", line 130, in forward_train
x = self.extract_feat(x)
File "/hy-tmp/Awe/models/build.py", line 113, in extract_feat
x = self.backbone(img)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/hy-tmp/Awe/configs/backbones/swin_transformer.py", line 409, in forward
x, hw_shape = self.patch_embed(x)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/hy-tmp/Awe/configs/common/transformer.py", line 221, in forward
x = self.projection(x)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/conv.py", line 423, in forward
return self._conv_forward(input, self.weight)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/conv.py", line 419, in _conv_forward
return F.conv2d(input, weight, self.bias, self.stride,
RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same
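Judging from the final RuntimeError, the input tensors were moved to the GPU but the model was not; a minimal sketch of the usual fix, placed before the training loop (device naming is an assumption, model/images/targets are the loop's existing names):

import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = model.to(device)        # keep the weights on the same device as the inputs
images = images.to(device)
targets = targets.to(device)
losses = model(images, targets=targets, return_loss=True)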
The .pth file I downloaded loads fine, but after training it no longer loads. Is that because the model structure changed? How should I load it then, for example when I want to deploy?
1) Consider writing each epoch's training loss and accuracy, and the validation loss and accuracy, to a CSV file; that would make plotting easier later (a minimal sketch follows this list).
2) Consider also appending each run's console output to a log file.
Just friendly suggestions; the author can adopt them as appropriate.
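A minimal sketch of suggestion 1, assuming the training loop already produces per-epoch values under these hypothetical names (epoch, train_loss, train_acc, val_loss, val_acc):

import csv

# hypothetical per-epoch metrics produced by the training loop
row = [epoch, train_loss, train_acc, val_loss, val_acc]
with open('metrics.csv', 'a', newline='') as f:
    writer = csv.writer(f)
    if f.tell() == 0:  # empty file: write the header exactly once
        writer.writerow(['epoch', 'train_loss', 'train_acc', 'val_loss', 'val_acc'])
    writer.writerow(row)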
A question: each model's input image is 256×256, which is processed down to 224×224. If I want a larger input image to improve accuracy, how do I set that? For example, my original images are 1280*960; where do I change this? Thanks!
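Where exactly to change this depends on each model's dataset config. As a hedged illustration only (the transform names below are assumptions modeled on mmcls-style pipelines, except LoadImageFromFile, which appears in this repo's inference code), the edit would look something like:

val_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='Resize', size=512),           # assumed default: 256
    dict(type='CenterCrop', crop_size=448),  # assumed default: 224
]

Note that pretrained weights generally expect the input size they were trained with, so enlarging the input usually means fine-tuning as well.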
Are the models and config folders swapped?
Maybe update the title? I can't find or install that module. My mistake: "from utils.train_utils import get_info" raises ModuleNotFoundError: No module named 'utils.train_utils'.
Hi, regarding multi-label classification: how should I prepare my own dataset?
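For multi-label data the usual recipe is multi-hot labels with a sigmoid-based loss instead of softmax; a minimal, framework-agnostic sketch (not this repo's API):

import torch
import torch.nn as nn

num_classes = 5
logits = torch.randn(2, num_classes)          # raw head outputs for a batch of 2
targets = torch.tensor([[1, 0, 1, 0, 0],      # multi-hot labels: several classes per image
                        [0, 1, 0, 0, 1]], dtype=torch.float)
loss = nn.BCEWithLogitsLoss()(logits, targets)
preds = (logits.sigmoid() > 0.5).int()        # threshold each class independently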
Hello, I'd like to ask how to use single_test.py to run inference on a video (a frame-by-frame sketch follows below).
Also, have the training config files for Conformer, PoolFormer and TNT not been updated yet?
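For the video question, one hedged approach is to read frames with OpenCV and reuse the single-image path on each frame; this assumes inference_model (utils/inference.py) can accept an in-memory frame, which may require a small change since single_test.py passes a file path:

import cv2

cap = cv2.VideoCapture('input.mp4')        # hypothetical input video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # assumption: inference_model accepts an ndarray; otherwise write each
    # frame to a temporary image file first and pass that path instead
    result = inference_model(model, frame, val_pipeline, classes_names)
    print(result['pred_label'], result['pred_class'])
cap.release()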
The author is awesome!! Keep going (。ì _ í。)
Loading Train_Epoch077-Loss0.423.pth
<All keys matched successfully>
Traceback (most recent call last):
File "tools/single_test.py", line 42, in <module>
main()
File "tools/single_test.py", line 36, in main
result = inference_model(model, args.img, val_pipeline, classes_names)
File "/home/Awesome-Backbones/utils/inference.py", line 48, in inference_model
if val_pipeline[0]['type'] != 'LoadImageFromFile':
TypeError: 'NoneType' object is not subscriptable
Running python tools/evaluation.py works fine, but running python tools/single_test.py raises the error above... Printing val_pipeline shows it is None. What could be causing this? Thanks.
To fix this small bug, I modified a few statements in utils/inference.py and part of the single-image test code; the bug is now resolved.
The changes are:
In the inference code, add label_names:
def inference_model(model, image, val_pipeline, classes_names, label_names):
and result['pred_class'] = classes_names[label_names.index(result['pred_label'])]
In the single-image test code, change:
classes_names, label_names = get_info(args.classes_map)
result = inference_model(model, args.img, val_pipeline, classes_names, label_names)
After testing, the modified code no longer shows the small bug where the annotation order affected the displayed test result.
👀
Hello!
First of all, I would like to thank you for the great job! But it seems that EfficientNetV2 doesn't work. I tried this:
python tools/train.py models/efficientnetv2/efficientnetv2_xl_512.py
And received an error:
"AttributeError: 'EnhancedConvModule' object has no attribute 'in_channels'"
Could you fix it, please?
Are the model evaluation results computed on the test set?
May I ask which paper proposed the EigenGradCAM method used for class activation map visualization? I'd like to understand the principle.
Could we get the model's inference speed, e.g. per-image latency in ms and FPS over multiple images? I hope this can be added.
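A minimal timing sketch for this request, assuming a CUDA model and a dummy 224×224 input (return_loss=False follows the call pattern visible in the tracebacks above; synchronize is needed for honest GPU timings):

import time
import torch

model.eval()
x = torch.randn(1, 3, 224, 224).cuda()
with torch.no_grad():
    for _ in range(10):                    # warm-up iterations
        model(x, return_loss=False)
    torch.cuda.synchronize()
    start = time.time()
    n = 100
    for _ in range(n):
        model(x, return_loss=False)
    torch.cuda.synchronize()
    elapsed = time.time() - start
print(f'{elapsed / n * 1000:.2f} ms/image, {n / elapsed:.1f} FPS')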
It would be nice to add a feature that outputs the features extracted by the backbone and saves them as .npy files to a specified folder; that would make dimensionality-reduction visualization for classification convenient. It would also be nice to have a function that prints the network structure. Thanks again for the framework!
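A hedged sketch of dumping backbone features, leaning on the extract_feat method visible in models/build.py (whether it returns a tensor or a per-stage tuple is an assumption to verify):

import os
import numpy as np
import torch

os.makedirs('features', exist_ok=True)
with torch.no_grad():
    feats = model.extract_feat(images)     # backbone (+ neck) output for one batch
    if isinstance(feats, (tuple, list)):   # some backbones return one tensor per stage
        feats = feats[-1]
    np.save(os.path.join('features', 'batch0.npy'), feats.cpu().numpy())
print(model)                               # simplest way to print the network structure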
Thanks for providing this code library. My current task is classifying by content level; how can I modify the head output so that it becomes a head for solving a regression problem? For example, I have built a classification model with three levels, 1, 10 and 100; how would I predict an intermediate level such as 5 or 50, and what should I modify on top of your code?
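A generic sketch of swapping the classification head for a regression head (plain PyTorch rather than this repo's head API; in_features=512 is an assumption matching a ResNet-18-style backbone):

import torch.nn as nn

class RegressionHead(nn.Module):
    def __init__(self, in_features=512):
        super().__init__()
        self.fc = nn.Linear(in_features, 1)   # one continuous output instead of class logits

    def forward(self, feats):
        return self.fc(feats).squeeze(-1)

criterion = nn.MSELoss()                       # regression loss instead of cross-entropy
# tip: train on log10 of the level (1 -> 0, 10 -> 1, 100 -> 2) so 5 and 50 fall in between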
I want to try switching to the Lion optimizer to see how it performs; what should I modify?
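A hedged sketch using the third-party lion-pytorch package (pip install lion-pytorch); where this repo builds its optimizer from the config is something to locate in the training utilities:

from lion_pytorch import Lion

# Lion typically wants a ~3-10x smaller lr and a larger weight decay than AdamW
optimizer = Lion(model.parameters(), lr=1e-4, weight_decay=1e-2)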
It would be nice to add attention mechanisms as an option, letting the user choose whether a given network includes one.
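As one concrete example of such a pluggable module, a minimal squeeze-and-excitation (SE) block that could be toggled on a backbone stage:

import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                        # reweight channels by learned attention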
The error is as follows:
(torch) E:\python\Awesome-Backbones>python tools/pth2pt.py models\repvgg\repvgg_A0.py
Loading Train_Epoch042-Loss0.012.pth
Traceback (most recent call last):
File "tools/pth2pt.py", line 125, in
main()
File "tools/pth2pt.py", line 121, in main
traced_script_module = torch.jit.trace(model, example)
File "D:\anaconda3\envs\torch\lib\site-packages\torch\jit_trace.py", line 742, in trace
_module_class,
File "D:\anaconda3\envs\torch\lib\site-packages\torch\jit_trace.py", line 940, in trace_module
_force_outplace,
File "D:\anaconda3\envs\torch\lib\site-packages\torch\nn\modules\module.py", line 725, in _call_impl
result = self._slow_forward(*input, **kwargs)
File "D:\anaconda3\envs\torch\lib\site-packages\torch\nn\modules\module.py", line 709, in _slow_forward
result = self.forward(*input, **kwargs)
File "E:\python\Awesome-Backbones\models\build.py", line 125, in forward
return self.forward_train(x,**kwargs)
TypeError: forward_train() missing 1 required positional argument: 'targets'
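The trace fails because torch.jit.trace calls forward, which this repo dispatches to forward_train (models/build.py line 125) and which therefore requires targets. A hedged workaround is to trace only the inference path, reusing the return_loss flag seen in the training call; whether that branch is traceable end to end depends on what forward_test returns:

import torch

class TraceWrapper(torch.nn.Module):
    """Expose only the inference path so tracing never reaches forward_train."""
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, x):
        return self.model(x, return_loss=False)

model.eval()
example = torch.randn(1, 3, 224, 224)
traced = torch.jit.trace(TraceWrapper(model), example)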
Has the author done any work in this area? Looking forward to your reply.
Is there any way to check the model's computational cost (FLOPs)?
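One hedged option is the third-party thop package (pip install thop); profiling model.backbone sidesteps the forward_train dispatch seen above, and the 224×224 input is an assumption:

import torch
from thop import profile

# thop reports multiply-accumulates (MACs), which are commonly quoted as FLOPs
dummy = torch.randn(1, 3, 224, 224)
macs, params = profile(model.backbone, inputs=(dummy,))
print(f'MACs: {macs / 1e9:.2f} G, params: {params / 1e6:.2f} M')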
Recently, while training on my own dataset, I noticed that several of the Transformer networks sometimes hit loss=nan, and once that happens the training no longer makes any updates.
Under what circumstances does loss=nan occur? Thanks.
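Common triggers are a learning rate that is too high for a Transformer without warm-up, mixed-precision overflow, or corrupted samples/labels. One easy mitigation to bolt onto the training step is gradient clipping (loss, model and optimizer are the loop's existing names):

import torch

optimizer.zero_grad()
loss.backward()
# clip exploding gradients before the update; max_norm=1.0 is a common default
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
optimizer.step()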
(pytorch-gpu) C:\ai-project\Awesome-Backbones-main>python tools/evaluation.py models/shufflenet/shufflenet_v2.py
tools/evaluation.py:1: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
Loading Val_Epoch001-Acc100.000.pth
Traceback (most recent call last):
File "tools/evaluation.py", line 169, in
main()
File "tools/evaluation.py", line 146, in main
test_dataset = Mydataset(test_datas, val_pipeline)
File "C:\ai-project\Awesome-Backbones-main\utils\dataloader.py", line 16, in init
self.pipeline = Compose(self.cfg)
File "C:\ai-project\Awesome-Backbones-main\core\datasets\compose.py", line 23, in init
transform = build_from_cfg(transform, PIPELINES)
File "C:\ai-project\Awesome-Backbones-main\core\datasets\build.py", line 61, in build_from_cfg
f'{obj_type} is not in the {registry.name} registry')
KeyError: 'RandomHorizontalFlip is not in the pipeline registry'
In real production there are often heavily imbalanced datasets; could data augmentation methods such as mixup and balanced-mixup be integrated?
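A minimal sketch of plain mixup in a training step (alpha=0.2 is a common choice; images, targets, model and criterion stand in for this repo's own loop and head-internal loss):

import numpy as np
import torch

def mixup_batch(x, y, alpha=0.2):
    """Blend each image with a randomly chosen partner from the same batch."""
    lam = np.random.beta(alpha, alpha)
    index = torch.randperm(x.size(0), device=x.device)
    return lam * x + (1 - lam) * x[index], y, y[index], lam

mixed, y_a, y_b, lam = mixup_batch(images, targets)
logits = model(mixed)                      # hypothetical plain forward pass
loss = lam * criterion(logits, y_a) + (1 - lam) * criterion(logits, y_b)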
Model info
+-------------------+----------------------+---------------+-----------------+
| Backbone          | Neck                 | Head          | Loss            |
+-------------------+----------------------+---------------+-----------------+
| SwinTransformerV2 | GlobalAveragePooling | LinearClsHead | LabelSmoothLoss |
+-------------------+----------------------+---------------+-----------------+
Initialize the weights.
Loading swinv2_base_patch4_window8_256.pth
The model and loaded state dict do not match exactly
unexpected key in source state_dict: model
missing keys in source state_dict: backbone.patch_embed.projection.weight, backbone.patch_embed.projection.bias, backbone.patch_embed.norm.weight, backbone.patch_embed.norm.bias, backbone.stages.0.blocks.0.attn.w_msa.logit_scale, backbone.stages.0.blocks.0.attn.w_msa.q_bias, backbone.stages.0.blocks.0.attn.w_msa.v_bias, backbone.stages.0.blocks.0.attn.w_msa.relative_coords_table, backbone.stages.0.blocks.0.attn.w_msa.relative_position_index, backbone.stages.0.blocks.0.attn.w_msa.cpb_mlp.0.weight, backbone.stages.0.blocks.0.attn.w_msa.cpb_mlp.0.bias, backbone.stages.0.blocks.0.attn.w_msa.cpb_mlp.2.weight, backbone.stages.0.blocks.0.attn.w_msa.qkv.weight, backbone.stages.0.blocks.0.attn.w_msa.proj.weight, backbone.stages.0.blocks.0.attn.w_msa.proj.bias, backbone.stages.0.blocks.0.norm1.weight, backbone.stages.0.blocks.0.norm1.bias, backbone.stages.0.blocks.0.ffn.layers.0.0.weight, backbone.stages.0.blocks.0.ffn.layers.0.0.bias, backbone.stages.0.blocks.0.ffn.layers.1.weight, backbone.stages.0.blocks.0.ffn.layers.1.bias, backbone.stages.0.blocks.0.norm2.weight, backbone.stages.0.blocks.0.norm2.bias, backbone.stages.0.blocks.1.attn.w_msa.logit_scale, backbone.stages.0.blocks.1.attn.w_msa.q_bias, backbone.stages.0.blocks.1.attn.w_msa.v_bias, backbone.stages.0.blocks.1.attn.w_msa.relative_coords_table, backbone.stages.0.blocks.1.attn.w_msa.relative_position_index, backbone.stages.0.blocks.1.attn.w_msa.cpb_mlp.0.weight, backbone.stages.0.blocks.1.attn.w_msa.cpb_mlp.0.bias, backbone.stages.0.blocks.1.attn.w_msa.cpb_mlp.2.weight, backbone.stages.0.blocks.1.attn.w_msa.qkv.weight, backbone.stages.0.blocks.1.attn.w_msa.proj.weight, backbone.stages.0.blocks.1.attn.w_msa.proj.bias, backbone.stages.0.blocks.1.norm1.weight, backbone.stages.0.blocks.1.norm1.bias, backbone.stages.0.blocks.1.ffn.layers.0.0.weight, backbone.stages.0.blocks.1.ffn.layers.0.0.bias, backbone.stages.0.blocks.1.ffn.layers.1.weight, backbone.stages.0.blocks.1.ffn.layers.1.bias, backbone.stages.0.blocks.1.norm2.weight, backbone.stages.0.blocks.1.norm2.bias, backbone.stages.1.downsample.reduction.weight, backbone.stages.1.downsample.norm.weight, backbone.stages.1.downsample.norm.bias, backbone.stages.1.blocks.0.attn.w_msa.logit_scale, backbone.stages.1.blocks.0.attn.w_msa.q_bias, backbone.stages.1.blocks.0.attn.w_msa.v_bias, backbone.stages.1.blocks.0.attn.w_msa.relative_coords_table, backbone.stages.1.blocks.0.attn.w_msa.relative_position_index, backbone.stages.1.blocks.0.attn.w_msa.cpb_mlp.0.weight, backbone.stages.1.blocks.0.attn.w_msa.cpb_mlp.0.bias, backbone.stages.1.blocks.0.attn.w_msa.cpb_mlp.2.weight, backbone.stages.1.blocks.0.attn.w_msa.qkv.weight, backbone.stages.1.blocks.0.attn.w_msa.proj.weight, backbone.stages.1.blocks.0.attn.w_msa.proj.bias, backbone.stages.1.blocks.0.norm1.weight, backbone.stages.1.blocks.0.norm1.bias, backbone.stages.1.blocks.0.ffn.layers.0.0.weight, backbone.stages.1.blocks.0.ffn.layers.0.0.bias, backbone.stages.1.blocks.0.ffn.layers.1.weight, backbone.stages.1.blocks.0.ffn.layers.1.bias, backbone.stages.1.blocks.0.norm2.weight, backbone.stages.1.blocks.0.norm2.bias, backbone.stages.1.blocks.1.attn.w_msa.logit_scale, backbone.stages.1.blocks.1.attn.w_msa.q_bias, backbone.stages.1.blocks.1.attn.w_msa.v_bias, backbone.stages.1.blocks.1.attn.w_msa.relative_coords_table, backbone.stages.1.blocks.1.attn.w_msa.relative_position_index, backbone.stages.1.blocks.1.attn.w_msa.cpb_mlp.0.weight, backbone.stages.1.blocks.1.attn.w_msa.cpb_mlp.0.bias, backbone.stages.1.blocks.1.attn.w_msa.cpb_mlp.2.weight, 
backbone.stages.1.blocks.1.attn.w_msa.qkv.weight, backbone.stages.1.blocks.1.attn.w_msa.proj.weight, backbone.stages.1.blocks.1.attn.w_msa.proj.bias, backbone.stages.1.blocks.1.norm1.weight, backbone.stages.1.blocks.1.norm1.bias, backbone.stages.1.blocks.1.ffn.layers.0.0.weight, backbone.stages.1.blocks.1.ffn.layers.0.0.bias, backbone.stages.1.blocks.1.ffn.layers.1.weight, backbone.stages.1.blocks.1.ffn.layers.1.bias, backbone.stages.1.blocks.1.norm2.weight, backbone.stages.1.blocks.1.norm2.bias, backbone.stages.2.downsample.reduction.weight, backbone.stages.2.downsample.norm.weight, backbone.stages.2.downsample.norm.bias, backbone.stages.2.blocks.0.attn.w_msa.logit_scale, backbone.stages.2.blocks.0.attn.w_msa.q_bias, backbone.stages.2.blocks.0.attn.w_msa.v_bias, backbone.stages.2.blocks.0.attn.w_msa.relative_coords_table, backbone.stages.2.blocks.0.attn.w_msa.relative_position_index, backbone.stages.2.blocks.0.attn.w_msa.cpb_mlp.0.weight, backbone.stages.2.blocks.0.attn.w_msa.cpb_mlp.0.bias, backbone.stages.2.blocks.0.attn.w_msa.cpb_mlp.2.weight, backbone.stages.2.blocks.0.attn.w_msa.qkv.weight, backbone.stages.2.blocks.0.attn.w_msa.proj.weight, backbone.stages.2.blocks.0.attn.w_msa.proj.bias, backbone.stages.2.blocks.0.norm1.weight, backbone.stages.2.blocks.0.norm1.bias, backbone.stages.2.blocks.0.ffn.layers.0.0.weight, backbone.stages.2.blocks.0.ffn.layers.0.0.bias, backbone.stages.2.blocks.0.ffn.layers.1.weight, backbone.stages.2.blocks.0.ffn.layers.1.bias, backbone.stages.2.blocks.0.norm2.weight, backbone.stages.2.blocks.0.norm2.bias, backbone.stages.2.blocks.1.attn.w_msa.logit_scale, backbone.stages.2.blocks.1.attn.w_msa.q_bias, backbone.stages.2.blocks.1.attn.w_msa.v_bias, backbone.stages.2.blocks.1.attn.w_msa.relative_coords_table, backbone.stages.2.blocks.1.attn.w_msa.relative_position_index, backbone.stages.2.blocks.1.attn.w_msa.cpb_mlp.0.weight, backbone.stages.2.blocks.1.attn.w_msa.cpb_mlp.0.bias, backbone.stages.2.blocks.1.attn.w_msa.cpb_mlp.2.weight, backbone.stages.2.blocks.1.attn.w_msa.qkv.weight, backbone.stages.2.blocks.1.attn.w_msa.proj.weight, backbone.stages.2.blocks.1.attn.w_msa.proj.bias, backbone.stages.2.blocks.1.norm1.weight, backbone.stages.2.blocks.1.norm1.bias, backbone.stages.2.blocks.1.ffn.layers.0.0.weight, backbone.stages.2.blocks.1.ffn.layers.0.0.bias, backbone.stages.2.blocks.1.ffn.layers.1.weight, backbone.stages.2.blocks.1.ffn.layers.1.bias, backbone.stages.2.blocks.1.norm2.weight, backbone.stages.2.blocks.1.norm2.bias, backbone.stages.2.blocks.2.attn.w_msa.logit_scale, backbone.stages.2.blocks.2.attn.w_msa.q_bias, backbone.stages.2.blocks.2.attn.w_msa.v_bias, backbone.stages.2.blocks.2.attn.w_msa.relative_coords_table, backbone.stages.2.blocks.2.attn.w_msa.relative_position_index, backbone.stages.2.blocks.2.attn.w_msa.cpb_mlp.0.weight, backbone.stages.2.blocks.2.attn.w_msa.cpb_mlp.0.bias, backbone.stages.2.blocks.2.attn.w_msa.cpb_mlp.2.weight, backbone.stages.2.blocks.2.attn.w_msa.qkv.weight, backbone.stages.2.blocks.2.attn.w_msa.proj.weight, backbone.stages.2.blocks.2.attn.w_msa.proj.bias, backbone.stages.2.blocks.2.norm1.weight, backbone.stages.2.blocks.2.norm1.bias, backbone.stages.2.blocks.2.ffn.layers.0.0.weight, backbone.stages.2.blocks.2.ffn.layers.0.0.bias, backbone.stages.2.blocks.2.ffn.layers.1.weight, backbone.stages.2.blocks.2.ffn.layers.1.bias, backbone.stages.2.blocks.2.norm2.weight, backbone.stages.2.blocks.2.norm2.bias, backbone.stages.2.blocks.3.attn.w_msa.logit_scale, backbone.stages.2.blocks.3.attn.w_msa.q_bias, 
backbone.stages.2.blocks.3.attn.w_msa.v_bias, backbone.stages.2.blocks.3.attn.w_msa.relative_coords_table, backbone.stages.2.blocks.3.attn.w_msa.relative_position_index, backbone.stages.2.blocks.3.attn.w_msa.cpb_mlp.0.weight, backbone.stages.2.blocks.3.attn.w_msa.cpb_mlp.0.bias, backbone.stages.2.blocks.3.attn.w_msa.cpb_mlp.2.weight, backbone.stages.2.blocks.3.attn.w_msa.qkv.weight, backbone.stages.2.blocks.3.attn.w_msa.proj.weight, backbone.stages.2.blocks.3.attn.w_msa.proj.bias, backbone.stages.2.blocks.3.norm1.weight, backbone.stages.2.blocks.3.norm1.bias, backbone.stages.2.blocks.3.ffn.layers.0.0.weight, backbone.stages.2.blocks.3.ffn.layers.0.0.bias, backbone.stages.2.blocks.3.ffn.layers.1.weight, backbone.stages.2.blocks.3.ffn.layers.1.bias, backbone.stages.2.blocks.3.norm2.weight, backbone.stages.2.blocks.3.norm2.bias, backbone.stages.2.blocks.4.attn.w_msa.logit_scale, backbone.stages.2.blocks.4.attn.w_msa.q_bias, backbone.stages.2.blocks.4.attn.w_msa.v_bias, backbone.stages.2.blocks.4.attn.w_msa.relative_coords_table, backbone.stages.2.blocks.4.attn.w_msa.relative_position_index, backbone.stages.2.blocks.4.attn.w_msa.cpb_mlp.0.weight, backbone.stages.2.blocks.4.attn.w_msa.cpb_mlp.0.bias, backbone.stages.2.blocks.4.attn.w_msa.cpb_mlp.2.weight, backbone.stages.2.blocks.4.attn.w_msa.qkv.weight, backbone.stages.2.blocks.4.attn.w_msa.proj.weight, backbone.stages.2.blocks.4.attn.w_msa.proj.bias, backbone.stages.2.blocks.4.norm1.weight, backbone.stages.2.blocks.4.norm1.bias, backbone.stages.2.blocks.4.ffn.layers.0.0.weight, backbone.stages.2.blocks.4.ffn.layers.0.0.bias, backbone.stages.2.blocks.4.ffn.layers.1.weight, backbone.stages.2.blocks.4.ffn.layers.1.bias, backbone.stages.2.blocks.4.norm2.weight, backbone.stages.2.blocks.4.norm2.bias, backbone.stages.2.blocks.5.attn.w_msa.logit_scale, backbone.stages.2.blocks.5.attn.w_msa.q_bias, backbone.stages.2.blocks.5.attn.w_msa.v_bias, backbone.stages.2.blocks.5.attn.w_msa.relative_coords_table, backbone.stages.2.blocks.5.attn.w_msa.relative_position_index, backbone.stages.2.blocks.5.attn.w_msa.cpb_mlp.0.weight, backbone.stages.2.blocks.5.attn.w_msa.cpb_mlp.0.bias, backbone.stages.2.blocks.5.attn.w_msa.cpb_mlp.2.weight, backbone.stages.2.blocks.5.attn.w_msa.qkv.weight, backbone.stages.2.blocks.5.attn.w_msa.proj.weight, backbone.stages.2.blocks.5.attn.w_msa.proj.bias, backbone.stages.2.blocks.5.norm1.weight, backbone.stages.2.blocks.5.norm1.bias, backbone.stages.2.blocks.5.ffn.layers.0.0.weight, backbone.stages.2.blocks.5.ffn.layers.0.0.bias, backbone.stages.2.blocks.5.ffn.layers.1.weight, backbone.stages.2.blocks.5.ffn.layers.1.bias, backbone.stages.2.blocks.5.norm2.weight, backbone.stages.2.blocks.5.norm2.bias, backbone.stages.2.blocks.6.attn.w_msa.logit_scale, backbone.stages.2.blocks.6.attn.w_msa.q_bias, backbone.stages.2.blocks.6.attn.w_msa.v_bias, backbone.stages.2.blocks.6.attn.w_msa.relative_coords_table, backbone.stages.2.blocks.6.attn.w_msa.relative_position_index, backbone.stages.2.blocks.6.attn.w_msa.cpb_mlp.0.weight, backbone.stages.2.blocks.6.attn.w_msa.cpb_mlp.0.bias, backbone.stages.2.blocks.6.attn.w_msa.cpb_mlp.2.weight, backbone.stages.2.blocks.6.attn.w_msa.qkv.weight, backbone.stages.2.blocks.6.attn.w_msa.proj.weight, backbone.stages.2.blocks.6.attn.w_msa.proj.bias, backbone.stages.2.blocks.6.norm1.weight, backbone.stages.2.blocks.6.norm1.bias, backbone.stages.2.blocks.6.ffn.layers.0.0.weight, backbone.stages.2.blocks.6.ffn.layers.0.0.bias, backbone.stages.2.blocks.6.ffn.layers.1.weight, 
backbone.stages.2.blocks.6.ffn.layers.1.bias, backbone.stages.2.blocks.6.norm2.weight, backbone.stages.2.blocks.6.norm2.bias, backbone.stages.2.blocks.7.attn.w_msa.logit_scale, backbone.stages.2.blocks.7.attn.w_msa.q_bias, backbone.stages.2.blocks.7.attn.w_msa.v_bias, backbone.stages.2.blocks.7.attn.w_msa.relative_coords_table, backbone.stages.2.blocks.7.attn.w_msa.relative_position_index, backbone.stages.2.blocks.7.attn.w_msa.cpb_mlp.0.weight, backbone.stages.2.blocks.7.attn.w_msa.cpb_mlp.0.bias, backbone.stages.2.blocks.7.attn.w_msa.cpb_mlp.2.weight, backbone.stages.2.blocks.7.attn.w_msa.qkv.weight, backbone.stages.2.blocks.7.attn.w_msa.proj.weight, backbone.stages.2.blocks.7.attn.w_msa.proj.bias, backbone.stages.2.blocks.7.norm1.weight, backbone.stages.2.blocks.7.norm1.bias, backbone.stages.2.blocks.7.ffn.layers.0.0.weight, backbone.stages.2.blocks.7.ffn.layers.0.0.bias, backbone.stages.2.blocks.7.ffn.layers.1.weight, backbone.stages.2.blocks.7.ffn.layers.1.bias, backbone.stages.2.blocks.7.norm2.weight, backbone.stages.2.blocks.7.norm2.bias, backbone.stages.2.blocks.8.attn.w_msa.logit_scale, backbone.stages.2.blocks.8.attn.w_msa.q_bias, backbone.stages.2.blocks.8.attn.w_msa.v_bias, backbone.stages.2.blocks.8.attn.w_msa.relative_coords_table, backbone.stages.2.blocks.8.attn.w_msa.relative_position_index, backbone.stages.2.blocks.8.attn.w_msa.cpb_mlp.0.weight, backbone.stages.2.blocks.8.attn.w_msa.cpb_mlp.0.bias, backbone.stages.2.blocks.8.attn.w_msa.cpb_mlp.2.weight, backbone.stages.2.blocks.8.attn.w_msa.qkv.weight, backbone.stages.2.blocks.8.attn.w_msa.proj.weight, backbone.stages.2.blocks.8.attn.w_msa.proj.bias, backbone.stages.2.blocks.8.norm1.weight, backbone.stages.2.blocks.8.norm1.bias, backbone.stages.2.blocks.8.ffn.layers.0.0.weight, backbone.stages.2.blocks.8.ffn.layers.0.0.bias, backbone.stages.2.blocks.8.ffn.layers.1.weight, backbone.stages.2.blocks.8.ffn.layers.1.bias, backbone.stages.2.blocks.8.norm2.weight, backbone.stages.2.blocks.8.norm2.bias, backbone.stages.2.blocks.9.attn.w_msa.logit_scale, backbone.stages.2.blocks.9.attn.w_msa.q_bias, backbone.stages.2.blocks.9.attn.w_msa.v_bias, backbone.stages.2.blocks.9.attn.w_msa.relative_coords_table, backbone.stages.2.blocks.9.attn.w_msa.relative_position_index, backbone.stages.2.blocks.9.attn.w_msa.cpb_mlp.0.weight, backbone.stages.2.blocks.9.attn.w_msa.cpb_mlp.0.bias, backbone.stages.2.blocks.9.attn.w_msa.cpb_mlp.2.weight, backbone.stages.2.blocks.9.attn.w_msa.qkv.weight, backbone.stages.2.blocks.9.attn.w_msa.proj.weight, backbone.stages.2.blocks.9.attn.w_msa.proj.bias, backbone.stages.2.blocks.9.norm1.weight, backbone.stages.2.blocks.9.norm1.bias, backbone.stages.2.blocks.9.ffn.layers.0.0.weight, backbone.stages.2.blocks.9.ffn.layers.0.0.bias, backbone.stages.2.blocks.9.ffn.layers.1.weight, backbone.stages.2.blocks.9.ffn.layers.1.bias, backbone.stages.2.blocks.9.norm2.weight, backbone.stages.2.blocks.9.norm2.bias, backbone.stages.2.blocks.10.attn.w_msa.logit_scale, backbone.stages.2.blocks.10.attn.w_msa.q_bias, backbone.stages.2.blocks.10.attn.w_msa.v_bias, backbone.stages.2.blocks.10.attn.w_msa.relative_coords_table, backbone.stages.2.blocks.10.attn.w_msa.relative_position_index, backbone.stages.2.blocks.10.attn.w_msa.cpb_mlp.0.weight, backbone.stages.2.blocks.10.attn.w_msa.cpb_mlp.0.bias, backbone.stages.2.blocks.10.attn.w_msa.cpb_mlp.2.weight, backbone.stages.2.blocks.10.attn.w_msa.qkv.weight, backbone.stages.2.blocks.10.attn.w_msa.proj.weight, backbone.stages.2.blocks.10.attn.w_msa.proj.bias, 
backbone.stages.2.blocks.10.norm1.weight, backbone.stages.2.blocks.10.norm1.bias, backbone.stages.2.blocks.10.ffn.layers.0.0.weight, backbone.stages.2.blocks.10.ffn.layers.0.0.bias, backbone.stages.2.blocks.10.ffn.layers.1.weight, backbone.stages.2.blocks.10.ffn.layers.1.bias, backbone.stages.2.blocks.10.norm2.weight, backbone.stages.2.blocks.10.norm2.bias, backbone.stages.2.blocks.11.attn.w_msa.logit_scale, backbone.stages.2.blocks.11.attn.w_msa.q_bias, backbone.stages.2.blocks.11.attn.w_msa.v_bias, backbone.stages.2.blocks.11.attn.w_msa.relative_coords_table, backbone.stages.2.blocks.11.attn.w_msa.relative_position_index, backbone.stages.2.blocks.11.attn.w_msa.cpb_mlp.0.weight, backbone.stages.2.blocks.11.attn.w_msa.cpb_mlp.0.bias, backbone.stages.2.blocks.11.attn.w_msa.cpb_mlp.2.weight, backbone.stages.2.blocks.11.attn.w_msa.qkv.weight, backbone.stages.2.blocks.11.attn.w_msa.proj.weight, backbone.stages.2.blocks.11.attn.w_msa.proj.bias, backbone.stages.2.blocks.11.norm1.weight, backbone.stages.2.blocks.11.norm1.bias, backbone.stages.2.blocks.11.ffn.layers.0.0.weight, backbone.stages.2.blocks.11.ffn.layers.0.0.bias, backbone.stages.2.blocks.11.ffn.layers.1.weight, backbone.stages.2.blocks.11.ffn.layers.1.bias, backbone.stages.2.blocks.11.norm2.weight, backbone.stages.2.blocks.11.norm2.bias, backbone.stages.2.blocks.12.attn.w_msa.logit_scale, backbone.stages.2.blocks.12.attn.w_msa.q_bias, backbone.stages.2.blocks.12.attn.w_msa.v_bias, backbone.stages.2.blocks.12.attn.w_msa.relative_coords_table, backbone.stages.2.blocks.12.attn.w_msa.relative_position_index, backbone.stages.2.blocks.12.attn.w_msa.cpb_mlp.0.weight, backbone.stages.2.blocks.12.attn.w_msa.cpb_mlp.0.bias, backbone.stages.2.blocks.12.attn.w_msa.cpb_mlp.2.weight, backbone.stages.2.blocks.12.attn.w_msa.qkv.weight, backbone.stages.2.blocks.12.attn.w_msa.proj.weight, backbone.stages.2.blocks.12.attn.w_msa.proj.bias, backbone.stages.2.blocks.12.norm1.weight, backbone.stages.2.blocks.12.norm1.bias, backbone.stages.2.blocks.12.ffn.layers.0.0.weight, backbone.stages.2.blocks.12.ffn.layers.0.0.bias, backbone.stages.2.blocks.12.ffn.layers.1.weight, backbone.stages.2.blocks.12.ffn.layers.1.bias, backbone.stages.2.blocks.12.norm2.weight, backbone.stages.2.blocks.12.norm2.bias, backbone.stages.2.blocks.13.attn.w_msa.logit_scale, backbone.stages.2.blocks.13.attn.w_msa.q_bias, backbone.stages.2.blocks.13.attn.w_msa.v_bias, backbone.stages.2.blocks.13.attn.w_msa.relative_coords_table, backbone.stages.2.blocks.13.attn.w_msa.relative_position_index, backbone.stages.2.blocks.13.attn.w_msa.cpb_mlp.0.weight, backbone.stages.2.blocks.13.attn.w_msa.cpb_mlp.0.bias, backbone.stages.2.blocks.13.attn.w_msa.cpb_mlp.2.weight, backbone.stages.2.blocks.13.attn.w_msa.qkv.weight, backbone.stages.2.blocks.13.attn.w_msa.proj.weight, backbone.stages.2.blocks.13.attn.w_msa.proj.bias, backbone.stages.2.blocks.13.norm1.weight, backbone.stages.2.blocks.13.norm1.bias, backbone.stages.2.blocks.13.ffn.layers.0.0.weight, backbone.stages.2.blocks.13.ffn.layers.0.0.bias, backbone.stages.2.blocks.13.ffn.layers.1.weight, backbone.stages.2.blocks.13.ffn.layers.1.bias, backbone.stages.2.blocks.13.norm2.weight, backbone.stages.2.blocks.13.norm2.bias, backbone.stages.2.blocks.14.attn.w_msa.logit_scale, backbone.stages.2.blocks.14.attn.w_msa.q_bias, backbone.stages.2.blocks.14.attn.w_msa.v_bias, backbone.stages.2.blocks.14.attn.w_msa.relative_coords_table, backbone.stages.2.blocks.14.attn.w_msa.relative_position_index, backbone.stages.2.blocks.14.attn.w_msa.cpb_mlp.0.weight, 
backbone.stages.2.blocks.14.attn.w_msa.cpb_mlp.0.bias, backbone.stages.2.blocks.14.attn.w_msa.cpb_mlp.2.weight, backbone.stages.2.blocks.14.attn.w_msa.qkv.weight, backbone.stages.2.blocks.14.attn.w_msa.proj.weight, backbone.stages.2.blocks.14.attn.w_msa.proj.bias, backbone.stages.2.blocks.14.norm1.weight, backbone.stages.2.blocks.14.norm1.bias, backbone.stages.2.blocks.14.ffn.layers.0.0.weight, backbone.stages.2.blocks.14.ffn.layers.0.0.bias, backbone.stages.2.blocks.14.ffn.layers.1.weight, backbone.stages.2.blocks.14.ffn.layers.1.bias, backbone.stages.2.blocks.14.norm2.weight, backbone.stages.2.blocks.14.norm2.bias, backbone.stages.2.blocks.15.attn.w_msa.logit_scale, backbone.stages.2.blocks.15.attn.w_msa.q_bias, backbone.stages.2.blocks.15.attn.w_msa.v_bias, backbone.stages.2.blocks.15.attn.w_msa.relative_coords_table, backbone.stages.2.blocks.15.attn.w_msa.relative_position_index, backbone.stages.2.blocks.15.attn.w_msa.cpb_mlp.0.weight, backbone.stages.2.blocks.15.attn.w_msa.cpb_mlp.0.bias, backbone.stages.2.blocks.15.attn.w_msa.cpb_mlp.2.weight, backbone.stages.2.blocks.15.attn.w_msa.qkv.weight, backbone.stages.2.blocks.15.attn.w_msa.proj.weight, backbone.stages.2.blocks.15.attn.w_msa.proj.bias, backbone.stages.2.blocks.15.norm1.weight, backbone.stages.2.blocks.15.norm1.bias, backbone.stages.2.blocks.15.ffn.layers.0.0.weight, backbone.stages.2.blocks.15.ffn.layers.0.0.bias, backbone.stages.2.blocks.15.ffn.layers.1.weight, backbone.stages.2.blocks.15.ffn.layers.1.bias, backbone.stages.2.blocks.15.norm2.weight, backbone.stages.2.blocks.15.norm2.bias, backbone.stages.2.blocks.16.attn.w_msa.logit_scale, backbone.stages.2.blocks.16.attn.w_msa.q_bias, backbone.stages.2.blocks.16.attn.w_msa.v_bias, backbone.stages.2.blocks.16.attn.w_msa.relative_coords_table, backbone.stages.2.blocks.16.attn.w_msa.relative_position_index, backbone.stages.2.blocks.16.attn.w_msa.cpb_mlp.0.weight, backbone.stages.2.blocks.16.attn.w_msa.cpb_mlp.0.bias, backbone.stages.2.blocks.16.attn.w_msa.cpb_mlp.2.weight, backbone.stages.2.blocks.16.attn.w_msa.qkv.weight, backbone.stages.2.blocks.16.attn.w_msa.proj.weight, backbone.stages.2.blocks.16.attn.w_msa.proj.bias, backbone.stages.2.blocks.16.norm1.weight, backbone.stages.2.blocks.16.norm1.bias, backbone.stages.2.blocks.16.ffn.layers.0.0.weight, backbone.stages.2.blocks.16.ffn.layers.0.0.bias, backbone.stages.2.blocks.16.ffn.layers.1.weight, backbone.stages.2.blocks.16.ffn.layers.1.bias, backbone.stages.2.blocks.16.norm2.weight, backbone.stages.2.blocks.16.norm2.bias, backbone.stages.2.blocks.17.attn.w_msa.logit_scale, backbone.stages.2.blocks.17.attn.w_msa.q_bias, backbone.stages.2.blocks.17.attn.w_msa.v_bias, backbone.stages.2.blocks.17.attn.w_msa.relative_coords_table, backbone.stages.2.blocks.17.attn.w_msa.relative_position_index, backbone.stages.2.blocks.17.attn.w_msa.cpb_mlp.0.weight, backbone.stages.2.blocks.17.attn.w_msa.cpb_mlp.0.bias, backbone.stages.2.blocks.17.attn.w_msa.cpb_mlp.2.weight, backbone.stages.2.blocks.17.attn.w_msa.qkv.weight, backbone.stages.2.blocks.17.attn.w_msa.proj.weight, backbone.stages.2.blocks.17.attn.w_msa.proj.bias, backbone.stages.2.blocks.17.norm1.weight, backbone.stages.2.blocks.17.norm1.bias, backbone.stages.2.blocks.17.ffn.layers.0.0.weight, backbone.stages.2.blocks.17.ffn.layers.0.0.bias, backbone.stages.2.blocks.17.ffn.layers.1.weight, backbone.stages.2.blocks.17.ffn.layers.1.bias, backbone.stages.2.blocks.17.norm2.weight, backbone.stages.2.blocks.17.norm2.bias, backbone.stages.3.downsample.reduction.weight, 
backbone.stages.3.downsample.norm.weight, backbone.stages.3.downsample.norm.bias, backbone.stages.3.blocks.0.attn.w_msa.logit_scale, backbone.stages.3.blocks.0.attn.w_msa.q_bias, backbone.stages.3.blocks.0.attn.w_msa.v_bias, backbone.stages.3.blocks.0.attn.w_msa.relative_coords_table, backbone.stages.3.blocks.0.attn.w_msa.relative_position_index, backbone.stages.3.blocks.0.attn.w_msa.cpb_mlp.0.weight, backbone.stages.3.blocks.0.attn.w_msa.cpb_mlp.0.bias, backbone.stages.3.blocks.0.attn.w_msa.cpb_mlp.2.weight, backbone.stages.3.blocks.0.attn.w_msa.qkv.weight, backbone.stages.3.blocks.0.attn.w_msa.proj.weight, backbone.stages.3.blocks.0.attn.w_msa.proj.bias, backbone.stages.3.blocks.0.norm1.weight, backbone.stages.3.blocks.0.norm1.bias, backbone.stages.3.blocks.0.ffn.layers.0.0.weight, backbone.stages.3.blocks.0.ffn.layers.0.0.bias, backbone.stages.3.blocks.0.ffn.layers.1.weight, backbone.stages.3.blocks.0.ffn.layers.1.bias, backbone.stages.3.blocks.0.norm2.weight, backbone.stages.3.blocks.0.norm2.bias, backbone.stages.3.blocks.1.attn.w_msa.logit_scale, backbone.stages.3.blocks.1.attn.w_msa.q_bias, backbone.stages.3.blocks.1.attn.w_msa.v_bias, backbone.stages.3.blocks.1.attn.w_msa.relative_coords_table, backbone.stages.3.blocks.1.attn.w_msa.relative_position_index, backbone.stages.3.blocks.1.attn.w_msa.cpb_mlp.0.weight, backbone.stages.3.blocks.1.attn.w_msa.cpb_mlp.0.bias, backbone.stages.3.blocks.1.attn.w_msa.cpb_mlp.2.weight, backbone.stages.3.blocks.1.attn.w_msa.qkv.weight, backbone.stages.3.blocks.1.attn.w_msa.proj.weight, backbone.stages.3.blocks.1.attn.w_msa.proj.bias, backbone.stages.3.blocks.1.norm1.weight, backbone.stages.3.blocks.1.norm1.bias, backbone.stages.3.blocks.1.ffn.layers.0.0.weight, backbone.stages.3.blocks.1.ffn.layers.0.0.bias, backbone.stages.3.blocks.1.ffn.layers.1.weight, backbone.stages.3.blocks.1.ffn.layers.1.bias, backbone.stages.3.blocks.1.norm2.weight, backbone.stages.3.blocks.1.norm2.bias, backbone.norm3.weight, backbone.norm3.bias, head.fc.weight, head.fc.bias
/home/user/miniconda3/envs/wmh/lib/python3.8/site-packages/torch/functional.py:445: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:2157.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
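The line "unexpected key in source state_dict: model" suggests the downloaded checkpoint nests all of its weights under a top-level 'model' key, which makes every real key look missing. A hedged unwrapping step before loading:

import torch

ckpt = torch.load('swinv2_base_patch4_window8_256.pth', map_location='cpu')
state_dict = ckpt.get('model', ckpt)   # unwrap if the weights are nested under 'model'
model.load_state_dict(state_dict, strict=False)

Even after unwrapping, the official SwinV2 key names may still need a "backbone." prefix remap to match the names this repo expects.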
Hello, I'd like to ask: were the pretrained weights provided in the repository obtained by pretraining on ImageNet?
The current source code only supports single-GPU training, but I'd like to speed up training and model generation; how can I train on multiple GPUs?
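The lightest-touch option, if the rest of the loop stays unchanged, is torch.nn.DataParallel (DistributedDataParallel is faster but needs launcher changes); a minimal sketch:

import torch

if torch.cuda.device_count() > 1:
    model = torch.nn.DataParallel(model)   # splits each batch across visible GPUs
model = model.cuda()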
Originally posted by @joanna63 in #1 (comment)
Hello author, I have recently been running experiments with this project. Where can I modify the input image size? My images are self-made, e.g. 39×189, while the model's default input is 224×224. Calling it directly works, but it seems the images get resized to 224×224; I would like to use the original image size as the input.
Hello, I made my own dataset with 12 classes, but why do I get this error? Transfer learning has already been set to true.
May I ask which file creates this real-time plot? (I'd like to know the axes, colors and other details.)
I want to export the output to a file but don't know where to start.
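A hedged, framework-agnostic way to mirror everything printed to the console into a file (drop it in near the top of the training script; 'train.log' is a hypothetical path):

import sys

class Tee:
    """Write everything to the real stdout and to a log file."""
    def __init__(self, path):
        self.file = open(path, 'a')
        self.stdout = sys.stdout

    def write(self, data):
        self.stdout.write(data)
        self.file.write(data)

    def flush(self):
        self.stdout.flush()
        self.file.flush()

sys.stdout = Tee('train.log')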