bubbliiiing / yolov4-tiny-pytorch
A YoloV4-tiny PyTorch implementation that can be used to train your own models.
License: MIT License
Hi, why do I only get about 10 FPS when testing this yolov4-tiny? I also tested your yolov3-pytorch, yolov4-pytorch, and ssd-pytorch reimplementations, and they all run at roughly 10 FPS as well. I tried both a laptop (i5-7300HQ + GTX 1050) and a desktop (i5-9600K + RTX 2080), and the FPS is around 10 on both. What could be going on?
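A uniform ~10 FPS across very different GPUs usually points to a bottleneck outside the network (image decoding, pre/post-processing, or display), not the forward pass itself. As a sanity check, one could time the bare forward pass in isolation; this is a minimal sketch with a synthetic input, not the repo's own FPS test:

```python
import time
import torch

def measure_fps(model, input_size=(1, 3, 416, 416), device="cuda", n_runs=50):
    """Rough FPS of the bare forward pass, excluding pre/post-processing."""
    model = model.to(device).eval()
    x = torch.randn(*input_size, device=device)
    with torch.no_grad():
        for _ in range(10):              # warm-up iterations
            model(x)
        if device == "cuda":
            torch.cuda.synchronize()     # flush queued GPU work before timing
        start = time.time()
        for _ in range(n_runs):
            model(x)
        if device == "cuda":
            torch.cuda.synchronize()
    return n_runs / (time.time() - start)
```

If this number is far above 10 while the end-to-end script still shows ~10 FPS, the time is being spent outside the model.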
During training I noticed that the targets are kept on the CPU, i.e. the following code:

    if cuda:
        images = Variable(torch.from_numpy(images).type(torch.FloatTensor)).cuda()
        targets = [Variable(torch.from_numpy(ann).type(torch.FloatTensor)) for ann in targets]

But when I also add .cuda() to the targets, the loss computation raises an error during training, i.e. after changing the code to:

    targets = [Variable(torch.from_numpy(ann).type(torch.FloatTensor).cuda()) for ann in targets]

Could you take a look? Thanks a lot.
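For reference, a device-mismatch error here typically means the loss function builds some of its tensors on the CPU internally, so either the targets must stay on the CPU until the loss moves them, or everything must be moved to one device consistently. A minimal sketch of the consistent-device approach (the helper name is hypothetical, not from the repo):

```python
import numpy as np
import torch

def to_device(images, targets, device):
    """Move the image batch and each per-image target array to one device.
    If the loss still mixes in CPU/numpy tensors internally, it must also
    call .to(pred.device) on whatever it builds, otherwise CPU and CUDA
    tensors meet in one expression and PyTorch raises a device error."""
    images = torch.from_numpy(images).float().to(device)
    targets = [torch.from_numpy(t).float().to(device) for t in targets]
    return images, targets
```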
Hello, how can I get the parameter count of YOLOv4-tiny? Something like the approach in the link below: what should "model" in that code be replaced with, and in which file should I add it? Thanks.
https://stackoverflow.com/questions/49201236/check-the-total-number-of-parameters-in-a-pytorch-model
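The Stack Overflow approach applies to any PyTorch model: "model" is simply the constructed network (in this repo, plausibly the YoloBody instance, e.g. self.net after generate() in yolo.py). A minimal helper, as a sketch:

```python
import torch.nn as nn

def count_parameters(model: nn.Module):
    """Return (total, trainable) parameter counts of a PyTorch module."""
    total = sum(p.numel() for p in model.parameters())
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    return total, trainable
```

Calling it right after the network is built prints the counts before any weights are frozen.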
Both yolo.py and get_dr_txt.py have confidence and iou parameters. If I want the prediction results to match the mAP computation, should the confidence and iou values in the two scripts be identical?
Hi, I plan to detect pedestrians only. I read a paper based on yolov3 that modified the loss by simply deleting the classification loss, leaving everything else unchanged. I tried it, and the resulting pedestrian confidences were generally lower than when the classification loss was kept. So for single-class detection, should the classification loss be removed? And how should the BCE loss be interpreted in that case (is it just background vs. target, or something else)?
A question for the experts:
Does the nets folder contain the complete yolov4-tiny network structure, i.e. can nets plus (trained) weights be used directly?
If so, what role does the yolo.py file play?
Thanks
Hello, when running get_dr_txt.py I get the following error: RuntimeError: Error(s) in loading state_dict for YoloBody:
Unexpected key(s) in state_dict: "feat1_att.channelattention.fc1.weight", "feat1_att.channelattention.fc2.weight", "feat1_att.spatialattention.conv1.weight", "feat2_att.channelattention.fc1.weight", "feat2_att.channelattention.fc2.weight", "feat2_att.spatialattention.conv1.weight", "upsample_att.channelattention.fc1.weight", "upsample_att.channelattention.fc2.weight", "upsample_att.spatialattention.conv1.weight".
How should I modify the code?
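The extra feat*_att/upsample_att keys look like attention modules (CBAM-style channel plus spatial attention), which suggests the checkpoint was saved from a model variant with attention enabled; constructing the model with the matching attention setting is the clean fix. As a workaround, a loader can keep only the entries whose names and shapes match the current model (a generic sketch, not code from the repo):

```python
import torch

def load_matching_weights(model, weights_path, device="cpu"):
    """Load only checkpoint entries whose name and shape match the model.
    Mismatched keys (e.g. attention layers from a different model variant)
    are silently skipped instead of raising a RuntimeError."""
    model_dict = model.state_dict()
    ckpt = torch.load(weights_path, map_location=device)
    matched = {k: v for k, v in ckpt.items()
               if k in model_dict and v.shape == model_dict[k].shape}
    model_dict.update(matched)
    model.load_state_dict(model_dict)
    return matched
```

Note that skipped layers are left at their random initialization, so this is only appropriate for transferring partial weights, not for inference with a mismatched checkpoint.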
Hello, a question: I'm training the model on CPU, and once training reaches the unfreezing stage, memory usage keeps climbing; before a single epoch finishes, RAM plus swap already occupy about 50 GB. What could be the cause?
I used the provided pre-trained model yolov4_tiny_weights_coco.pth and evaluated it on COCO val2017, but the mAP (0.5:0.95) only reached 16.67%. Could you please provide a get_gt_txt.py for the COCO dataset?
Hi, I have trained both your PyTorch and Keras yolov4-tiny implementations on the same dataset (9000 fish images) with the same training strategy (mosaic, label smoothing, cosine annealing, same batch size and epochs). The final mAP was 45% with Keras but only 32% with PyTorch. When you trained on the VOC dataset with PyTorch and Keras, did you ever see such a large mAP gap?
Hello, when freezing parameters for training, shouldn't the optimizer's parameter list be filtered? For example:
optimizer = optim.Adam(filter(lambda p: p.requires_grad, model.parameters()), lr=0.001)
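For what it's worth, passing all parameters still trains correctly, since frozen parameters receive no gradients; filtering mainly keeps optimizer state (e.g. Adam moment buffers) off the frozen weights. A minimal sketch of freezing an assumed `backbone` submodule (names hypothetical):

```python
import torch
import torch.nn as nn

def freeze_backbone(model: nn.Module, freeze: bool):
    """Toggle requires_grad on the model's `backbone` submodule
    (assumes the model exposes one, as this repo's YoloBody does)."""
    for p in model.backbone.parameters():
        p.requires_grad = not freeze

# With frozen layers excluded, the optimizer never allocates state for them:
# optimizer = torch.optim.Adam(
#     filter(lambda p: p.requires_grad, model.parameters()), lr=1e-3)
```

If layers are unfrozen mid-training, the optimizer must be rebuilt (or the new parameter group added) so the newly trainable weights are actually updated.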
/home/juling/anaconda3/envs/pythonProject/bin/python /home/juling/PycharmProjects/pythonProject/yolov4-tiny/predict.py
Loading weights into state dict...
Traceback (most recent call last):
File "/home/juling/PycharmProjects/pythonProject/yolov4-tiny/predict.py", line 16, in <module>
yolo = YOLO()
File "/home/juling/PycharmProjects/pythonProject/yolov4-tiny/yolo.py", line 56, in __init__
self.generate()
File "/home/juling/PycharmProjects/pythonProject/yolov4-tiny/yolo.py", line 94, in generate
self.net.load_state_dict(state_dict)
File "/home/juling/anaconda3/envs/pythonProject/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1223, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for YoloBody:
Unexpected key(s) in state_dict: "backbone.resblock_body2.conv1.conv.weight", "backbone.resblock_body2.conv1.bn.weight", "backbone.resblock_body2.conv1.bn.bias", "backbone.resblock_body2.conv1.bn.running_mean", "backbone.resblock_body2.conv1.bn.running_var", "backbone.resblock_body2.conv1.bn.num_batches_tracked", "backbone.resblock_body2.conv2.conv.weight", "backbone.resblock_body2.conv2.bn.weight", "backbone.resblock_body2.conv2.bn.bias", "backbone.resblock_body2.conv2.bn.running_mean", "backbone.resblock_body2.conv2.bn.running_var", "backbone.resblock_body2.conv2.bn.num_batches_tracked", "backbone.resblock_body2.conv3.conv.weight", "backbone.resblock_body2.conv3.bn.weight", "backbone.resblock_body2.conv3.bn.bias", "backbone.resblock_body2.conv3.bn.running_mean", "backbone.resblock_body2.conv3.bn.running_var", "backbone.resblock_body2.conv3.bn.num_batches_tracked", "backbone.resblock_body2.conv4.conv.weight", "backbone.resblock_body2.conv4.bn.weight", "backbone.resblock_body2.conv4.bn.bias", "backbone.resblock_body2.conv4.bn.running_mean", "backbone.resblock_body2.conv4.bn.running_var", "backbone.resblock_body2.conv4.bn.num_batches_tracked".
Process finished with exit code 1
Do you know why this happens? I've been stuck on it for quite a while.
I'm a beginner: how do I compute the mAP of a trained model with get_map.py?
A question about this line in the code:
anchor_index = [[3, 4, 5], [1, 2, 3]][self.feature_length.index(in_w)]
Why isn't it [[3, 4, 5], [0, 1, 2]]?
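For context, the official darknet yolov4-tiny.cfg also appears to use mask = 1,2,3 (not 0,1,2) for the 26x26 head, leaving anchor 0 unused, so the code seems to mirror the reference configuration rather than contain a typo. The indexing logic itself can be sketched like this (names simplified from the repo):

```python
def select_anchor_index(in_w, feature_length=(13, 26),
                        masks=((3, 4, 5), (1, 2, 3))):
    """Pick the anchor mask for the detection head whose feature-map
    width is in_w: the 13x13 head gets the three largest anchors, the
    26x26 head gets anchors 1-3, matching the darknet cfg convention."""
    return masks[list(feature_length).index(in_w)]
```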
Hi,
the resblock in yolov4-tiny has a channel-halving (split) operation. When converting to Caffe, is there a layer that can implement this, or a combination of layers, ideally without modifying the Caffe source? Many thanks.
I'm running yolov4-tiny on an embedded board (Rock Pi X), doing real-time detection from a camera, but it only reaches about 1.5 FPS. Is this board simply unsuited to deep learning?
Is there an even smaller network you'd recommend? And would classical methods such as SVM also be demanding on hardware?
416x416, roughly 7000+ images, 1660 Ti
In def fit_one_epoch(net, yolo_losses, epoch, epoch_size, epoch_size_val, gen, genval, Epoch, cuda): the yolo_losses parameter is never used later; yolo_loss is used instead. Is this a bug?
Could you also add mAP 0.5:0.95 results? Thanks!
Hello, is anchor_index = [[3,4,5],[1,2,3]][self.feature_length.index(in_w)] a typo, or is [1,2,3] (rather than [0,1,2]) intentional?
Hello, I plan to add some new attention mechanisms in attention.py and select them via phi in train.py. Could you explain the control flow by which phi in train.py selects SE or CBAM, and how I should hook up my own attention mechanism?
416x416, roughly 7000+ images, 1660 Ti
In dataloader.py, inside the __getitem__ function, after the mosaic check there is a final line self.flag = bool(1 - self.flag). What is the purpose of this flag?
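One plausible reading of that line: the flag flips on every call, so mosaic augmentation is applied to roughly every other sample rather than to all of them. A minimal sketch of the pattern (not the repo's actual class):

```python
class AlternatingFlag:
    """Sketch of the `self.flag = bool(1 - self.flag)` idiom: the flag
    toggles on each __getitem__-style call, so when mosaic is enabled
    about half the samples are built with it and half without."""
    def __init__(self):
        self.flag = True

    def use_mosaic(self):
        use = self.flag
        self.flag = bool(1 - self.flag)  # flip for the next sample
        return use
```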
Hi, I have trained yolov4-tiny using the darknet repo, so I now have .weights and .cfg files. Two questions:
1. How can I calculate per-class precision and recall using the .weights file? Do I need to convert .weights to .pt first, and if so, how can I convert it?
2. Can I convert .weights to Caffe format?
With the author's YOLOv4 I got val_loss = 1.7 and mAP = 98.1%; today I trained tiny on the same dataset with the same training parameters, and the final mAP was 0. I can't figure out what went wrong.
Hello, to change the number of training epochs, do I modify the two parameters Freeze_Epoch = 50 and Unfreeze_Epoch = 100? Do the two values need to keep a 2x ratio?
The YOLOv4 model detects almost everything, but yolov4-tiny only detects about 70% of the objects; CPU FPS reaches 6-8.
Hello, I trained on my own dataset with 10 object classes and obtained a model that detects all 10. I then filtered out all images containing one particular class and retrained a single-class detector on them. Why is the single-class model's AP lower than the corresponding class's AP in the 10-class model?
I raised batch_size sharply, from 1 straight to 128, yet my two RTX 3090s sit half-idle using only 6 GB of VRAM, while CPU usage climbed noticeably. The time per batch changed considerably as well, but converted per image, the training speed barely improved.
I'd like to train this project on COCO; could you offer any guidance?
I'm confused about the mask index; should it be
anchor_index = [[3,4,5],[0,1,2]][self.feature_length.index(in_w)] ?
Original anchors [10,14, 23,27, 37,58, 81,82, 135,169, 344,319]: acc 62%.
After clustering [33,51, 49,41, 53,154, 70,65, 98,119, 195,261]: acc 70%, so accuracy improved, but mAP actually dropped.
I only changed the anchors in model_data/yolo_anchors.txt; when switching to anchors computed for my own dataset, does any other code need to change?
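For background, anchors like these are usually derived by IoU-based k-means over the dataset's box widths and heights, and besides yolo_anchors.txt the input size and anchor masks generally have to stay consistent with the new anchors' size ordering. A minimal numpy sketch of the clustering step (not the repo's own script):

```python
import numpy as np

def kmeans_anchors(boxes, k=6, iters=100, seed=0):
    """IoU-based k-means over (w, h) box sizes; returns k anchors sorted
    by area. `boxes` is an (N, 2) array of widths and heights."""
    rng = np.random.default_rng(seed)
    anchors = boxes[rng.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        # IoU between every box and every anchor, both centred at origin
        inter = (np.minimum(boxes[:, None, 0], anchors[None, :, 0]) *
                 np.minimum(boxes[:, None, 1], anchors[None, :, 1]))
        union = (boxes[:, None, 0] * boxes[:, None, 1] +
                 anchors[None, :, 0] * anchors[None, :, 1] - inter)
        assign = np.argmax(inter / union, axis=1)   # nearest by 1 - IoU
        new = np.array([boxes[assign == i].mean(axis=0) if np.any(assign == i)
                        else anchors[i] for i in range(k)])
        if np.allclose(new, anchors):
            break
        anchors = new
    return anchors[np.argsort(anchors.prod(axis=1))]
```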
https://github.com/TymonXie/tymon
This framework is in its early stages and aims to be a zero-barrier interactive AI framework (address above). If you're interested, feel free to discuss.
A question: in both the training and test code, anchor_index = [[3,4,5],[1,2,3]][self.feature_length.index(in_w)] and self.anchors_mask = [[3,4,5],[1,2,3]].
Why is anchor group 0 never used? Could this be the reason the accuracy on small objects is not great?
GPU (RTX 2070S), 608x608 images. As per the title.
yolov4-tiny-pytorch/nets/yolo4_tiny.py
Line 77 in 53851c5
P4 = torch.cat([feat1,P5_Upsample],axis=1) -> P4 = torch.cat([P5_Upsample,feat1],axis=1)
Hello, my dataset has few classes (currently only two) and only one or two objects per image. For such a simple dataset, YOLOv4-tiny seems sufficient, but so far I'm only using the original model without any novel work. Given a relatively simple dataset like this, which directions for modifying YOLOv4-tiny are likely to yield improvements?
If I modify the backbone and neck of yolov4-tiny and can no longer use the pre-trained weights from COCO or VOC, what should I do? Thanks!
Is there a cfg file? Without one, how can the model be used on an NVIDIA Jetson embedded platform? Converting to a TensorRT engine seems to require a cfg file. Please help, thanks!
Hello, I want to train on my own dataset with your project and deploy to an NVIDIA Xavier, but it's still a bit slow. Have you considered converting the model to TensorRT, or perhaps publishing a tutorial on it?
In yolo.py, line 110: is self.anchors_mask = [[3,4,5],[1,2,3]] a bug? Should it be self.anchors_mask = [[3,4,5],[0,1,2]]? The same question applies to nets/yolo_training.py, lines 231 and 359: should anchor_index = [[3,4,5],[1,2,3]][self.feature_length.index(in_w)] be changed to anchor_index = [[3,4,5],[0,1,2]][self.feature_length.index(in_w)]?