nightsnack / yolobile
This is the implementation of YOLObile: Real-Time Object Detection on Mobile Devices via Compression-Compilation Co-Design
License: GNU General Public License v3.0
The drive link given in model checkpoints is not working. Another link was given there, but Baidu is not supported in my country.
Hello, where is the csdarknet53s-panet-spp.cfg file? I couldn't find it.
Hi, guys
I want to use the Tencent ncnn library to run YOLObile on an Android phone. Can you provide a trained ONNX file, or tell me how to export one?
Thanks
Can this be implemented on the YOLOv5 model? I am looking to run the yolov5s model.
Thanks for your great work.
I am wondering about the meaning of the following entry in the YAML file, and how you decide which Conv2d layer to prune and the ratio on each line (e.g., 0.4):
module_list.1.Conv2d.weight:
0.4
Finally, what is the purpose of "yolov4dense.pt" (the dense model)? Does it serve as a pre-trained weight? What is the difference between it and a yolov4.pt file trained with another PyTorch YOLOv4 project?
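A minimal sketch of what such a YAML entry is commonly taken to mean in ADMM-style pruning code: the parameter name maps to a prune ratio, and that fraction of the layer's smallest-magnitude weights is zeroed. The dictionary below and the `magnitude_prune` helper are illustrative, not the repository's actual implementation.

```python
import numpy as np

# Hypothetical mirror of a YAML entry: parameter name -> prune ratio.
# 0.4 would mean 40% of that layer's weights are zeroed.
prune_ratios = {"module_list.1.Conv2d.weight": 0.4}

def magnitude_prune(weight, ratio):
    """Zero out the `ratio` fraction of entries with the smallest |value|."""
    flat = np.abs(weight).flatten()
    k = int(ratio * flat.size)
    if k == 0:
        return weight.copy()
    # k-th smallest magnitude becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weight) > threshold
    return weight * mask

w = np.arange(1, 11, dtype=float)   # toy "layer" with 10 weights
pruned = magnitude_prune(w, 0.4)    # the 4 smallest magnitudes become 0
```

Under this reading, per-layer ratios are typically hand-tuned: sensitive layers (e.g., early and detection-head convolutions) get smaller ratios, large middle layers get larger ones.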
Hello,
While debugging your YOLObile code I switched to the VOC dataset, but when one layer has shape (75, 256) I get "the layer size is not divisible". Could you advise how to handle layers whose size is not divisible by block_size?
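One common workaround (an assumption on my part, not necessarily what this repo does) is to zero-pad the 2-D weight so its row count becomes divisible by block_size, prune block-wise, then crop back. The `block_size=8` value and `pad_to_block` helper below are hypothetical; the (75, 256) shape comes from the question.

```python
import numpy as np

def pad_to_block(weight2d, block_size):
    """Zero-pad rows so the row count is divisible by block_size."""
    rows, cols = weight2d.shape
    pad = (-rows) % block_size        # e.g., 75 rows -> pad 5 for block_size 8
    return np.pad(weight2d, ((0, pad), (0, 0)))

w = np.zeros((75, 256))               # the VOC layer from the question
padded = pad_to_block(w, 8)           # shape becomes (80, 256)
```

After pruning, the padded rows are discarded, so the padding only affects how blocks are formed, not the stored weights.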
Hello!
After reading your papers on YOLO and YOLObile, I have a question. The pruning method in the paper sets part of the weight values to 0 to cut off connections, so the size of the overall weight matrix is unchanged. Shouldn't the storage size become smaller after saving the model? I saved the model directly with PyTorch's save function and found that the model size did not change; this has bothered me for a long time. The paper mentions that a sparse matrix format is used for storage, but I cannot find that part in the code. I hope you can help me, thank you!
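The question above can be illustrated with a small sketch: a dense array stores every zero, so zeroing weights does not shrink the serialized file; converting to a coordinate (COO-style) list of nonzeros does. This is a generic demonstration with pickle and numpy, not the paper's actual storage format.

```python
import io
import pickle
import numpy as np

# Dense 1000x1000 float32 matrix with only 400 nonzeros.
dense = np.zeros((1000, 1000), dtype=np.float32)
dense[::50, ::50] = 1.0

def serialized_size(obj):
    """Bytes needed to pickle an object."""
    buf = io.BytesIO()
    pickle.dump(buf.getvalue() if False else obj, buf)
    return buf.tell()

# COO representation: coordinates of the nonzeros plus their values.
rows, cols = np.nonzero(dense)
coo = (rows.astype(np.int32), cols.astype(np.int32), dense[rows, cols])

dense_bytes = serialized_size(dense)   # ~4 MB: every zero is stored
sparse_bytes = serialized_size(coo)    # only the 400 nonzeros plus indices
```

`torch.save` on a pruned state_dict behaves like the dense case; realizing the size reduction requires an explicit sparse encoding (e.g., CSR) at export time.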
When I run your .cfg and weights on an NVIDIA Jetson TX2, the speed is only 5 fps, but when I run the YOLOv3-tiny .cfg and weights, the speed reaches 30 fps. May I ask the reason for this? Thanks.
csdarknet53s-panet-spp.cfg is not consistent with the YOLOv4 config. The mAP computed with yolov4dense.pt is 45.7 (python test.py --img-size 320 --batch-size 64 --device 0 --cfg cfg/csdarknet53s-panet-spp.cfg --weights weights/yolov4dense.pt --data data/coco2017.data), while YOLOv4's mAP is only 38.2. So this model is not YOLOv4; may I ask what model it is?
hair drier 5e+03 11 0 0 0.0385 0
toothbrush 5e+03 57 0.335 0.386 0.337 0.359
Speed: 43.3/3.9/47.2 ms inference/NMS/total per 320x320 image at batch-size 64
COCO mAP with pycocotools...
WARNING: pycocotools must be installed with numpy==1.17 to run correctly. See cocodataset/cocoapi#356
I have numpy 1.19 installed; should I downgrade numpy to 1.17?
Hi, thank you for sharing this efficient pruning method.
I implemented 1 of the 2 pruning steps (training & masked retraining). When I train with train.py, batch size 4, and 25 epochs, it runs steps 0-24 again and again. When will it stop by itself, and what does this iteration mean?
Thanks for your great work. I have a question: as you mentioned in another issue, the sparsity ratio of each layer is set manually, first with the same ratio for every layer, and then adjusted layer by layer. But how do you keep the total sparsity ratio the same during the adjustment?
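One way to read "keep the total sparsity the same": the overall ratio is the parameter-count-weighted average of the per-layer ratios, so raising one layer's ratio must be compensated elsewhere. The layer sizes and ratios below are made up for illustration.

```python
# Hypothetical per-layer parameter counts and prune ratios.
sizes  = [1000, 4000, 5000]
ratios = [0.5, 0.5, 0.5]

def overall_sparsity(sizes, ratios):
    """Fraction of all parameters pruned across layers."""
    return sum(s * r for s, r in zip(sizes, ratios)) / sum(sizes)

total = overall_sparsity(sizes, ratios)            # 0.5

# Raise layer 0 by 0.2 (200 more pruned weights); compensate on layer 2
# by lowering its ratio so it prunes 200 fewer weights.
ratios2 = [0.7, 0.5, 0.5 - 0.2 * sizes[0] / sizes[2]]
total2 = overall_sparsity(sizes, ratios2)          # still 0.5
```

Under this accounting, any per-layer adjustment that moves the same number of pruned parameters between layers leaves the global compression rate unchanged.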
Could you tell me where this part of the code optimization is?
The four model files generated in the first step are 256 MB each, and the model file generated in the second step is about 500 MB. Why are they so large? Isn't the model supposed to be deployed on a phone? How can such large model files be deployed? I would be grateful if the author could reply, many thanks.
Hi, I have a question about the prune ratios. How did you determine the prune ratios for the different layers specified in configuration files such as "config_csdarknet53pan_v2.yaml"? There are so many layers that trial and error seems infeasible. Sorry for the dumb question, but I am relatively new to this kind of study.
I run:
detect.py --weights 'weights/best14x-49.pt' --img-size 512 --> running time 11 ms on an RTX 2080 Ti
detect.py --weights 'weights/best8x-514.pt' --img-size 512 --> running time 11 ms on an RTX 2080 Ti
detect.py --weights 'weights/yolov4dense.pt' --img-size 512 --> running time 11 ms on an RTX 2080 Ti
But when using check_compression.py, I see that the FLOPS of these weights are still good.
I just ran pip install -U -r requirements.txt, without docker build.
So can you explain this problem to me?
Hi, I saw you have several lines in admm.py at which you reshape the weight. For example, at line 212:
weight = weight.reshape(shape)
should it be weight = weight2d.reshape(shape) ?
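The suspected bug can be illustrated with a toy example: if pruning is applied to the 2-D view `weight2d` (a copy), then reshaping the untouched `weight` back discards the pruning, while reshaping `weight2d` preserves it. The shapes and threshold below are illustrative, not taken from admm.py.

```python
import numpy as np

shape = (2, 3, 1, 1)                          # toy conv-weight shape
weight = np.arange(6, dtype=float).reshape(shape)
weight2d = weight.reshape(2, 3).copy()        # 2-D copy used for pruning
weight2d[np.abs(weight2d) < 2] = 0.0          # prune the 2-D copy only

wrong = weight.reshape(shape)     # original values survive: pruning lost
right = weight2d.reshape(shape)   # pruned values carried back to 4-D
```

This only matters when the 2-D tensor is a copy rather than a view; if `weight2d` aliases `weight`'s storage, both reshapes would give the same result.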
I made my own dataset, which has only a single class, and changed csdarknet53s-panet-spp.cfg accordingly. When I tried pruning, the initial weights were not compatible. How can I get compatible initial weights? I also don't know how to train or prune without initial weights. Should I change the pruning config as well?
In general, when I train on my own single-class dataset, what should I pay attention to?
I am a newcomer, sorry for the dumb question.