
nightsnack / yolobile

363 stars · 98 forks · 7.67 MB

This is the implementation of YOLObile: Real-Time Object Detection on Mobile Devices via Compression-Compilation Co-Design

License: GNU General Public License v3.0

Languages: Python 97.12% · Shell 2.03% · Dockerfile 0.86%
Topics: deep-learning, object-detection, yolov4

yolobile's People

Contributors

changhsinlee, d-j-kendall, developer0hye, dsuess, falaktheoptimist, fatihbaltaci, franciscoreveriano, gabrielbianconi, glenn-jocher, googlewiki, guigarfr, idow09, ilyaovodov, jas-nat, jrmh96, jveitchmichaelis, lincoce, linzzzzzz, lukeai, nanocode012, nirzarrabi, ownmarc, perry0418, roulbac, skalskip, tjiagom, tshead2, ttayu, wang-xinyu, yang-jin-hai


yolobile's Issues

Google Drive link is not working

The Google Drive link given under model checkpoints is not working. A Baidu link is also given there, but Baidu is not supported in my country.

Question on config_csdarknet53pan_v*.yaml and yolov4dense.pt

Thanks for your great work.

I am wondering what the following entry in the YAML file means, and how you decided which Conv2d layers to prune and the number on each line (e.g. 0.4):

module_list.1.Conv2d.weight:
    0.4

Finally, what is the purpose of "yolov4dense.pt" (the dense model)? Does it serve as pre-trained weights? What is the difference between it and a yolov4.pt file trained with another PyTorch YOLOv4 project?
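
For context, here is a minimal sketch of how such a per-layer ratio map could be applied with simple magnitude pruning. This is only an illustration of the config format, not the repository's actual pruning code; the function name and threshold logic are my own assumptions.

    import torch

    # Illustrative per-layer prune ratios, mirroring the YAML entry above.
    prune_ratios = {"module_list.1.Conv2d.weight": 0.4}

    def magnitude_prune(model, prune_ratios):
        """Zero out the smallest-magnitude weights of each listed layer (sketch)."""
        state = model.state_dict()
        for name, ratio in prune_ratios.items():
            w = state[name]
            k = int(w.numel() * ratio)  # number of weights to zero
            if k == 0:
                continue
            threshold = w.abs().flatten().kthvalue(k).values
            w[w.abs() <= threshold] = 0.0  # prune in place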

Layer size not divisible by block_size

Hello,
While debugging your YOLObile code, I switched to the VOC dataset, but when one layer has shape (75, 256) the error "the layer size is not divisible" appears. Could you advise how to handle this case where the layer size is not divisible by block_size?
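
As a general workaround (not from this repository), one can zero-pad the weight matrix up to the next multiple of the block size before block-based pruning and slice the padding off afterwards. The sketch below assumes a 2-D weight and a block_size parameter.

    import torch
    import torch.nn.functional as F

    def pad_to_block(weight, block_size):
        """Zero-pad a 2-D weight so both dimensions are divisible by block_size."""
        rows, cols = weight.shape
        pad_r = (-rows) % block_size
        pad_c = (-cols) % block_size
        # F.pad pads the last dim first: (left, right, top, bottom)
        return F.pad(weight, (0, pad_c, 0, pad_r))

    w = torch.randn(75, 256)
    print(pad_to_block(w, 8).shape)  # torch.Size([80, 256])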

about model size

Hello!

After reading your YOLObile paper, I have a question. The pruning method in the paper sets part of the weight values to 0 to cut connections, so the overall size of the weight matrix is unchanged. Shouldn't the storage size become smaller after saving the model? I saved the model directly with PyTorch's save function and found that its size does not change. This has bothered me for a long time. The paper mentions that a sparse matrix format is used for storage, but I cannot find that part in the code. I hope you can help me, thank you!
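
For reference, a quick experiment (my own sketch, not code from this repository) shows why the dense checkpoint does not shrink: torch.save serializes every element, zeros included, while a sparse format stores only the non-zeros.

    import os
    import torch

    w = torch.randn(1024, 1024)
    w[torch.rand_like(w) < 0.9] = 0.0        # set ~90% of the weights to zero

    torch.save(w, "dense.pt")                # all ~1M floats are still stored
    torch.save(w.to_sparse(), "sparse.pt")   # only the ~100k non-zeros are stored

    print(os.path.getsize("dense.pt"), os.path.getsize("sparse.pt"))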

The problem of speed in TX2

When I run your .cfg and weights on an NVIDIA Jetson TX2, the speed is only 5 fps, but when I run the YOLOv3-tiny .cfg and .weights, the speed reaches 30 fps. May I ask the reason for this? Thanks.

What model is csdarknet53s-panet-spp.cfg?

csdarknet53s-panet-spp.cfg does not match the YOLOv4 config. The mAP computed with yolov4dense.pt is 45.7 (python test.py --img-size 320 --batch-size 64 --device 0 --cfg cfg/csdarknet53s-panet-spp.cfg --weights weights/yolov4dense.pt --data data/coco2017.data), while YOLOv4's mAP is only 38.2. So this model is not YOLOv4; what model is it, please?

problem when computing COCO mAP

      hair drier     5e+03        11         0         0    0.0385         0
      toothbrush     5e+03        57     0.335     0.386     0.337     0.359

Speed: 43.3/3.9/47.2 ms inference/NMS/total per 320x320 image at batch-size 64

COCO mAP with pycocotools...
WARNING: pycocotools must be installed with numpy==1.17 to run correctly. See cocodataset/cocoapi#356

I have numpy 1.19 installed; should I downgrade numpy to 1.17?

train.py

Hi, thank you for sharing this efficient pruning method.

I implemented 1 of the 2 pruning steps (training & masked retraining). When I train with train.py, batch size 4, and 25 epochs, it runs steps 0-24 over and over again. When will it stop by itself, and what does this iteration mean?

how to keep the total sparsity ratio at a certain value

Thanks for your great work. I have a question: as you mentioned in another issue, the sparsity ratio of each layer is set manually, first with the same ratio for all layers and then adjusted layer by layer. But how do you keep the total sparsity ratio the same during the adjustment?
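
For what it's worth, the total sparsity is just the parameter-count-weighted average of the per-layer ratios, so it can be recomputed after every adjustment. A minimal sketch with made-up layer names and sizes:

    # Hypothetical parameter counts and per-layer prune ratios.
    layer_sizes  = {"conv1": 864, "conv2": 18432, "conv3": 73728}
    layer_ratios = {"conv1": 0.0, "conv2": 0.4, "conv3": 0.6}

    total  = sum(layer_sizes.values())
    pruned = sum(layer_sizes[n] * layer_ratios[n] for n in layer_sizes)
    print(f"total sparsity: {pruned / total:.3f}")  # hold this fixed while tuning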

Model files are too large

The four model files produced by the first training step are 256 MB each, and the model file from the second step is about 500 MB. Why are they so large? Isn't the model supposed to be deployed on a mobile phone? How can model files this large be deployed? I would be very grateful if the author could reply.

prune ratios

Hi, I have a question about the prune ratios. How did you determine the prune ratios for the different layers specified in configuration files such as "config_csdarknet53pan_v2.yaml"? There are so many layers that determining the ratios by trial and error seems infeasible. Sorry for the dumb question; I am relatively new to this kind of study.

Problem with running get_coco2014.sh

When I executed the get_coco2014.sh script, the following message appeared, possibly indicating a problem with the file downloaded from Google Drive; it seems the file no longer exists there. Is there any way to resolve this?
[screenshot of the error message]

Speeds of the 8x, 14x, and yolov4dense weights running on a desktop GPU (RTX 2080 Ti) are the same

I run:

    detect.py --weights 'weights/best14x-49.pt'  --img-size 512   --> running time 11 ms on RTX 2080 Ti
    detect.py --weights 'weights/best8x-514.pt'  --img-size 512   --> running time 11 ms on RTX 2080 Ti
    detect.py --weights 'weights/yolov4dense.pt' --img-size 512   --> running time 11 ms on RTX 2080 Ti

But when I use check_compression.py, I see that the FLOPs reduction of these weights is still good.

I just ran pip install -U -r requirements.txt, without a docker build.

So can you explain this problem to me?
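
As a sanity check (a sketch of my own, assuming the checkpoint holds a plain state dict, possibly under the key "model"), one can count the zeros in each weight tensor directly, independent of check_compression.py:

    import torch

    ckpt = torch.load("weights/best14x-49.pt", map_location="cpu")
    state = ckpt["model"] if isinstance(ckpt, dict) and "model" in ckpt else ckpt

    for name, w in state.items():
        if torch.is_tensor(w) and w.dim() > 1:  # conv/linear weights only
            print(f"{name}: {(w == 0).float().mean().item():.2%} zeros")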

problem when I tried to train my single-class dataset

I made my own dataset, which has only a single class, and changed csdarknet53s-panet-spp.cfg accordingly. When I tried pruning, the initial weights were not compatible. How can I get compatible initial weights? I also don't know how to train or prune without initial weights. Should I change the pruning config as well?
In general, what should I pay attention to when training on my own single-class dataset?
I am a newcomer, sorry for the dumb question.
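
One general Darknet-cfg detail worth checking when the class count changes (a standard YOLOv3/v4 convention, not advice specific to this repository): each [convolutional] layer directly before a [yolo] layer must have filters = (classes + 5) * anchors-per-scale.

    # Filters for the conv layer feeding each [yolo] head in a Darknet cfg.
    classes = 1            # single-class dataset
    anchors_per_scale = 3  # default for YOLOv3/v4-style heads
    print((classes + 5) * anchors_per_scale)  # 18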
