rangilyu / nanodet

NanoDet-Plus ⚡ Super fast and lightweight anchor-free object detection model. 🔥 Only 980 KB (int8) / 1.8 MB (fp16), runs at 97 FPS on a mobile phone 🔥

License: Apache License 2.0

Java 4.59% CMake 0.46% C++ 18.48% Python 76.46%
deep-neural-networks deep-learning object-detection anchor-free ncnn shufflenet pytorch mnn repvgg openvino

nanodet's People

Contributors

acherstyx, blainwu, caishanli, cansik, jedi007, nihui, nosaydomore, raember, rangilyu, ruicx, shawn-tao, st235, stiansel, strawberrypie, tiiiktak, tpoisonooo, tuduweb, wwdok, zchrissirhcz, zero0kiriyu, zheqiushui, zhiqwang, zshn25


nanodet's Issues

Add support for Windows

  • Windows support for Gloo
  • Windows support for DistributedDataParallel
    Please check and run some tests; a usage sketch follows below.
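
For reference, a minimal sketch of what this support enables, assuming a PyTorch build where the Gloo backend works on Windows (the init file path is a placeholder):

    import torch.distributed as dist

    def init_windows_ddp(rank: int, world_size: int):
        # Gloo is the distributed backend available on Windows (NCCL is
        # Linux-only); a shared-file init method sidesteps env:// setup.
        dist.init_process_group(
            backend="gloo",
            init_method="file:///C:/tmp/ddp_init",  # placeholder path
            rank=rank,
            world_size=world_size,
        )

    # Once the process group exists, the model can be wrapped with
    # torch.nn.parallel.DistributedDataParallel as usual.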

ResNet50+FPN

Hello, could you provide a config for ResNet50+FPN?

TypeError during training: object of type 'NoneType' has no len()

[root][11-25 09:57:37]INFO:Using Tensorboard, logs will be saved in iGuard-v1-images-0-484-coco/logs
[root][11-25 09:57:37]INFO:Creating model...
model size is 1.0x
init weights...
=> loading pretrained model https://download.pytorch.org/models/shufflenetv2_x1-5666bf0f80.pth
Finish initialize Lite GFL Head.
[root][11-25 09:57:37]INFO:Setting up data...
Traceback (most recent call last):
  File "tools/train.py", line 92, in <module>
    main(args)
  File "tools/train.py", line 72, in main
    pin_memory=True, collate_fn=collate_function, drop_last=True)
  File "/home/lw/anaconda3/envs/nanodet/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 224, in __init__
    sampler = RandomSampler(dataset, generator=generator)
  File "/home/lw/anaconda3/envs/nanodet/lib/python3.7/site-packages/torch/utils/data/sampler.py", line 94, in __init__
    if not isinstance(self.num_samples, int) or self.num_samples <= 0:
  File "/home/lw/anaconda3/envs/nanodet/lib/python3.7/site-packages/torch/utils/data/sampler.py", line 102, in num_samples
    return len(self.data_source)
TypeError: object of type 'NoneType' has no len()

Could someone please take a look?
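
The dataset object reaching the DataLoader here is None, which usually means dataset construction failed silently (for example an unrecognized dataset name, or a wrong img_path/ann_path in the config's data section). A hypothetical sanity check before training; the helper names are assumptions taken from tools/train.py and may differ between repo versions:

    # Helper names below are assumptions, not guaranteed API.
    from nanodet.util import cfg, load_config
    from nanodet.data.dataset import build_dataset

    load_config(cfg, "config/nanodet-m.yml")  # use your own config path
    dataset = build_dataset(cfg.data.train, "train")
    assert dataset is not None, "check data.train img_path/ann_path"
    print("training samples:", len(dataset))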

Huge difference in CPU inference speed between operating systems

I tested on my own dataset with two identically configured computers, each dual-booting Ubuntu and Windows 10.
With the same weight file, running on CPU, inference takes 0.2 s per image on Ubuntu but 0.7 s per image on Windows. How can this be resolved? We need the Windows CPU speed to match the Ubuntu speed.
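
One knob worth ruling out when CPU timings differ this much across operating systems is the intra-op thread count, since defaults can differ between OS builds; a minimal check (the thread count below is a placeholder):

    import torch

    # PyTorch CPU inference speed depends heavily on the number of
    # intra-op threads; print the default and pin it explicitly.
    print("threads:", torch.get_num_threads())
    torch.set_num_threads(4)  # placeholder; match your physical cores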

Loss behavior when training on my own dataset

My own dataset has 4 classes and about 10,000 images.
第一轮:warmup|Iter(1/300)| lr:1.44e-02| loss_qfl:0.4034| loss_bbox:1.4174| loss_dfl:0.5207|
停止训练时:train|Epoch100/300|Iter66238(600/663)| lr:1.40e-07| loss_qfl:0.3369| loss_bbox:0.6454| loss_dfl:0.2716|

Is it reasonable that the loss is already this low right at the start of training?
The AP on the validation set is also very low.
Thank you for your contribution.

Android build error

Build command failed
Error while executing process /home/c/Android/Sdk/cmake/3.10.2.4988404/bin/ninja with arguments {-C /home/c/projects/nandet/nandet-main/demo_android_ncnn/app/.cxx/cmake/debug/armeabi-v7a yolov5}
ninja: Entering directory '/home/c/projects/nandet/nandet-main/demo_android_ncnn/app/.cxx/cmake/debug/armeabi-v7a'
ninja: error: '/home/c/projects/nandet/nandet-main/demo_android_ncnn/app/src/main/cpp/ncnnvulkan/armeabi-v7a/libncnn.a', needed by '/home/c/projects/nandet/nandet-main/demo_android_ncnn/app/build/intermediates/cmake/debug/obj/armeabi-v7a/libyolov5.so', missing and no known rule to make it

How can I convert model parameters to .param and .bin files?

Thanks for sharing. I have tested the model in Android Studio and it works well. But I have a question: I trained my own model and saved it as a .pth file. How can I convert that model file into .param and .bin files?

It may be an easy question, but I am a PyTorch beginner. Looking forward to your reply.
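
For what it's worth, the usual route is .pth -> ONNX -> ncnn. A minimal sketch, where `model` is assumed to be the network built from your config with the trained weights already loaded, and 320x320 is the assumed input size:

    import torch

    model.eval()  # `model` is an assumption: your loaded network
    dummy_input = torch.randn(1, 3, 320, 320)  # assumed input size
    torch.onnx.export(model, dummy_input, "nanodet.onnx",
                      input_names=["input"], output_names=["output"],
                      opset_version=11)

    # Then, in a shell, simplify and convert with ncnn's tools:
    #   python -m onnxsim nanodet.onnx nanodet-sim.onnx
    #   onnx2ncnn nanodet-sim.onnx nanodet.param nanodet.bin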

Several problems when trying to run the demo

Hello,
Since I cannot get the demo to run, I am asking here.
I ran the command ([...] is just the path to that folder, with my personal name left out):
python demo/demo.py image --config [...]/nanodet/config/nanodet-m.yml --model [...]/nanodet/nanodet_m/archive/data.pkl --path [...]/data/dog.jpg

Then I get the following errors:

model size is 1.0x
init weights...
=> loading pretrained model https://download.pytorch.org/models/shufflenetv2_x1-5666bf0f80.pth
Finish initialize Lite GFL Head.
Traceback (most recent call last):
  File "demo/demo.py", line 106, in <module>
    main()
  File "demo/demo.py", line 80, in main
    predictor = Predictor(cfg, args.model, logger, device='cuda:0')
  File "demo/demo.py", line 31, in __init__
    ckpt = torch.load(model_path, map_location=lambda storage, loc: storage)
  File "[...]/anaconda3/envs/nanodet/lib/python3.8/site-packages/torch/serialization.py", line 595, in load
    return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
  File "[...]/anaconda3/envs/nanodet/lib/python3.8/site-packages/torch/serialization.py", line 764, in _legacy_load
    magic_number = pickle_module.load(f, **pickle_load_args)
_pickle.UnpicklingError: A load persistent id instruction was encountered,
but no persistent_load function was specified.

I followed all of the installation steps strictly, ran python setup.py develop and so on exactly as told, and checked that the model file is not empty, yet I still get this error. Since I am somewhat new to this, I am at a point where I can't resolve the issue myself. Can somebody help me get at least the demo running?
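
One likely cause, offered as a guess: --model points at archive/data.pkl, the raw pickle extracted from inside the zip-format checkpoint, while torch.load expects the whole .pth archive. A minimal check (the path is a placeholder):

    import torch

    # Load the checkpoint file itself (the .pth), not archive/data.pkl
    # pulled out of it.
    ckpt = torch.load("nanodet_m.pth", map_location="cpu")
    print(type(ckpt))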

Is there a whole model architecture image?

Thanks for your great work !!!
Do you have the whole Netron model architecture image that you posted on zhihu?

This work really cuts the weight size down so much without losing mAP.

Awesome work!

Failed to load param

On Windows 10, building with VS2017 and CMake, the param file can never be loaded. I tried both absolute and relative paths, and neither works. Any guidance would be appreciated.

On the efficiency of NMS

I have just started reading the papers behind nanodet.
On an HxW feature map, FCOS actually predicts HxWxC class probabilities plus an HxWx1 center-ness map. The center-ness target is defined rather like CenterNet's heatmap ground truth, a Gaussian around the center of each ground-truth box, and CenterNet replaces NMS by a 3x3 max-pooling over its HxWxC heatmap.
So the question: could we multiply the HxWxC class predictions by the HxWx1 center-ness and run max-pooling on the result, replacing the time-consuming NMS step in post-processing?
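
For reference, a minimal sketch of the CenterNet-style trick the question describes, assuming `heatmap` is a sigmoid class-probability map of shape (N, C, H, W) already multiplied by the center-ness:

    import torch.nn.functional as F

    def maxpool_nms(heatmap, kernel=3):
        # Keep only local maxima: a 3x3 max-pool followed by an equality
        # test zeroes every cell that is not the peak of its neighborhood,
        # playing the role of NMS on the heatmap.
        pad = (kernel - 1) // 2
        hmax = F.max_pool2d(heatmap, kernel, stride=1, padding=pad)
        return heatmap * (hmax == heatmap).float()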

About BN

Hello, I'd like to ask: at inference time, how exactly is the merging of BN into the convolution layer implemented? Where is the relevant code?
Thanks.
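
For reference, the standard fold merges the BN statistics into the preceding convolution's weights and bias; a minimal sketch (not the repo's own code), assuming consecutive nn.Conv2d and nn.BatchNorm2d modules:

    import torch
    import torch.nn as nn

    def fuse_conv_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
        # BN(x) = gamma * (x - mean) / sqrt(var + eps) + beta, so the scale
        # gamma / sqrt(var + eps) folds into the conv weights and the shift
        # into the bias; the inference output is unchanged.
        fused = nn.Conv2d(conv.in_channels, conv.out_channels,
                          conv.kernel_size, conv.stride, conv.padding,
                          conv.dilation, conv.groups, bias=True)
        scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)
        fused.weight.data = conv.weight.data * scale.reshape(-1, 1, 1, 1)
        conv_bias = conv.bias.data if conv.bias is not None \
            else torch.zeros_like(bn.running_mean)
        fused.bias.data = bn.bias.data + (conv_bias - bn.running_mean) * scale
        return fused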

COCO 2017, 70 epochs: mAP lower than expected, model size larger than the released one

Hello,

I tried training on the COCO 2017 dataset myself at 320x320 resolution. After 70 epochs, the numbers in workspace/nanodet_m/model_best/eval_results.txt seem to fall short of the mAP = 20.6 you report in the README table.

Besides that, my trained .pth file is 7.55 MB, quite different from the .pth you provide here (only 3.86 MB). Could you tell me whether something is wrong in my setup? Thanks.

Epoch:10
mAP: 0.10865674775150695
AP_50: 0.20425588097310773
AP_75: 0.10198967672602136
AP_small: 0.03051569507303643
AP_m: 0.1003268896843503
AP_l: 0.19166207801265334
Epoch:20
mAP: 0.11907455947537404
AP_50: 0.21912647960675255
AP_75: 0.1133359858799472
AP_small: 0.03340540840588444
AP_m: 0.10963350183214637
AP_l: 0.20611515962059668
Epoch:30
mAP: 0.12041569311457662
AP_50: 0.21872472727858525
AP_75: 0.11812772355005044
AP_small: 0.04121160130779743
AP_m: 0.11739734701210236
AP_l: 0.2094107062858298
Epoch:40
mAP: 0.1270034610457148
AP_50: 0.23137233492196027
AP_75: 0.12315771574507514
AP_small: 0.03728759301893687
AP_m: 0.11945066406385116
AP_l: 0.21847579864161104
Epoch:50
mAP: 0.1795337038572003
AP_50: 0.30874297066148143
AP_75: 0.17828656967793036
AP_small: 0.04914104329258958
AP_m: 0.16759143776161284
AP_l: 0.30522657762471966
Epoch:60
mAP: 0.1891646527258355
AP_50: 0.3217886198745358
AP_75: 0.1891903840608356
AP_small: 0.051945493440277546
AP_m: 0.17284699119758662
AP_l: 0.32539013473740197
Epoch:70
mAP: 0.18977107054907716
AP_50: 0.32244417711086515
AP_75: 0.19009949009338137
AP_small: 0.05201577054132174
AP_m: 0.1732704792902566
AP_l: 0.3251865578308541
Epoch:10
mAP: 0.12257141649340739
AP_50: 0.23619860654189756
AP_75: 0.1111660380944831
AP_small: 0.046249899529588356
AP_m: 0.13267609318317786
AP_l: 0.1919272466354749
Epoch:20
mAP: 0.14031381436801635
AP_50: 0.264269010783403
AP_75: 0.13001336222809526
AP_small: 0.05716782537295004
AP_m: 0.15169727664897573
AP_l: 0.22203464258839573
Epoch:30
mAP: 0.14217658101223368
AP_50: 0.26543726016697605
AP_75: 0.13404657580752513
AP_small: 0.05421967007099442
AP_m: 0.14589956171388785
AP_l: 0.22164060066991223
Epoch:40
mAP: 0.1515624729886679
AP_50: 0.2795920635444821
AP_75: 0.14477444138409137
AP_small: 0.05784293314338036
AP_m: 0.16267972648392726
AP_l: 0.234052709160686
Epoch:50
mAP: 0.18045894932158196
AP_50: 0.3248739247557717
AP_75: 0.17422379505767935
AP_small: 0.0711162443123105
AP_m: 0.18759840898213975
AP_l: 0.2749215457539428
Epoch:60
mAP: 0.18356200279853221
AP_50: 0.3270843667595295
AP_75: 0.1799592321268877
AP_small: 0.0711201510925709
AP_m: 0.19149197422993125
AP_l: 0.2810718905773653
Epoch:70
mAP: 0.18417714373096364
AP_50: 0.32818039011499023
AP_75: 0.18012643184294952
AP_small: 0.07166987560610152
AP_m: 0.1930917055587776
AP_l: 0.2815969931730256

nanodet-m.yml

#Config File example
save_dir: workspace/nanodet_m
model:
  arch:
    name: GFL
    backbone:
      name: ShuffleNetV2
      model_size: 1.0x
      out_stages: [2,3,4]
      activation: LeakyReLU
    fpn:
      name: PAN
      in_channels: [116, 232, 464]
      out_channels: 96
      start_level: 0
      num_outs: 3
    head:
      name: NanoDetHead
      num_classes: 80
      input_channel: 96
      feat_channels: 96
      stacked_convs: 2
      share_cls_reg: True
      octave_base_scale: 5
      scales_per_octave: 1
      strides: [8, 16, 32]
      reg_max: 7
      norm_cfg:
        type: BN
      loss:
        loss_qfl:
          name: QualityFocalLoss
          use_sigmoid: True
          beta: 2.0
          loss_weight: 1.0
        loss_dfl:
          name: DistributionFocalLoss
          loss_weight: 0.25
        loss_bbox:
          name: GIoULoss
          loss_weight: 2.0
data:
  train:
    name: coco
    img_path: ../coco/images/train2017
    ann_path: ../coco/annotations/instances_train2017.json
    input_size: [320,320] #[w,h]
    keep_ratio: True
    pipeline:
      perspective: 0.0
      scale: [0.6, 1.4]
      stretch: [[1, 1], [1, 1]]
      rotation: 0
      shear: 0
      translate: 0
      flip: 0.5
      brightness: 0.2
      contrast: [0.8, 1.2]
      saturation: [0.8, 1.2]
      normalize: [[103.53, 116.28, 123.675], [57.375, 57.12, 58.395]]
  val:
    name: coco
    img_path: ../coco/images/val2017
    ann_path: ../coco/annotations/instances_val2017.json
    input_size: [416,416] #[w,h]
    keep_ratio: True
    pipeline:
      normalize: [[103.53, 116.28, 123.675], [57.375, 57.12, 58.395]]
device:
  gpu_ids: [0,1,2,3]
  workers_per_gpu: 12
  batchsize_per_gpu: 160
schedule:
#  resume:
#  load_model: YOUR_MODEL_PATH
  optimizer:
    name: SGD
    lr: 0.14
    momentum: 0.9
    weight_decay: 0.0001
  warmup:
    name: linear
    steps: 300
    ratio: 0.1
  total_epochs: 70
  lr_schedule:
    name: MultiStepLR
    milestones: [40,55,60,65]
    gamma: 0.1
  val_intervals: 10
evaluator:
  name: CocoDetectionEvaluator
  save_key: mAP

log:
  interval: 10

class_names: ['person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus',
              'train', 'truck', 'boat', 'traffic_light', 'fire_hydrant',
              'stop_sign', 'parking_meter', 'bench', 'bird', 'cat', 'dog',
              'horse', 'sheep', 'cow', 'elephant', 'bear', 'zebra', 'giraffe',
              'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee',
              'skis', 'snowboard', 'sports_ball', 'kite', 'baseball_bat',
              'baseball_glove', 'skateboard', 'surfboard', 'tennis_racket',
              'bottle', 'wine_glass', 'cup', 'fork', 'knife', 'spoon', 'bowl',
              'banana', 'apple', 'sandwich', 'orange', 'broccoli', 'carrot',
              'hot_dog', 'pizza', 'donut', 'cake', 'chair', 'couch',
              'potted_plant', 'bed', 'dining_table', 'toilet', 'tv', 'laptop',
              'mouse', 'remote', 'keyboard', 'cell_phone', 'microwave',
              'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock',
              'vase', 'scissors', 'teddy_bear', 'hair_drier', 'toothbrush']

fps

I deployed your model (param/bin) on my Honor phone; the measured fps was 10, which differs from the numbers you report. Can you tell me why?

original pytorch or onnx model

Could you please also provide pretrained PyTorch or ONNX model weights? I noticed you only shared converted ncnn models, but I would like to measure inference speed on GPU/NPU-accelerated systems.

Error converting a .pth file to ONNX

As titled.
Environment:
win10
pytorch 1.7.0
onnx-simplifier 0.2.19
protobuf 3.14.0
Running python -m onnxsim ./nanodet_m.pth ./nanodet.onnx raises: google.protobuf.message.DecodeError: Field number 0 is illegal.
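
A note on the error, as a guess: onnx-simplifier reads ONNX protobuf files, and nanodet_m.pth is a PyTorch checkpoint, hence the DecodeError. The checkpoint has to be exported to ONNX first; a minimal sketch, with `model` assumed to be the network carrying the checkpoint weights:

    import torch

    model.eval()  # `model` is an assumption: your loaded network
    dummy = torch.randn(1, 3, 320, 320)  # assumed input size
    torch.onnx.export(model, dummy, "nanodet_m.onnx", opset_version=11)
    # Now the simplifier receives an actual ONNX file:
    #   python -m onnxsim ./nanodet_m.onnx ./nanodet.onnx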

When I changed the warmup steps from 300 to 30, an error occurred.

Traceback (most recent call last):
  File "tools/train.py", line 92, in <module>
    main(args)
  File "tools/train.py", line 87, in main
    trainer.run(train_dataloader, val_dataloader, evaluator)
  File "/zzz/prj/nanodet/nanodet/trainer/trainer.py", line 123, in run
    self.warm_up(train_loader)
  File "/zzz/prj/nanodet/nanodet/trainer/trainer.py", line 191, in warm_up
    output, loss, loss_stats = self.run_step(model, batch)
  File "/zzz/prj/nanodet/nanodet/trainer/trainer.py", line 57, in run_step
    loss.backward()
  File "/usr/local/lib/python3.6/dist-packages/torch/tensor.py", line 221, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/usr/local/lib/python3.6/dist-packages/torch/autograd/__init__.py", line 132, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

Android Demo does not support front camera

I modified public static CameraX.LensFacing CAMERA_ID = CameraX.LensFacing.BACK; to public static CameraX.LensFacing CAMERA_ID = CameraX.LensFacing.FRONT;, and the app crashed with the debug log below:
[crash log screenshot not captured]
How can I solve this? Thanks!

labels: txt-->xml-->json

This project converts labels (txt --> xml --> json) to produce the official label format for the NanoDet project:
https://github.com/eeric/yolo2voc2coco

It works successfully; training logs follow:
[root][12-01 02:42:11]INFO:val|Epoch2/70|Iter123(4940/4952)| lr:1.40e-01| loss_qfl:0.6564| loss_bbox:0.9067| loss_dfl:0.3167|
[root][12-01 02:42:11]INFO:val|Epoch2/70|Iter123(4942/4952)| lr:1.40e-01| loss_qfl:0.3882| loss_bbox:0.7180| loss_dfl:0.2757|
[root][12-01 02:42:12]INFO:val|Epoch2/70|Iter123(4944/4952)| lr:1.40e-01| loss_qfl:0.2997| loss_bbox:0.7028| loss_dfl:0.2815|
[root][12-01 02:42:12]INFO:val|Epoch2/70|Iter123(4946/4952)| lr:1.40e-01| loss_qfl:0.4795| loss_bbox:0.5486| loss_dfl:0.2645|
[root][12-01 02:42:13]INFO:val|Epoch2/70|Iter123(4948/4952)| lr:1.40e-01| loss_qfl:0.4833| loss_bbox:0.8868| loss_dfl:0.2684|
[root][12-01 02:42:14]INFO:val|Epoch2/70|Iter123(4950/4952)| lr:1.40e-01| loss_qfl:0.5885| loss_bbox:0.7382| loss_dfl:0.3275|
Loading and preparing results...
DONE (t=3.01s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type bbox
Loading and preparing results...
DONE (t=2.72s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type bbox
DONE (t=46.62s).
Accumulating evaluation results...
DONE (t=13.37s).

[root][12-02 23:32:20]INFO:distributed sampler set epoch at 25
DONE (t=8.54s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.120
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.218
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.116
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.012
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.080
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.198
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.157
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.253
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.269
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.036
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.233
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.419

Epoch:20
mAP: 0.12080412456618998
AP_50: 0.22069881458115911
AP_75: 0.11838745578996017
AP_small: 0.01093114820200795
AP_m: 0.0804470469625459
AP_l: 0.20172520300288083

Consider fixing the `voc2coco` link

In the readme.md file, voc2coco is mentioned but its link is broken.

There are several repos called voc2coco on GitHub.

We may want to fix this link.

Upsampling error when running demo.py on a video

python demo/demo.py video --config config/nanodet-m.yml --model model/nanodet_m.pth --path test.mp4

torch == 1.6.0
python == 3.8.5
cuda == 10.2

Hello author, when using demo.py I get the following error messages:

/home/wwcong/.conda/envs/nanodet/lib/python3.8/site-packages/torch/nn/functional.py:3118: UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
  warnings.warn("Default upsampling behavior when mode={} is changed "
qt.qpa.xcb: could not connect to display 
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "/home/wwcong/.conda/envs/nanodet/lib/python3.8/site-packages/cv2/qt/plugins" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.

Available platform plugins are: xcb.

[1]    12580 abort (core dumped)  python demo/demo.py video --config config/nanodet-m.yml --model  --path 
(nanodet)

I searched online for the same error, and every hit says the nn.Upsample call needs align_corners=True, but I cannot find any nn.Upsample call anywhere in the codebase. Could the author give a hint?
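
For what it's worth, the fatal part of this log is not the Upsample warning but the Qt "xcb" platform plugin failing because the session has no display attached; on a headless machine, any cv2.imshow call aborts like this. A minimal workaround sketch, writing the visualized frame to disk instead of displaying it (`result_frame` is a placeholder name):

    import cv2

    # In a headless session cv2.imshow() has no display to talk to;
    # saving the frame avoids the Qt platform plugin entirely.
    cv2.imwrite("result.jpg", result_frame)  # result_frame: placeholder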

training details

May I ask about your training details?
I am planning to train on COCO with mosaic augmentation.

How to use onnx model

Thank you for your nice project; it helps my own project a lot. Could you share code for running the ONNX model?
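
A minimal sketch of driving an exported ONNX model with onnxruntime, assuming the file is named nanodet.onnx and takes a 1x3x320x320 float input (check the real names and shapes with sess.get_inputs()):

    import numpy as np
    import onnxruntime as ort

    sess = ort.InferenceSession("nanodet.onnx")
    input_name = sess.get_inputs()[0].name
    # Placeholder input; substitute a preprocessed, normalized image.
    img = np.random.rand(1, 3, 320, 320).astype(np.float32)
    outputs = sess.run(None, {input_name: img})
    print([o.shape for o in outputs])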

yolo format!

Having to convert our own datasets to COCO format for training is not friendly. Could you add a VOC or YOLO format dataloader? Thanks. (A conversion workaround is sketched below.)
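
Until such a dataloader exists, one workaround is converting YOLO txt labels into a COCO json offline; a minimal sketch, assuming one txt per image with "class cx cy w h" rows normalized to [0, 1] (all paths and the category list are placeholders):

    import json
    import os
    from PIL import Image

    def yolo_to_coco(img_dir, label_dir, class_names, out_json):
        images, annotations = [], []
        ann_id = 1
        for img_id, fname in enumerate(sorted(os.listdir(img_dir)), 1):
            if not fname.lower().endswith((".jpg", ".png")):
                continue
            w, h = Image.open(os.path.join(img_dir, fname)).size
            images.append({"id": img_id, "file_name": fname,
                           "width": w, "height": h})
            txt = os.path.join(label_dir, os.path.splitext(fname)[0] + ".txt")
            if not os.path.exists(txt):
                continue
            for line in open(txt):
                cls, cx, cy, bw, bh = line.split()
                bw, bh = float(bw) * w, float(bh) * h
                x = float(cx) * w - bw / 2  # COCO boxes are [x, y, w, h]
                y = float(cy) * h - bh / 2
                annotations.append({"id": ann_id, "image_id": img_id,
                                    "category_id": int(cls) + 1,
                                    "bbox": [x, y, bw, bh],
                                    "area": bw * bh, "iscrowd": 0})
                ann_id += 1
        categories = [{"id": i + 1, "name": n}
                      for i, n in enumerate(class_names)]
        with open(out_json, "w") as f:
            json.dump({"images": images, "annotations": annotations,
                       "categories": categories}, f)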

Failed to load the ncnn model

Error:
network graph not ready
find_blob_index_by_name input.1 failed
find_blob_index_by_name 795 failed
find_blob_index_by_name 792 failed

error when I run demo_ncnn

OS:win10 x64
VS: 2015 x32

I followed the guideline to build the C++ project. When I ran cmake .. to generate the nanodet_demo.vcxproj project, CMD showed:

E:\03personal\DeepLearning\nanodet\demo_ncnn\build-vs2015>cmake ..
-- Building for: Visual Studio 14 2015
-- Selecting Windows SDK version  to target Windows 10.0.18363.
-- The C compiler identification is MSVC 19.0.23026.0
-- The CXX compiler identification is MSVC 19.0.23026.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: D:/Program Files (x86)/Microsoft Visual Studio 14.0/VC/bin/cl.exe - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: D:/Program Files (x86)/Microsoft Visual Studio 14.0/VC/bin/cl.exe - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found OpenMP_C: -openmp (found version "2.0")
-- Found OpenMP_CXX: -openmp (found version "2.0")
-- Found OpenMP: TRUE (found version "2.0")
OPENMP FOUND
-- Found OpenCV: D:/deploy_tools/opencv/build (found version "4.5.0")
-- Found Vulkan: D:/deploy_tools/Vulkan/1.2.154.1/Lib32/vulkan-1.lib
-- Vulkan FOUND = TRUE
-- Vulkan Include = D:/deploy_tools/Vulkan/1.2.154.1/Include
-- Vulkan Lib = D:/deploy_tools/Vulkan/1.2.154.1/Lib32/vulkan-1.lib
-- Configuring done
-- Generating done
-- Build files have been written to: E:/03personal/DeepLearning/nanodet/demo_ncnn/build-vs2015

From this information, I noticed that I had mixed the x64/x86 architectures on my computer. I then ran msbuild nanodet_demo.vcxproj /p:configuration=release /p:platform=x64, and CMD showed:

E:\03personal\DeepLearning\nanodet\demo_ncnn\build-vs2015>msbuild nanodet_demo.vcxproj /p:configuration=release /p:platform=x64
Microsoft (R) Build Engine version 14.0.23107.0
Copyright (C) Microsoft Corporation. All rights reserved.

Build started 2020/12/1 4:59:59 PM.
Project "E:\03personal\DeepLearning\nanodet\demo_ncnn\build-vs2015\nanodet_demo.vcxproj" on node 1 (default targets).
Project "E:\03personal\DeepLearning\nanodet\demo_ncnn\build-vs2015\nanodet_demo.vcxproj" (1) is building "E:\03personal\DeepLearning\nanodet\demo_ncnn\build-vs2015\ZERO_CHECK.vcxproj" (2) on node 1 (default targets).
C:\Program Files (x86)\MSBuild\Microsoft.Cpp\v4.0\V140\Microsoft.Cpp.Platform.targets(55,5): error MSB8020: The build tools for Visual Studio 2010 (Platform Toolset = 'v100') cannot be found. To build using the v100 build tools, please install Visual Studio 2010 build tools. Alternatively, you may upgrade to the current Visual Studio tools by selecting the Project menu or right-click the solution, and then selecting "Retarget solution". [E:\03personal\DeepLearning\nanodet\demo_ncnn\build-vs2015\ZERO_CHECK.vcxproj]
Done Building Project "E:\03personal\DeepLearning\nanodet\demo_ncnn\build-vs2015\ZERO_CHECK.vcxproj" (default targets) -- FAILED.

Done Building Project "E:\03personal\DeepLearning\nanodet\demo_ncnn\build-vs2015\nanodet_demo.vcxproj" (default targets) -- FAILED.


Build FAILED.

"E:\03personal\DeepLearning\nanodet\demo_ncnn\build-vs2015\nanodet_demo.vcxproj" (default target) (1) ->
"E:\03personal\DeepLearning\nanodet\demo_ncnn\build-vs2015\ZERO_CHECK.vcxproj" (default target) (2) ->
(PlatformPrepareForBuild target) ->
  C:\Program Files (x86)\MSBuild\Microsoft.Cpp\v4.0\V140\Microsoft.Cpp.Platform.targets(55,5): error MSB8020: The build tools for Visual Studio 2010 (Platform Toolset = 'v100') cannot be found. To build using the v100 build tools, please install Visual Studio 2010 build tools. Alternatively, you may upgrade to the current Visual Studio tools by selecting the Project menu or right-click the solution, and then selecting "Retarget solution". [E:\03personal\DeepLearning\nanodet\demo_ncnn\build-vs2015\ZERO_CHECK.vcxproj]

I could not pinpoint the problem, but given the information above I tried compiling for the x86 arch: msbuild nanodet_demo.vcxproj /p:configuration=release /p:platform=x86

.......
ncnn.lib(benchmark.cpp.obj) : fatal error LNK1112: module machine type 'x64' conflicts with target machine type 'X86' [E:\03personal\DeepLearning\nanodet\demo_ncnn\build-vs2015\nanodet_demo.vcxproj]
Done Building Project "E:\03personal\DeepLearning\nanodet\demo_ncnn\build-vs2015\nanodet_demo.vcxproj" (default targets) -- FAILED.
Build FAILED.
"E:\03personal\DeepLearning\nanodet\demo_ncnn\build-vs2015\nanodet_demo.vcxproj" (default target) (1) ->
(Link target) ->
  ncnn.lib(benchmark.cpp.obj) : fatal error LNK1112: module machine type 'x64' conflicts with target machine type 'X86' [E:\03personal\DeepLearning\nanodet\demo_ncnn\build-vs2015\nanodet_demo.vcxproj]
    0 Warning(s)
    1 Error(s)

Time Elapsed 00:00:00.27

Has anyone else met this problem? I guess the mixed x64/x86 architecture may be the reason. Am I right?
I am trying to install an x64 VS2015; will that work?

num_classes for one class detection

Hi,

To train a detector for one class only, should I set num_classes to 1 or 2? (I am confused about the background class.)
In my case, it is a dick detector. Should I set class_names to ['dick', 'background'] or just ['dick']?

I can successfully start training my model, but some errors appear related to the questions above.

Thanks so much for your help!

Computing mAP

The mAP numbers currently reported are the AP50 and AP75 computed with the pycocotools API, but there is no way to get per-class AP. Could the author provide a script that computes per-class AP?
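
For reference, pycocotools already accumulates everything needed for per-class AP; a minimal sketch, assuming cocoGt and cocoDt are the loaded ground-truth and detection COCO objects:

    import numpy as np
    from pycocotools.cocoeval import COCOeval

    coco_eval = COCOeval(cocoGt, cocoDt, "bbox")  # cocoGt/cocoDt assumed
    coco_eval.evaluate()
    coco_eval.accumulate()
    # precision shape: [iou_thr, recall, class, area_range, max_dets]
    precision = coco_eval.eval["precision"]
    for idx, cat_id in enumerate(coco_eval.params.catIds):
        p = precision[:, :, idx, 0, -1]  # all-area, maxDets=100 slice
        ap = np.mean(p[p > -1]) if (p > -1).any() else float("nan")
        print(cocoGt.loadCats(cat_id)[0]["name"], round(float(ap), 4))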

Very poor results on my own dataset

Hello, I swapped the backbone for VoVNet-19 while keeping the PAN and head unchanged (classes set to those provided by BDD100K). Training directly, without loading ImageNet-pretrained weights, gives very poor results: after 70 epochs the AP is only about 18, whereas yolov5s with the same backbone reaches 40. What might be the reason?

A small problem when running demo.py

My environment:
cuda==10.1
pytorch==1.7
torchvision==0.8.0
When I run "python demo/demo.py image --config CONFIG_PATH --model MODEL_PATH --path IMAGE_PATH" to infer on an image,
this error occurs:
RuntimeError: Could not run 'torchvision::nms' with arguments from the 'CUDA' backend. 'torchvision::nms' is only available for these backends: [CPU, BackendSelect, Named, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, Tracer, Autocast, Batched, VmapMode].

CPU: registered at /root/project/torchvision/csrc/vision.cpp:59 [kernel]
BackendSelect: fallthrough registered at /pytorch/aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]
Named: registered at /pytorch/aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]
AutogradOther: fallthrough registered at /pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:35 [backend fallback]
AutogradCPU: fallthrough registered at /pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:39 [backend fallback]
AutogradCUDA: fallthrough registered at /pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:43 [backend fallback]
AutogradXLA: fallthrough registered at /pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:47 [backend fallback]
Tracer: fallthrough registered at /pytorch/torch/csrc/jit/frontend/tracer.cpp:967 [backend fallback]
Autocast: fallthrough registered at /pytorch/aten/src/ATen/autocast_mode.cpp:254 [backend fallback]
Batched: registered at /pytorch/aten/src/ATen/BatchingRegistrations.cpp:511 [backend fallback]
VmapMode: fallthrough registered at /pytorch/aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]

But after I modified the batched_nms(boxes, scores, idxs, nms_cfg, class_agnostic=False) function in nanodet/nanodet/model/module/nms.py as follows:

# Workaround: in this environment torchvision::nms has no CUDA kernel
# registered (typically a PyTorch/torchvision build mismatch), so move
# the tensors to the CPU before calling nms().
boxes_for_nms = boxes_for_nms.cpu()
scores = scores.cpu()
boxes = boxes.cpu()
split_thr = nms_cfg_.pop('split_thr', 10000)
if len(boxes_for_nms) < split_thr:
    # dets, keep = nms_op(boxes_for_nms, scores, **nms_cfg_)
    keep = nms(boxes_for_nms, scores, **nms_cfg_)
    boxes = boxes[keep]
    # scores = dets[:, -1]
    scores = scores[keep]

demo.py runs normally.
