open-mmlab / mmrotate

OpenMMLab Rotated Object Detection Toolbox and Benchmark

Home Page: https://mmrotate.readthedocs.io/en/latest/

License: Apache License 2.0

Languages: Python 99.50%, Shell 0.24%, Dockerfile 0.26%
Topics: rotated-object, pytorch, openmmlab, detection

mmrotate's Introduction

English | 简体中文

Introduction

MMRotate is an open-source toolbox for rotated object detection based on PyTorch. It is a part of the OpenMMLab project.

The master branch works with PyTorch 1.6+.

Major Features
  • Support multiple angle representations

    MMRotate provides three mainstream angle representations ('oc', 'le90', and 'le135') to meet different paper settings; see the config sketch after this feature list.

  • Modular Design

    We decompose the rotated object detection framework into different components, which makes it much easier and more flexible to build a new model by combining different modules.

  • Strong baselines and state of the art

    The toolbox provides strong baselines and state-of-the-art methods in rotated object detection.
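
The angle representation is selected in each config through the angle_version field, which is passed to the pipeline transforms and the dataset. Below is a minimal, abridged sketch based on the full S2ANet config quoted in the issues later on this page; it is illustrative only, not a complete training config.

angle_version = 'le90'  # one of 'oc', 'le90', 'le135'
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations', with_bbox=True),
    dict(type='RResize', img_scale=(1024, 1024)),
    dict(
        type='RRandomFlip',
        flip_ratio=[0.25, 0.25, 0.25],
        direction=['horizontal', 'vertical', 'diagonal'],
        version=angle_version),
]
data = dict(
    train=dict(
        type='DOTADataset',
        version=angle_version,
        pipeline=train_pipeline))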

What's New

Highlight

We are excited to announce our latest work on real-time object recognition tasks, RTMDet, a family of fully convolutional single-stage detectors. RTMDet not only achieves the best parameter-accuracy trade-off on object detection from tiny to extra-large model sizes but also obtains new state-of-the-art performance on instance segmentation and rotated object detection tasks. Details can be found in the technical report. Pre-trained models are here.


Task                       Dataset   AP                                          FPS (TRT FP16, BS=1, RTX 3090)
Object Detection           COCO      52.8                                        322
Instance Segmentation      COCO      44.6                                        188
Rotated Object Detection   DOTA      78.9 (single-scale) / 81.3 (multi-scale)    121

v0.3.4 was released on 01/02/2023:

  • Fix compatibility with numpy, scikit-learn, and e2cnn.
  • Support empty patches in the Rotate transform.
  • Use iof for RRandomCrop validation.

Please refer to changelog.md for details and release history.

Installation

MMRotate depends on PyTorch, MMCV and MMDetection. Below are quick steps for installation. Please refer to the Install Guide for more detailed instructions.

conda create -n open-mmlab python=3.7 pytorch==1.7.0 cudatoolkit=10.1 torchvision -c pytorch -y
conda activate open-mmlab
pip install openmim
mim install mmcv-full
mim install mmdet
git clone https://github.com/open-mmlab/mmrotate.git
cd mmrotate
pip install -r requirements/build.txt
pip install -v -e .
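
As a quick sanity check after installation, you can confirm that the packages import and that the compiled ops from mmcv-full are available. The following is a minimal sketch and assumes an MMRotate 0.x environment installed as above:

# Installation sanity check (assumes mmcv-full with compiled extensions).
import mmcv
import mmdet
import mmrotate

print('mmcv:', mmcv.__version__)
print('mmdet:', mmdet.__version__)
print('mmrotate:', mmrotate.__version__)

# This import fails with "No module named 'mmcv._ext'" when only the lite
# mmcv package (without compiled extensions) is installed instead of mmcv-full.
from mmcv.ops import box_iou_rotated  # noqa: F401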

Get Started

Please see get_started.md for the basic usage of MMRotate. We provide a Colab tutorial as well as other tutorials.
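
For a first inference run, the high-level helpers from mmdet.apis can be used with MMRotate configs, which is what demo/image_demo.py does. A minimal sketch, assuming the Oriented R-CNN config from this repository and a checkpoint downloaded to the hypothetical local path checkpoints/oriented_rcnn_r50_fpn_1x_dota_le90.pth:

from mmdet.apis import inference_detector, init_detector

import mmrotate  # noqa: F401  # makes the rotated detectors, datasets and transforms available

config_file = 'configs/oriented_rcnn/oriented_rcnn_r50_fpn_1x_dota_le90.py'
checkpoint_file = 'checkpoints/oriented_rcnn_r50_fpn_1x_dota_le90.pth'  # hypothetical path

# Build the detector from the config and load the checkpoint weights.
model = init_detector(config_file, checkpoint_file, device='cuda:0')

# Run inference on a single image and save the visualization with rotated boxes.
result = inference_detector(model, 'demo/demo.jpg')
model.show_result('demo/demo.jpg', result, out_file='demo/vis.jpg')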

Model Zoo

Results and models are available in the README.md of each method's config directory. A summary can be found in the Model Zoo page.

Supported algorithms:

Data Preparation

Please refer to data_preparation.md to prepare the data.

FAQ

Please refer to FAQ for frequently asked questions.

Contributing

We appreciate all contributions to improve MMRotate. Please refer to CONTRIBUTING.md for the contributing guideline.

Acknowledgement

MMRotate is an open-source project contributed by researchers and engineers from various colleges and companies. We appreciate all the contributors who implement their methods or add new features, as well as users who give valuable feedback. We wish that the toolbox and benchmark could serve the growing research community by providing a flexible toolkit to reimplement existing methods and develop new methods.

Citation

If you use this toolbox or benchmark in your research, please cite this project.

@inproceedings{zhou2022mmrotate,
  title   = {MMRotate: A Rotated Object Detection Benchmark using PyTorch},
  author  = {Zhou, Yue and Yang, Xue and Zhang, Gefan and Wang, Jiabao and Liu, Yanyi and
             Hou, Liping and Jiang, Xue and Liu, Xingzhao and Yan, Junchi and Lyu, Chengqi and
             Zhang, Wenwei and Chen, Kai},
  booktitle={Proceedings of the 30th ACM International Conference on Multimedia},
  year={2022}
}

License

This project is released under the Apache 2.0 license.

Projects in OpenMMLab

  • MMCV: OpenMMLab foundational library for computer vision.
  • MIM: MIM installs OpenMMLab packages.
  • MMClassification: OpenMMLab image classification toolbox and benchmark.
  • MMDetection: OpenMMLab detection toolbox and benchmark.
  • MMDetection3D: OpenMMLab's next-generation platform for general 3D object detection.
  • MMRotate: OpenMMLab rotated object detection toolbox and benchmark.
  • MMSegmentation: OpenMMLab semantic segmentation toolbox and benchmark.
  • MMOCR: OpenMMLab text detection, recognition, and understanding toolbox.
  • MMPose: OpenMMLab pose estimation toolbox and benchmark.
  • MMHuman3D: OpenMMLab 3D human parametric model toolbox and benchmark.
  • MMSelfSup: OpenMMLab self-supervised learning toolbox and benchmark.
  • MMRazor: OpenMMLab model compression toolbox and benchmark.
  • MMFewShot: OpenMMLab fewshot learning toolbox and benchmark.
  • MMAction2: OpenMMLab's next-generation action understanding toolbox and benchmark.
  • MMTracking: OpenMMLab video perception toolbox and benchmark.
  • MMFlow: OpenMMLab optical flow toolbox and benchmark.
  • MMEditing: OpenMMLab image and video editing toolbox.
  • MMGeneration: OpenMMLab image and video generative models toolbox.
  • MMDeploy: OpenMMLab model deployment framework.

mmrotate's People

Contributors

abyssaledge, akmalulkhairin, chenmin00, dangchuong-dc, grimoire, heiyuxiaokai, jamiechoi1995, jbwang1997, jinyuannn, jistiak, kv-chiu, lalalagogogochong, liufeinuaa, liuyanyi, liwentomng, matrixgame2018, minkisong, nijkah, notplus, np-csu, rangeking, rangilyu, remi-or, sheffieldcao, sltlls, triple-mu, yangxue0827, zhanggefan, zwwwayne, zytx121


mmrotate's Issues

RuntimeError: CUDA error: an illegal memory access was encountered


 File "/workspace/mmrotate/mmrotate/models/roi_heads/roi_extractors/rotate_single_level_roi_extractor.py", line 133, in forward
    roi_feats[inds] = roi_feats_t
RuntimeError: CUDA error: an illegal memory access was encountered
terminate called after throwing an instance of 'c10::Error'
  what():  CUDA error: an illegal memory access was encountered
Exception raised from create_event_internal at /opt/conda/conda-bld/pytorch_1607370141920/work/c10/cuda/CUDACachingAllocator.cpp:687 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x42 (0x7f16c35478b2 in /root/anaconda3/envs/openmmlab/lib/python3.7/site-packages/torch/lib/libc10.so)
frame #1: c10::cuda::CUDACachingAllocator::raw_delete(void*) + 0xad2 (0x7f16c3799982 in /root/anaconda3/envs/openmmlab/lib/python3.7/site-packages/torch/lib/libc10_cuda.so)
frame #2: c10::TensorImpl::release_resources() + 0x4d (0x7f16c3532b7d in /root/anaconda3/envs/openmmlab/lib/python3.7/site-packages/torch/lib/libc10.so)
frame #3: <unknown function> + 0x5fea0a (0x7f1700884a0a in /root/anaconda3/envs/openmmlab/lib/python3.7/site-packages/torch/lib/libtorch_python.so)
frame #4: <unknown function> + 0x5feab6 (0x7f1700884ab6 in /root/anaconda3/envs/openmmlab/lib/python3.7/site-packages/torch/lib/libtorch_python.so)
<omitting python frames>
frame #23: __libc_start_main + 0xf0 (0x7f1724052830 in /lib/x86_64-linux-gnu/libc.so.6)

The above is the error message; the GPU is a 2080 Ti.
The command I ran is below:

python tools/train.py ./configs/oriented_rcnn/oriented_rcnn_r50_fpn_1x_dota_le90.py --gpu-ids 5  --work-dir /workspace/mmrotate/work_dirs/ms/oriented_rcnn/0221

Environment: CUDA 10.1, PyTorch 1.7.1

The installation of mmrotate will install an additional mmcv.

Describe the bug
The installation of mmrotate will install an additional mmcv although I have mmcv-full installed.

The output of the pip list command is as follows:

mmcv                1.4.5
mmcv-full           1.4.5
mmdet               2.21.0
mmrotate            0.1.0     

Reproduction

  1. What command or script did you run?
pip install -v -e .

Bug fix

Compared with other OpenMMLab projects, I think the difference is in requirements/runtime.txt; removing mmcv from runtime.txt may fix it.

ZeroDivisionError: division by zero when angle_version = 'le90'

model:
redet_re50_refpn_1x_dota_le90.py
dataset_type = 'DOTADataset'
angle_version = 'le90'

dataset:
There is no problem when using "r3det" with angle_version = 'oc' and dataset_type = 'DOTADataset'.

ERROR Traceback
RuntimeError: DataLoader worker (pid 16773) is killed by signal: Terminated.
Traceback (most recent call last):
File "/home/ai/yumo/ship_det/mmrotate-main/train.py", line 182, in
main()
File "/home/ai/yumo/ship_det/mmrotate-main/train.py", line 171, in main
train_detector(
File "/home/ai/yumo/ship_det/mmrotate-main/mmrotate/apis/train.py", line 156, in train_detector
runner.run(data_loaders, cfg.workflow)
File "/home/ai/anaconda3/envs/mmdet/lib/python3.9/site-packages/mmcv/runner/epoch_based_runner.py", line 127, in run
epoch_runner(data_loaders[i], **kwargs)
File "/home/ai/anaconda3/envs/mmdet/lib/python3.9/site-packages/mmcv/runner/epoch_based_runner.py", line 54, in train
self.call_hook('after_train_epoch')
File "/home/ai/anaconda3/envs/mmdet/lib/python3.9/site-packages/mmcv/runner/base_runner.py", line 309, in call_hook
getattr(hook, fn_name)(self)
File "/home/ai/anaconda3/envs/mmdet/lib/python3.9/site-packages/mmcv/runner/hooks/evaluation.py", line 267, in after_train_epoch
self._do_evaluate(runner)
File "/home/ai/anaconda3/envs/mmdet/lib/python3.9/site-packages/mmdet-2.21.0-py3.9.egg/mmdet/core/evaluation/eval_hooks.py", line 115, in _do_evaluate
results = multi_gpu_test(
File "/home/ai/anaconda3/envs/mmdet/lib/python3.9/site-packages/mmdet-2.21.0-py3.9.egg/mmdet/apis/test.py", line 100, in multi_gpu_test
for i, data in enumerate(data_loader):
File "/home/ai/anaconda3/envs/mmdet/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 355, in iter
return self._get_iterator()
File "/home/ai/anaconda3/envs/mmdet/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 301, in _get_iterator
return _MultiProcessingDataLoaderIter(self)
File "/home/ai/anaconda3/envs/mmdet/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 940, in init
self._reset(loader, first_iter=True)
File "/home/ai/anaconda3/envs/mmdet/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 971, in _reset
self._try_put_index()
File "/home/ai/anaconda3/envs/mmdet/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1205, in _try_put_index
index = self._next_index()
File "/home/ai/anaconda3/envs/mmdet/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 508, in _next_index
return next(self._sampler_iter) # may raise StopIteration
File "/home/ai/anaconda3/envs/mmdet/lib/python3.9/site-packages/torch/utils/data/sampler.py", line 227, in iter
for idx in self.sampler:
File "/home/ai/anaconda3/envs/mmdet/lib/python3.9/site-packages/mmdet-2.21.0-py3.9.egg/mmdet/datasets/samplers/distributed_sampler.py", line 33, in iter
math.ceil(self.total_size / len(indices)))[:self.total_size]
ZeroDivisionError: division by zero

About gliding vertex

The Gliding Vertex method in the paper seems to be able to detect arbitrary quadrilaterals, but can the Gliding Vertex implementation in MMRotate only detect regular rectangular boxes?

Mixed Precision Training

Describe the feature

Motivation
Mixed precision training is a common feature of many OpenMMLab projects.
It should be easy to integrate it with MMRotate. We need to release some configs and their corresponding models to verify the feature and provide examples.
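
For reference, mixed precision in MMRotate 0.x configs is enabled by adding an fp16 field on top of an existing config, exactly as the batch-size-8 S2ANet config quoted later on this page does. A minimal sketch (the loss-scale value is only an example):

# Append to an existing config to train with mixed precision.
fp16 = dict(loss_scale=dict(init_scale=512))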

Chinese Documentation Translation

I ran testing after training, but it got stuck computing mAP

Describe the bug
I finished training Rotated Faster R-CNN on the DOTA 1.5 train set.
I then tried to run testing on the DOTA 1.5 validation set, and it seems to get stuck while computing mAP.

Reproduction

  1. What command or script did you run?
python tools/test.py \
    configs/rotated_faster_rcnn/DC_rotated_faster_rcnn_r50_fpn_1x_dota_le90.py \
    work_dirs/DC_rotated_faster_rcnn_r50_fpn_1x_dota_le90/latest.pth \
    --eval mAP

And

python tools/test.py \
    configs/rotated_faster_rcnn/DC_rotated_faster_rcnn_r50_fpn_1x_dota_le90.py \
    work_dirs/DC_rotated_faster_rcnn_r50_fpn_1x_dota_le90/latest.pth \
    --eval mAP \
    --eval-options nproc=1
  2. Did you make any modifications on the code or config? Did you understand what you have modified?
    I did not change much, except a small debugging modification (shown in the attached screenshot).

  3. What dataset did you use?
    DOTA 1.5

Environment
(provided as an attached screenshot)

Error traceback
(provided as an attached screenshot)

Thank you in advance!

RiRoIAlignRotated CPU inference implementation

Describe the feature
Hello MMRotate team, thank you for your great work. I am wondering whether there is a plan to implement CPU inference for RiRoIAlignRotated, just like RoIAlignRotated.

Motivation
(error screenshot attached)

Thank you!

mAP of S2ANet under different batch sizes

I trained S2ANet (FP16) with batch size 2 and batch size 8 and got a 3.7% difference in mAP, which is a little weird.

batch size   lr      mAP
8            0.01    70.03
2            0.025   73.74

Full config for batch size 8:

dataset_type = 'DOTADataset'
data_root = '/datasets/Dota_mmrotate/dota/'
img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations', with_bbox=True),
    dict(type='RResize', img_scale=(1024, 1024)),
    dict(
        type='RRandomFlip',
        flip_ratio=[0.25, 0.25, 0.25],
        direction=['horizontal', 'vertical', 'diagonal'],
        version='le135'),
    dict(
        type='Normalize',
        mean=[123.675, 116.28, 103.53],
        std=[58.395, 57.12, 57.375],
        to_rgb=True),
    dict(type='Pad', size_divisor=32),
    dict(type='DefaultFormatBundle'),
    dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels'])
]
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='MultiScaleFlipAug',
        img_scale=(1024, 1024),
        flip=False,
        transforms=[
            dict(type='RResize'),
            dict(
                type='Normalize',
                mean=[123.675, 116.28, 103.53],
                std=[58.395, 57.12, 57.375],
                to_rgb=True),
            dict(type='Pad', size_divisor=32),
            dict(type='DefaultFormatBundle'),
            dict(type='Collect', keys=['img'])
        ])
]
data = dict(
    samples_per_gpu=8,
    workers_per_gpu=8,
    train=dict(
        type='DOTADataset',
        ann_file='/datasets/Dota_mmrotate/dota/trainval/annfiles/',
        img_prefix='/datasets/Dota_mmrotate/dota/trainval/images/',
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(type='LoadAnnotations', with_bbox=True),
            dict(type='RResize', img_scale=(1024, 1024)),
            dict(
                type='RRandomFlip',
                flip_ratio=[0.25, 0.25, 0.25],
                direction=['horizontal', 'vertical', 'diagonal'],
                version='le135'),
            dict(
                type='Normalize',
                mean=[123.675, 116.28, 103.53],
                std=[58.395, 57.12, 57.375],
                to_rgb=True),
            dict(type='Pad', size_divisor=32),
            dict(type='DefaultFormatBundle'),
            dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels'])
        ],
        version='le135'),
    val=dict(
        type='DOTADataset',
        ann_file='/datasets/Dota_mmrotate/dota/trainval/annfiles/',
        img_prefix='/datasets/Dota_mmrotate/dota/trainval/images/',
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(
                type='MultiScaleFlipAug',
                img_scale=(1024, 1024),
                flip=False,
                transforms=[
                    dict(type='RResize'),
                    dict(
                        type='Normalize',
                        mean=[123.675, 116.28, 103.53],
                        std=[58.395, 57.12, 57.375],
                        to_rgb=True),
                    dict(type='Pad', size_divisor=32),
                    dict(type='DefaultFormatBundle'),
                    dict(type='Collect', keys=['img'])
                ])
        ],
        version='le135'),
    test=dict(
        type='DOTADataset',
        ann_file='/datasets/Dota_mmrotate/dota/test/images/',
        img_prefix='/datasets/Dota_mmrotate/dota/test/images/',
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(
                type='MultiScaleFlipAug',
                img_scale=(1024, 1024),
                flip=False,
                transforms=[
                    dict(type='RResize'),
                    dict(
                        type='Normalize',
                        mean=[123.675, 116.28, 103.53],
                        std=[58.395, 57.12, 57.375],
                        to_rgb=True),
                    dict(type='Pad', size_divisor=32),
                    dict(type='DefaultFormatBundle'),
                    dict(type='Collect', keys=['img'])
                ])
        ],
        version='le135'))
evaluation = dict(interval=12, metric='mAP', nproc=1)
optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0001)
optimizer_config = dict(grad_clip=dict(max_norm=35, norm_type=2))
lr_config = dict(
    policy='step',
    warmup='linear',
    warmup_iters=500,
    warmup_ratio=0.3333333333333333,
    step=[8, 11])
runner = dict(type='EpochBasedRunner', max_epochs=12)
checkpoint_config = dict(interval=4)
log_config = dict(
    interval=50,
    hooks=[dict(type='TextLoggerHook'),
           dict(type='TensorboardLoggerHook')])
custom_hooks = [dict(type='NumClassCheckHook')]
dist_params = dict(backend='nccl')
log_level = 'INFO'
load_from = None
resume_from = None
workflow = [('train', 1)]
fp16 = dict(loss_scale=dict(init_scale=512))
angle_version = 'le135'
model = dict(
    type='S2ANet',
    backbone=dict(
        type='ResNet',
        depth=50,
        num_stages=4,
        out_indices=(0, 1, 2, 3),
        frozen_stages=1,
        zero_init_residual=False,
        norm_cfg=dict(type='BN', requires_grad=True),
        norm_eval=True,
        style='pytorch',
        init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50')),
    neck=dict(
        type='FPN',
        in_channels=[256, 512, 1024, 2048],
        out_channels=256,
        start_level=1,
        add_extra_convs='on_input',
        num_outs=5),
    fam_head=dict(
        type='RotatedRetinaHead',
        num_classes=15,
        in_channels=256,
        stacked_convs=2,
        feat_channels=256,
        assign_by_circumhbbox=None,
        anchor_generator=dict(
            type='RotatedAnchorGenerator',
            scales=[4],
            ratios=[1.0],
            strides=[8, 16, 32, 64, 128]),
        bbox_coder=dict(
            type='DeltaXYWHAOBBoxCoder',
            angle_range='le135',
            norm_factor=1,
            edge_swap=False,
            proj_xy=True,
            target_means=(0.0, 0.0, 0.0, 0.0, 0.0),
            target_stds=(1.0, 1.0, 1.0, 1.0, 1.0)),
        loss_cls=dict(
            type='FocalLoss',
            use_sigmoid=True,
            gamma=2.0,
            alpha=0.25,
            loss_weight=1.0),
        loss_bbox=dict(type='SmoothL1Loss', beta=0.11, loss_weight=1.0)),
    align_cfgs=dict(
        type='AlignConv',
        kernel_size=3,
        channels=256,
        featmap_strides=[8, 16, 32, 64, 128]),
    odm_head=dict(
        type='ODMRefineHead',
        num_classes=15,
        in_channels=256,
        stacked_convs=2,
        feat_channels=256,
        assign_by_circumhbbox=None,
        anchor_generator=dict(
            type='PseudoAnchorGenerator', strides=[8, 16, 32, 64, 128]),
        bbox_coder=dict(
            type='DeltaXYWHAOBBoxCoder',
            angle_range='le135',
            norm_factor=1,
            edge_swap=False,
            proj_xy=True,
            target_means=(0.0, 0.0, 0.0, 0.0, 0.0),
            target_stds=(1.0, 1.0, 1.0, 1.0, 1.0)),
        loss_cls=dict(
            type='FocalLoss',
            use_sigmoid=True,
            gamma=2.0,
            alpha=0.25,
            loss_weight=1.0),
        loss_bbox=dict(type='SmoothL1Loss', beta=0.11, loss_weight=1.0)),
    train_cfg=dict(
        fam_cfg=dict(
            assigner=dict(
                type='MaxIoUAssigner',
                pos_iou_thr=0.5,
                neg_iou_thr=0.4,
                min_pos_iou=0,
                ignore_iof_thr=-1,
                iou_calculator=dict(type='RBboxOverlaps2D')),
            allowed_border=-1,
            pos_weight=-1,
            debug=False),
        odm_cfg=dict(
            assigner=dict(
                type='MaxIoUAssigner',
                pos_iou_thr=0.5,
                neg_iou_thr=0.4,
                min_pos_iou=0,
                ignore_iof_thr=-1,
                iou_calculator=dict(type='RBboxOverlaps2D')),
            allowed_border=-1,
            pos_weight=-1,
            debug=False)),
    test_cfg=dict(
        nms_pre=2000,
        min_bbox_size=0,
        score_thr=0.05,
        nms=dict(iou_thr=0.1),
        max_per_img=2000))
work_dir = './work_dirs/s2a_bs8_fp16'
auto_resume = False
gpu_ids = range(0, 1)

Some questions about mAP evaluation

Hello, I am confused about the mAP implementation in MMRotate. It seems that the function eval_map() does not sweep the confidence score threshold to obtain different precision and recall values; instead, it computes precision and recall values according to the number of detections.

def eval_map(det_results,

Actually, I get a really high mAP value this way on my custom dataset, but the result does not match the visualization: every class has a really high AP, yet there are still wrong cases in the visualization.
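
For context, this is the usual VOC-style behaviour: all detections are ranked by confidence, and precision/recall are accumulated down the ranking, so no explicit score threshold is swept. The following is a minimal, self-contained sketch of that ranking logic (independent of MMRotate's actual eval_map implementation):

import numpy as np

def pr_from_ranked(scores, is_tp, num_gts):
    # Sort detections by descending confidence, then accumulate TP/FP counts.
    order = np.argsort(-np.asarray(scores, dtype=float))
    tp_flags = np.asarray(is_tp, dtype=float)[order]
    tp = np.cumsum(tp_flags)
    fp = np.cumsum(1.0 - tp_flags)
    recall = tp / max(num_gts, 1)
    precision = tp / np.maximum(tp + fp, np.finfo(float).eps)
    return recall, precision

# Example: 4 detections, 3 ground-truth boxes.
recall, precision = pr_from_ranked([0.9, 0.8, 0.6, 0.3], [1, 0, 1, 1], num_gts=3)
print(recall)     # approx. [0.333 0.333 0.667 1.0]
print(precision)  # approx. [1.0   0.5   0.667 0.75]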


I train normally, but there is no validation process

I use this shell command:
python -m torch.distributed.launch --nproc_per_node 4 train.py ./configs/r3det/r3det_tiny_r50_fpn_1x_dota_oc.py --launcher pytorch --gpus 4

and got:

2022-02-19 09:47:48,227 - mmrotate - INFO - workflow: [('train', 1)], max: 12 epochs
2022-02-19 09:47:48,227 - mmrotate - INFO - Checkpoints will be saved to /home/ai/yumo/ship_det/mmrotate-main/work_dirs/r3det_tiny_r50_fpn_1x_dota_oc by HardDiskBackend.
2022-02-19 09:48:13,565 - mmcv - INFO - Reducer buckets have been rebuilt in this iteration.
2022-02-19 09:48:32,743 - mmrotate - INFO - Epoch [1][50/100] lr: 9.967e-04, eta: 0:17:03, time: 0.890, data_time: 0.494, memory: 3223, s0.loss_cls: 1.0393, s0.loss_bbox: 0.3072, sr0.loss_cls: 0.9379, sr0.loss_bbox: 0.3254, loss: 2.6098, grad_norm: 4.0230
2022-02-19 09:48:52,601 - mmrotate - INFO - Exp name: r3det_tiny_r50_fpn_1x_dota_oc.py
2022-02-19 09:48:52,602 - mmrotate - INFO - Epoch [1][100/100] lr: 1.163e-03, eta: 0:11:48, time: 0.397, data_time: 0.006, memory: 3224, s0.loss_cls: 0.5890, s0.loss_bbox: 0.2449, sr0.loss_cls: 0.4457, sr0.loss_bbox: 0.2588, loss: 1.5385, grad_norm: 4.9931
2022-02-19 09:49:37,768 - mmrotate - INFO - Epoch [2][50/100] lr: 1.330e-03, eta: 0:12:40, time: 0.885, data_time: 0.490, memory: 3224, s0.loss_cls: 0.4540, s0.loss_bbox: 0.2310, sr0.loss_cls: 0.3741, sr0.loss_bbox: 0.2568, loss: 1.3159, grad_norm: 4.4678
2022-02-19 09:49:57,411 - mmrotate - INFO - Exp name: r3det_tiny_r50_fpn_1x_dota_oc.py
2022-02-19 09:49:57,411 - mmrotate - INFO - Epoch [2][100/100] lr: 1.497e-03, eta: 0:10:41, time: 0.393, data_time: 0.006, memory: 3224, s0.loss_cls: 0.3914, s0.loss_bbox: 0.2394, sr0.loss_cls: 0.3382, sr0.loss_bbox: 0.2556, loss: 1.2246, grad_norm: 4.6190
2022-02-19 09:50:42,743 - mmrotate - INFO - Epoch [3][50/100] lr: 1.663e-03, eta: 0:10:56, time: 0.893, data_time: 0.492, memory: 3224, s0.loss_cls: 0.3672, s0.loss_bbox: 0.2198, sr0.loss_cls: 0.3200, sr0.loss_bbox: 0.2618, loss: 1.1688, grad_norm: 4.4310

'RoIAlignRotated' object has no attribute 'out_size'; there also seems to be no related documentation on the official website

Thanks for your error report and we appreciate it a lot.

Checklist

  1. I have searched related issues but cannot get the expected help.
  2. I have read the FAQ documentation but cannot get the expected help.
  3. The bug has not been fixed in the latest version.

Describe the bug
A clear and concise description of what the bug is.

Reproduction

  1. What command or script did you run?
A placeholder for the command.
  2. Did you make any modifications on the code or config? Did you understand what you have modified?
  3. What dataset did you use?

Environment

  1. Please run python mmrotate/utils/collect_env.py to collect necessary environment information and paste it here.
  2. You may add additional information that may be helpful for locating the problem, such as
    • How you installed PyTorch [e.g., pip, conda, source]
    • Other environment variables that may be related (such as $PATH, $LD_LIBRARY_PATH, $PYTHONPATH, etc.)

Error traceback
If applicable, paste the error traceback here.

A placeholder for the traceback.

Bug fix
If you have already identified the reason, you can provide the information here. If you are willing to create a PR to fix it, please also leave a comment here and that would be much appreciated!

Roadmap of MMRotate

We keep this issue open to collect feature requests from users and hear your voice. Our monthly release plan is also available here.

You can either:

Suggest a new feature by leaving a comment.
Vote for a feature request with 👍 or against it with 👎. (Remember that developers are busy and cannot respond to all feature requests, so vote for the one you care about most!)
Tell us that you would like to help implement one of the features in the list or review the PRs. (This is the greatest thing to hear!)

We also released our TODO list on the project page. Most of the TODO items are described in their corresponding issues (those labeled Dev-RD) with detailed requirement documentation. Feel free to leave a message in the issue of any item, and create a PR if you are interested in any of the items.

Reproducing the demo raises ModuleNotFoundError: No module named 'mmcv._ext'

My versions:
Python 3.9.7
mmcv 1.4.6
mmdet 2.22.0
mmrotate 0.1.0
torch 1.7.1+cu110
torchaudio 0.7.2
torchvision 0.8.2+cu110
nvcc -V shows: Cuda compilation tools, release 11.0, V11.0.194
nvidia-smi shows: CUDA Version: 11.1

Installation completed successfully via (1) pip install openmim and (2) mim install mmrotate.
I created a checkpoint folder to store the pretrained model,
and ran the test command: python demo/image_demo.py demo/demo.jpg configs/oriented_rcnn/oriented_rcnn_r50_fpn_1x_dota_le90.py checkpoint/oriented_rcnn_r50_fpn_1x_dota_le90-6d2b2ce0.pth demo/vis.jpg device 6

The error:
Traceback (most recent call last):
  File "/home/C/LUOLIE/mmrotate-main/demo/image_demo.py", line 14, in <module>
    from mmdet.apis import inference_detector, init_detector
  File "/home/zhuyu/anaconda3/envs/mmmlab/lib/python3.9/site-packages/mmdet/apis/__init__.py", line 2, in <module>
    from .inference import (async_inference_detector, inference_detector,
  File "/home/zhuyu/anaconda3/envs/mmmlab/lib/python3.9/site-packages/mmdet/apis/inference.py", line 7, in <module>
    from mmcv.ops import RoIPool
  File "/home/zhuyu/anaconda3/envs/mmmlab/lib/python3.9/site-packages/mmcv/ops/__init__.py", line 2, in <module>
    from .active_rotated_filter import active_rotated_filter
  File "/home/zhuyu/anaconda3/envs/mmmlab/lib/python3.9/site-packages/mmcv/ops/active_rotated_filter.py", line 8, in <module>
    ext_module = ext_loader.load_ext(
  File "/home/zhuyu/anaconda3/envs/mmmlab/lib/python3.9/site-packages/mmcv/utils/ext_loader.py", line 13, in load_ext
    ext = importlib.import_module('mmcv.' + name)
  File "/home/zhuyu/anaconda3/envs/mmmlab/lib/python3.9/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
ModuleNotFoundError: No module named 'mmcv._ext'

I am not sure where the problem is; any guidance would be appreciated.

Classification accuracy is low

How do I set my own classes, and how many classes can be predicted in total? After training, all the predicted boxes are on the correct objects, but the class labels are all wrong.

test


ValueError: need at least one array to concatenate

Environment

TorchVision: 0.11.3
OpenCV: 4.5.5
MMCV: 1.4.5
MMCV Compiler: GCC 9.1
MMCV CUDA Compiler: 11.1
MMRotate: 0.1.0+
CUDA Runtime 11.3
PyTorch: 1.10.2
Python: 3.7.11
GPU 0,1: GeForce RTX 3090

Error traceback
File "/home/deep/Libraries/anaconda3/envs/mmrotate/lib/python3.7/site-packages/torch/utils/data/sampler.py", line 229, in iter
for idx in self.sampler:
File "/home/deep/Libraries/anaconda3/envs/mmrotate/lib/python3.7/site-packages/mmdet/datasets/samplers/group_sampler.py", line 36, in iter
indices = np.concatenate(indices)
File "<array_function internals>", line 6, in concatenate
ValueError: need at least one array to concatenate

error installing mmcv-full

First, I installed the project following the guide in install.md, but got the package mmcv instead of mmcv-full,
so I uninstalled it and tried to install mmcv-full. Then this error occurred (part of it):

./mmcv/ops/csrc/pytorch/cpu/sparse_indice.cpp:16:42: fatal error: utils/spconv/spconv/geometry.h: No such file or directory
compilation terminated.
error: command 'gcc' failed with exit status 1
----------------------------------------
ERROR: Command errored out with exit status 1: /project/Anaconda3/envs/openmmlab/bin/python -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-uqqt6l7w/mmcv-full_a586b020ddee4a66a2646df4a333c5ca/setup.py'"'"'; file='"'"'/tmp/pip-install-uqqt6l7w/mmcv-full_a586b020ddee4a66a2646df4a333c5ca/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(file) if os.path.exists(file) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' install --record /tmp/pip-record-grj8h9t0/install-record.txt --single-version-externally-managed --compile --install-headers /project/Anaconda3/envs/openmmlab/include/python3.7m/mmcv-full Check the logs for full command output.

Does the build process require a GPU? Also, I am building on a server without network access; where can I get the package?
Thanks!

dataset label's format

Do all label files have to be in the DOTA-style txt format,
i.e. x1 x2 x3 x4 ......?
What about SSDD and HRSID?

Accuracy

Can the SOTA results reported in the papers be reproduced directly with this toolbox?

Hello, I am loading rot_trans_swin


--eval mAP error

Thanks for your error report and we appreciate it a lot.

Checklist

  1. I have searched related issues but cannot get the expected help.
  2. I have read the FAQ documentation but cannot get the expected help.
  3. The bug has not been fixed in the latest version.

Describe the bug
A clear and concise description of what the bug is.

Reproduction

  1. What command or script did you run?
python ./tools/test.py \            
  configs/rotated_retinanet/rotated_retinanet_obb_r50_fpn_1x_dota_le90.py \
  rotated_retinanet_obb_r50_fpn_1x_dota_le90-c0097bc4.pth --eval mAP     
  2. Did you make any modifications on the code or config? Did you understand what you have modified?
  3. What dataset did you use?

Environment

  1. Please run python mmrotate/utils/collect_env.py to collect necessary environment information and paste it here.
  2. You may add additional information that may be helpful for locating the problem, such as
    • How you installed PyTorch [e.g., pip, conda, source]
    • Other environment variables that may be related (such as $PATH, $LD_LIBRARY_PATH, $PYTHONPATH, etc.)
fatal: not a git repository (or any of the parent directories): .git
sys.platform: linux
Python: 3.7.10 (default, Jun  4 2021, 14:48:32) [GCC 7.5.0]
CUDA available: True
GPU 0: NVIDIA GeForce RTX 3090
CUDA_HOME: /usr/local/cuda
NVCC: Build cuda_11.2.r11.2/compiler.29618528_0
GCC: gcc (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
PyTorch: 1.10.0+cu113
PyTorch compiling details: PyTorch built with:
  - GCC 7.3
  - C++ Version: 201402
  - Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v2.2.3 (Git Hash 7336ca9f055cf1bfa13efb658fe15dc9b41f0740)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - LAPACK is enabled (usually provided by MKL)
  - NNPACK is enabled
  - CPU capability usage: AVX512
  - CUDA Runtime 11.3
  - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86
  - CuDNN 8.2
  - Magma 2.5.2
  - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.3, CUDNN_VERSION=8.2.0, CXX_COMPILER=/opt/rh/devtoolset-7/root/usr/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.10.0, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, 

TorchVision: 0.11.1+cu113
OpenCV: 4.5.4
MMCV: 1.4.5
MMCV Compiler: GCC 7.3
MMCV CUDA Compiler: 11.3
MMRotate: 0.1.0+

Error traceback
If applicable, paste the error traceback here.

load checkpoint from local path: rotated_retinanet_obb_r50_fpn_1x_dota_le90-c0097bc4.pth
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 10833/10833, 18.6 task/s, elapsed: 584s, ETA:     0sTraceback (most recent call last):
  File "./tools/test.py", line 238, in <module>
    main()
  File "./tools/test.py", line 230, in main
    metric = dataset.evaluate(outputs, **eval_kwargs)
  File "/home/featurize/mmrotate/mmrotate/datasets/dota.py", line 202, in evaluate
    logger=logger)
  File "/home/featurize/mmrotate/mmrotate/datasets/dota.py", line 375, in eval_map
    det_results, annotations, i)
  File "/home/featurize/mmrotate/mmrotate/datasets/dota.py", line 613, in get_cls_results
    cls_gts.append(ann['bboxes'][gt_inds, :])
TypeError: list indices must be integers or slices, not tuple

Bug fix
If you have already identified the reason, you can provide the information here. If you are willing to create a PR to fix it, please also leave a comment here and that would be much appreciated!

Clean FAQ

There is no configs/fp16 directory in mmrotate. We should update the faq.md and fix them.

GPU memory increasing, then eventually OOM

I tried to run Rotated Faster R-CNN on the DOTA 1.5 dataset.
I found that GPU memory keeps increasing throughout training and eventually runs out (OOM).
The screenshot below is from my run with 1 GPU.

(Screenshot from 2022-02-28 showing GPU memory usage, attached.)

ValueError: need at least one array to concatenate

I ran training:

python tools/train.py ./configs/oriented_rcnn/oriented_rcnn_r50_fpn_1x_dota_le90.py 

The following error occurred; I hope someone can help resolve it:

Traceback (most recent call last):
  File "tools/train.py", line 182, in <module>
    main()
  File "tools/train.py", line 178, in main
    meta=meta)
File "/home/mmrotate/mmrotate/apis/train.py", line 156, in train_detector
    return _MultiProcessingDataLoaderIter(self)
  File "/home/anaconda3/envs/openmmlab/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 827, in __init__
    self._reset(loader, first_iter=True)
  File "/home/anaconda3/envs/openmmlab/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 857, in _reset
    self._try_put_index()
  File "/home/anaconda3/envs/openmmlab/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1091, in _try_put_index
    index = self._next_index()
  File "/home/anaconda3/envs/openmmlab/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 427, in _next_index
    return next(self._sampler_iter)  # may raise StopIteration
  File "/home/anaconda3/envs/openmmlab/lib/python3.7/site-packages/torch/utils/data/sampler.py", line 227, in __iter__
    for idx in self.sampler:
  File "/home/anaconda3/envs/openmmlab/lib/python3.7/site-packages/mmdet/datasets/samplers/group_sampler.py", line 36, in __iter__
    indices = np.concatenate(indices)
  File "<__array_function__ internals>", line 6, in concatenate
ValueError: need at least one array to concatenate

The GPU is a 2080 Ti.

Environment: CUDA 10.2, PyTorch 1.7.1

--show-dir error

Thanks for your error report and we appreciate it a lot.

Checklist

  1. I have searched related issues but cannot get the expected help.
  2. I have read the FAQ documentation but cannot get the expected help.
  3. The bug has not been fixed in the latest version.

Describe the bug
A clear and concise description of what the bug is.

Reproduction

  1. What command or script did you run?
(base) ➜python ./tools/test.py \
  configs/rotated_retinanet/rotated_retinanet_obb_r50_fpn_1x_dota_le90.py \
  rotated_retinanet_obb_r50_fpn_1x_dota_le90-c0097bc4.pth --show-dir work_dirs/vis

  2. Did you make any modifications on the code or config? Did you understand what you have modified?
  3. What dataset did you use?

Environment

  1. Please run python mmrotate/utils/collect_env.py to collect necessary environment information and paste it here.
  2. You may add additional information that may be helpful for locating the problem, such as
    • How you installed PyTorch [e.g., pip, conda, source]
    • Other environment variables that may be related (such as $PATH, $LD_LIBRARY_PATH, $PYTHONPATH, etc.)
fatal: not a git repository (or any of the parent directories): .git
sys.platform: linux
Python: 3.7.10 (default, Jun  4 2021, 14:48:32) [GCC 7.5.0]
CUDA available: True
GPU 0: NVIDIA GeForce RTX 3090
CUDA_HOME: /usr/local/cuda
NVCC: Build cuda_11.2.r11.2/compiler.29618528_0
GCC: gcc (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
PyTorch: 1.10.0+cu113
PyTorch compiling details: PyTorch built with:
  - GCC 7.3
  - C++ Version: 201402
  - Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v2.2.3 (Git Hash 7336ca9f055cf1bfa13efb658fe15dc9b41f0740)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - LAPACK is enabled (usually provided by MKL)
  - NNPACK is enabled
  - CPU capability usage: AVX512
  - CUDA Runtime 11.3
  - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86
  - CuDNN 8.2
  - Magma 2.5.2
  - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.3, CUDNN_VERSION=8.2.0, CXX_COMPILER=/opt/rh/devtoolset-7/root/usr/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.10.0, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, 

TorchVision: 0.11.1+cu113
OpenCV: 4.5.4
MMCV: 1.4.5
MMCV Compiler: GCC 7.3
MMCV CUDA Compiler: 11.3
MMRotate: 0.1.0+

Error traceback
If applicable, paste the error traceback here.

[                                                  ] 0/10833, elapsed: 0s, ETA:Traceback (most recent call last):
  File "./tools/test.py", line 238, in <module>
    main()
  File "./tools/test.py", line 204, in main
    args.show_score_thr)
  File "/home/featurize/mmdetection/mmdet/apis/test.py", line 61, in single_gpu_test
    score_thr=show_score_thr)
TypeError: show_result() got an unexpected keyword argument 'mask_color'

Bug fix
If you have already identified the reason, you can provide the information here. If you are willing to create a PR to fix it, please also leave a comment here and that would be much appreciated!

eval_map() got an unexpected keyword argument 'version'

Thanks for your error report and we appreciate it a lot.

Checklist

  1. I have searched related issues but cannot get the expected help.
  2. I have read the FAQ documentation but cannot get the expected help.
  3. The bug has not been fixed in the latest version.

Describe the bug
A clear and concise description of what the bug is.
I trained on my own dataset and evaluated it, but it returns: eval_map() got an unexpected keyword argument 'version'.
Reproduction

  1. What command or script did you run?
    python tool/train.py
    A placeholder for the command.

  2. Did you make any modifications on the code or config? Did you understand what you have modified?

  3. What dataset did you use?

Environment

  1. Please run python mmrotate/utils/collect_env.py to collect necessary environment information and paste it here.
  2. You may add additional information that may be helpful for locating the problem, such as
    • How you installed PyTorch [e.g., pip, conda, source]
    • Other environment variables that may be related (such as $PATH, $LD_LIBRARY_PATH, $PYTHONPATH, etc.)

Error traceback
If applicable, paste the error traceback here.

A placeholder for the traceback.

Bug fix
If you have already identified the reason, you can provide the information here. If you are willing to create a PR to fix it, please also leave a comment here and that would be much appreciated!
In dota.py, comment out line 205: version=self.version

convex_iou

def convex_iou(pointsets, polygons):
pointsets (torch.Tensor): It has shape (N, 18), indicating (x1, y1, x2, y2, ..., x9, y9) for each row.

The output of 'rbbox_overlaps' is abnormal when bbox is too small

Thanks for your error report and we appreciate it a lot.

Checklist

  1. I have searched related issues but cannot get the expected help.
  2. I have read the FAQ documentation but cannot get the expected help.
  3. The bug has not been fixed in the latest version.

Describe the bug

I use mmrotate.core.bbox.rbbox_overlaps in my own model. In the early stages of training, the network output is not stable, so many very small detection boxes are generated. When calculating the IoU between the small predicted bboxes and the gt bboxes, the output of 'rbbox_overlaps' is abnormal.

Reproduction

I wrote a piece of code to reproduce the problem:

import torch

from mmrotate.core.bbox import rbbox_overlaps

predict = [[903.34, 1034.4, 1.81e-7, 1e-7, -0.312]]
gt = [[2.1525e+02, 7.5750e+01, 3.3204e+01, 1.2649e+01, 3.2175e-01],
      [3.0013e+02, 7.7144e+02, 4.9222e+02, 3.1368e+02, -1.3978e+00],
      [8.4887e+02, 6.9989e+02, 4.6854e+02, 3.0743e+02, -1.4008e+00],
      [8.5250e+02, 7.0250e+02, 7.6181e+02, 3.8200e+02, -1.3984e+00]]

predict_tensor = torch.tensor(predict, device='cuda')
gt_tensor = torch.tensor(gt, device='cuda')

iou = rbbox_overlaps(predict_tensor, gt_tensor)
print(iou)

the output of the code is:

tensor([[  371957.7500,  9881571.0000,           inf, -9312366.0000]],
       device='cuda:0')

Environment

  1. Please run python mmrotate/utils/collect_env.py to collect necessary environment information and paste it here.
  2. You may add additional information that may be helpful for locating the problem, such as
    • How you installed PyTorch [e.g., pip, conda, source]
    • Other environment variables that may be related (such as $PATH, $LD_LIBRARY_PATH, $PYTHONPATH, etc.)

Error traceback
If applicable, paste the error traceback here.

A placeholder for the traceback.

Bug fix
If you have already identified the reason, you can provide the information here. If you are willing to create a PR to fix it, please also leave a comment here and that would be much appreciated!

train mydataset, labeltxt looks like this, acc is always 100


Maybe you can add the copy-paste method to the data augmentation

Describe the feature

Motivation
A clear and concise description of the motivation of the feature.
Ex1. It is inconvenient when [....].
Ex2. There is a recent paper [....], which is very helpful for [....].

Related resources
If there is an official code release or third-party implementations, please also provide the information here, which would be very helpful.

Additional context
Add any other context or screenshots about the feature request here.
If you would like to implement the feature and create a PR, please leave a comment here and that would be much appreciated.

AttributeError: 'RoIAlignRotated' object has no attribute 'out_size'

  1. shell:
    python demo/image_demo.py
    demo/demo.jpg
    configs/oriented_rcnn/oriented_rcnn_r50_fpn_1x_dota_le90.py
    configs/oriented_rcnn/oriented_rcnn_r50_fpn_1x_dota_le90-6d2b2ce0.pth
    demo/vis.jpg

  2. env:
    cuda 10.2
    torch 1.10.2
    mmcv 1.4.6
    mmcv-full 1.4.6
    mmdet 2.22.0

  3. bug:

AttributeError: 'RoIAlignRotated' object has no attribute 'out_size'

warnings when I run the demo

When I run the demo, two warnings appear:
UserWarning: DeprecationWarning: num_anchors is deprecated, for consistency or also use num_base_priors instead
warnings.warn('DeprecationWarning: num_anchors is deprecated, '
UserWarning: DeprecationWarning: anchor_generator is deprecated, please use "prior_generator" instead warnings.warn('DeprecationWarning: anchor_generator is deprecated, '
I hope someone can help.

Hello, I got an error when loading roi_trans_swin_tiny_fpn_1x_dota_le90-ddeee9ae.pth

2022-03-03 17:02:09,693 - mmrotate - INFO - Set random seed to 882269402, deterministic: False
/home/lsj/anaconda3/envs/JSRMMrotate/lib/python3.7/site-packages/mmdet/models/dense_heads/anchor_head.py:116: UserWarning: DeprecationWarning: num_anchors is deprecated, for consistency or also use num_base_priors instead
warnings.warn('DeprecationWarning: num_anchors is deprecated, '
2022-03-03 17:02:10,175 - mmdet - INFO - load checkpoint from local path: work_dirs/pretrain/roi_trans_swin_tiny_fpn_1x_dota_le90-ddeee9ae.pth
Traceback (most recent call last):
File "/home/t4yp/jsr/mmrotate/tools/train.py", line 182, in
main()
File "/home/t4yp/jsr/mmrotate/tools/train.py", line 156, in main
model.init_weights()
File "/home/lsj/anaconda3/envs/JSRMMrotate/lib/python3.7/site-packages/mmcv/runner/base_module.py", line 116, in init_weights
m.init_weights()
File "/home/lsj/anaconda3/envs/JSRMMrotate/lib/python3.7/site-packages/mmdet/models/backbones/swin.py", line 727, in init_weights
table_current = self.state_dict()[table_key]
KeyError: 'backbone.stages.0.blocks.0.attn.w_msa.relative_position_bias_table'

The error occurs when using the config file roi_trans_swin_tiny_fpn_1x_dota_le90.py together with the corresponding pretrained model provided in the model zoo.
When I switch to not using the pretrained model, training works normally.

Problems encountered during SASM training

Why do input and target have different dimensions? It then raises this error: assert input.size(0) == target.size(0) AssertionError
