
bhrl's People

Contributors

hero-y

bhrl's Issues

onnx conversion error

I used your pytorch2onnx.py to convert the model to ONNX, and the error is as follows:
File "/media/xin/work1/github_pro/BHRL/mmdet/models/detectors/base.py", line 168, in forward
return self.onnx_export(img[0], img_metas[0])
File "/media/xin/work1/github_pro/BHRL/mmdet/models/detectors/two_stage.py", line 194, in onnx_export
x = self.extract_feat(img)
File "/media/xin/work1/github_pro/BHRL/mmdet/models/detectors/bhrl.py", line 49, in extract_feat
rf_feat = img[1]
IndexError: index 1 is out of bounds for dimension 0 with size 1
Could you please advise? Thank you.
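
The traceback suggests that BHRL's extract_feat expects a pair of inputs (the target image plus the reference/query image), while the onnx_export inherited from two_stage.py passes only a single tensor. Below is a minimal, unofficial sketch of a workaround under that assumption; the wrapper class and the input shapes are hypothetical, not part of the repository:

    import torch

    class BHRLOnnxWrapper(torch.nn.Module):
        # Hypothetical wrapper: feeds both images so that extract_feat
        # receives img[0] (target) and img[1] (reference), matching bhrl.py.
        def __init__(self, detector):
            super().__init__()
            self.detector = detector

        def forward(self, target_img, ref_img):
            # Assumption: extract_feat accepts a sequence indexed as in bhrl.py.
            return self.detector.extract_feat([target_img, ref_img])

    # Usage sketch (shapes are placeholders):
    # wrapper = BHRLOnnxWrapper(model).eval()
    # torch.onnx.export(
    #     wrapper,
    #     (torch.randn(1, 3, 800, 800), torch.randn(1, 3, 192, 192)),
    #     'bhrl.onnx', opset_version=11)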

Details on the ref_ann_file

Hi @hero-y

Could you please explain the use and significance of the ref_ann_file? How can it be created for different datasets, and how should it be tweaked for different settings?
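
In case a concrete example helps: the exact format of ref_ann_file should be checked against the released files, but assuming it is a COCO-style JSON that keeps a small pool of annotated instances per category to serve as query/reference examples, a rough generation sketch could look like this (file names and the per-category limit are arbitrary placeholders):

    import json

    # Assumption: the reference annotation file is COCO-style JSON restricted
    # to a few clean (non-crowd) instances per category, used as query examples.
    with open('instances_train2017.json') as f:
        coco = json.load(f)

    refs_per_cat = {}
    for ann in coco['annotations']:
        refs_per_cat.setdefault(ann['category_id'], [])
        if len(refs_per_cat[ann['category_id']]) < 5 and not ann.get('iscrowd', 0):
            refs_per_cat[ann['category_id']].append(ann)

    ref_anns = [a for anns in refs_per_cat.values() for a in anns]
    ref_img_ids = {a['image_id'] for a in ref_anns}

    ref_json = {
        'images': [im for im in coco['images'] if im['id'] in ref_img_ids],
        'annotations': ref_anns,
        'categories': coco['categories'],
    }
    with open('ref_ann_file.json', 'w') as f:
        json.dump(ref_json, f)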

Segmentation Masks

Hi,

For COCO training, the gt_masks (segmentation maps) from the COCO annotation files are kept in the training pipeline, as shown in the config below.
BHRL/configs/coco/split3/BHRL.py

(screenshot of the training pipeline in BHRL/configs/coco/split3/BHRL.py)

However, there is no segmentation information for the VOC dataset, and segmentation masks are not kept in the VOC training pipeline. For COCO training, is there another branch for segmentation in the RCNN block? How are the segmentation maps used during COCO training?

I would be glad if you could help me with these.
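
Not the authors' answer, but for context: in standard mmdetection configs, masks only enter the data pipeline when LoadAnnotations is called with with_mask=True and 'gt_masks' appears in the Collect keys, and they are only consumed if the roi_head actually has a mask branch; otherwise they are carried along and ignored. A minimal sketch of the two pipeline entries in question (other transforms omitted; values are typical defaults, not necessarily BHRL's exact settings):

    # Sketch of the relevant entries in an mmdetection-style train pipeline.
    train_pipeline = [
        dict(type='LoadImageFromFile'),
        dict(type='LoadAnnotations', with_bbox=True, with_mask=True),
        dict(type='DefaultFormatBundle'),
        dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']),
    ]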

Visual result

Thank you for sharing your great work. When I follow your README, I can only save the .pkl result files, but I want to visualize my results. How should I do that? Also, during testing, can I directly visualize the results by passing in a query image and a target image? Thank you.
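
Not an official answer, but assuming BHRL keeps the upstream mmdetection APIs, two options are commonly available: tools/test.py usually accepts --show / --show-dir in addition to --out, and the base detector provides show_result for drawing a saved .pkl entry onto an image. A sketch of the second option (all paths are placeholders, and init_detector/show_result are assumed to still work for this detector):

    import mmcv
    from mmdet.apis import init_detector

    # Build the detector and draw one saved result onto its target image.
    model = init_detector('configs/coco/split3/BHRL.py', 'checkpoint.pth',
                          device='cuda:0')
    results = mmcv.load('results.pkl')   # list saved by tools/test.py --out
    model.show_result('path/to/target_image.jpg', results[0],
                      score_thr=0.3, out_file='vis/result_0.jpg')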

Class predictions

Hello, thank you for the code! I am trying to understand the model outputs by running the test code on the Pascal VOC dataset.
The model produces the "result" variable at line 35 of tools/test.py, as below:
result = model(return_loss=False, rescale=True, **data)
Here, the model outputs arrays with 5 columns: the bounding-box coordinates and the classification score. But I couldn't find the class labels predicted by the model.
Where does the model output the predicted class labels?
I would be glad if you could help me with this!
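
Not the authors' answer, but for context: in mmdetection the per-image result is a list containing one (N, 5) array per class, with columns [x1, y1, x2, y2, score], and the predicted class label is simply the index of the array within that list; for one-shot detection this is typically a single foreground class (the query category). A small sketch under that assumption (the .pkl path is a placeholder):

    import mmcv
    import numpy as np

    results = mmcv.load('results.pkl')   # saved by tools/test.py --out
    result = results[0]                  # per-image: list of (N, 5) arrays
    bboxes = np.vstack(result)           # columns: x1, y1, x2, y2, score
    labels = np.concatenate([
        np.full(per_cls.shape[0], cls_idx, dtype=np.int64)
        for cls_idx, per_cls in enumerate(result)
    ])
    scores = bboxes[:, 4]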

What about custom dataset training?

Hi, thanks for your work.

I wonder how I can train and test on my own custom dataset. Should I create a reference annotation file? Could you share the script used to generate it?
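
Not an official recipe, but a rough sketch of what pointing the config at a custom dataset might look like, assuming COCO-format annotations and the standard mmdetection data layout; a ref_ann_file for your own classes would still need to be generated (see the ref_ann_file issue above), and all paths below are placeholders:

    # Hypothetical data override for a custom COCO-format dataset; the
    # remaining fields (pipelines, classes, ref_ann_file) would mirror the
    # released configs under configs/coco/.
    dataset_type = 'CocoDataset'
    data_root = 'data/my_dataset/'

    data = dict(
        train=dict(
            type=dataset_type,
            ann_file=data_root + 'annotations/train.json',
            img_prefix=data_root + 'images/train/'),
        val=dict(
            type=dataset_type,
            ann_file=data_root + 'annotations/val.json',
            img_prefix=data_root + 'images/val/'),
        test=dict(
            type=dataset_type,
            ann_file=data_root + 'annotations/val.json',
            img_prefix=data_root + 'images/val/'))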

Support for query images from other dataset

@hero-y
Can we include query images for a particular class from a different dataset? Currently, they are sourced from the same dataset (COCO or VOC). What if we wanted to use query images of the class person from PASCAL VOC with target images from MSCOCO?

Pretrained weights

Hi, I am trying to evaluate the one-shot model on coco_split3, but when I load the coco_split3 checkpoint, some of the keys cannot be loaded, as in the warning below.
(screenshot of the missing-keys warning when loading the checkpoint)
What could be the reason that the model is not loaded properly? I would be glad if you could help me with this.
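
Not the authors' answer, but a small diagnostic sketch that can help narrow this down: comparing the checkpoint's keys against a freshly built model's keys usually shows whether the mismatch comes from an architecture/config difference (e.g. a different roi_layer) or just from a key-prefix/naming issue. Checkpoint and config paths below are placeholders:

    import torch
    from mmcv import Config
    from mmdet.models import build_detector

    cfg = Config.fromfile('configs/coco/split3/BHRL.py')
    model = build_detector(cfg.model)

    ckpt = torch.load('coco_split3.pth', map_location='cpu')
    state = ckpt.get('state_dict', ckpt)

    missing = sorted(set(model.state_dict()) - set(state))      # in model, not in ckpt
    unexpected = sorted(set(state) - set(model.state_dict()))   # in ckpt, not in model
    print('missing keys:', missing[:20])
    print('unexpected keys:', unexpected[:20])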

Different Architectures for COCO dataset and PASCAL VOC Dataset

There is a difference between the model configurations provided for the PASCAL VOC and COCO datasets in BHRLRoIHead.
In the VOC configuration:

roi_layer=dict(
    type='DeformRoIPoolPack',
    output_size=7,
    output_channels=256),

In the COCO configuration:

roi_layer=dict(type='RoIAlign', out_size=7, sampling_ratio=0),

What is the reason for this difference between architectures?

I would be glad if you could help me with this.

Test demo code!

Thanks for your great work! Could you provide demo code for producing qualitative results? Thank you!

Pre-trained models

Hi, thank you for the work! Five pre-trained models are provided: four COCO models, one per split, and a VOC model. The COCO models were trained on the four COCO dataset splits, but I am confused about the VOC model. Was the VOC model trained with only the VOC train set, or with both the COCO and VOC train sets? I would be glad if you could help me with this.
