
rk-net's People

Contributors

aggman96, layumi


Forkers

layumi

rk-net's Issues

Keypoint Visualization

Dear Authors,

Thank you for releasing the code for your work; it is really helpful.
Could you please help me with the keypoint visualization performed after Stage 2 on the feature map generated by USAM?
The feature map generated for one image (3x224x224) has size 256x56x56. To extract keypoints by thresholding, were these feature maps first reduced to 2D, say by taking the channel-wise mean, and then min-max normalized and thresholded?
Further, how were these keypoints mapped back to the original input image, given that its size differs from that of the feature map?
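
For concreteness, here is a minimal sketch of the procedure I have in mind, assuming channel-wise mean pooling, min-max normalization, a fixed threshold, and a uniform scale factor of 224/56 = 4 to map coordinates back to the input; none of these choices are confirmed by the released code:

    import numpy as np

    def extract_keypoints(feat, threshold=0.7, stride=4):
        """Hypothetical keypoint extraction from a (256, 56, 56) USAM feature map."""
        heat = feat.mean(axis=0)                    # channel-wise mean -> (56, 56)
        heat = (heat - heat.min()) / (heat.max() - heat.min() + 1e-8)  # min-max to [0, 1]
        ys, xs = np.where(heat > threshold)         # feature-map cells above the threshold
        # scale back to the 224x224 input: 224 / 56 = 4 pixels per feature-map cell
        return np.stack([xs, ys], axis=1) * stride

    # e.g. keypoints = extract_keypoints(np.random.rand(256, 56, 56))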

Eagerly waiting for your response, really appreciate your help.

Training files for CVUSA and CVACT datasets

Hello, I recently read your paper and saw the comparison results on the CVUSA and CVACT datasets. Are the training files for these two datasets the same as those used for training on University-1652?

No such file or directory: './model/RK-Net'

Thanks for sharing this project, but I ran into a problem when trying to reproduce the experimental results from your paper, as shown below:
File "train.py", line 364, in
os.mkdir(dir_name)
FileNotFoundError: [Errno 2] No such file or directory: './model/RK-Net'

RK-Net/train.py

Line 361 in 14356da

dir_name = os.path.join('./model',name)

Could you please tell me what I should do to solve this problem?
Thanks again!
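
For reference, a likely fix (my own assumption, not from the repository): os.mkdir cannot create intermediate directories, so it fails when the ./model parent does not yet exist. os.makedirs creates the whole path at once:

    import os

    name = 'RK-Net'  # hypothetical stand-in for the script's model-name option
    dir_name = os.path.join('./model', name)
    # os.makedirs also creates the missing './model' parent directory;
    # exist_ok=True avoids an error when the directory already exists.
    os.makedirs(dir_name, exist_ok=True)

Alternatively, creating ./model by hand (mkdir model) before running train.py should have the same effect.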

Queries regarding implementation

Hi Authors,
Thank you for sharing the GitHub code for this. Could you please help me with the following:

  1. train.py loads inputs and labels from the dataset; what are the labels/annotations here? Are they keypoint annotations?
  2. Section III-F of the paper says the loss regards each location as an individual class (y). What is meant by a location here? And is the number of locations fixed for each image? (One common reading is sketched after this list.)
  3. Can USAM generate keypoint descriptors as well? Fig. 5 of the paper shows keypoint matching between two images; could you explain how that was done?
  4. The code does not seem to include the visualization of keypoints at different epochs or during testing; could you please share that as well?
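
Regarding point 2, here is a minimal sketch of one common reading of "each location as an individual class" in geo-localization work (my assumption, not the authors' confirmed implementation): every location ID in the training set serves as a class label for a cross-entropy classifier, so the number of classes is fixed by the dataset rather than per image:

    import torch
    import torch.nn as nn

    # hypothetical sizes; e.g. University-1652 has 701 locations in its training split
    num_locations, feat_dim, batch = 701, 512, 8

    classifier = nn.Linear(feat_dim, num_locations)     # one logit per location ID
    criterion = nn.CrossEntropyLoss()

    features = torch.randn(batch, feat_dim)             # backbone features for a batch
    labels = torch.randint(0, num_locations, (batch,))  # each image's location ID
    loss = criterion(classifier(features), labels)      # location ID as the class target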
