nashory / delf-pytorch
343 stars, 10 watchers, 63 forks, repo size 8.58 MB

PyTorch Implementation of "Large-Scale Image Retrieval with Attentive Deep Local Features"

License: MIT License

Languages: Jupyter Notebook 98.84%, Python 1.16%
Topics: pytorch, local-features, image-retrieval

delf-pytorch's People

Contributors: nashory

delf-pytorch's Issues

Process finished with exit code 139 (interrupted by signal 11: SIGSEGV)

In step 2 (Feature Extraction of DeLF):
After step 1, I have obtained the fix.pth.tar model file. The config in extractor.py is shown here:

MODE = 'pca'           # either "delf" or "pca"
GPU_ID = 4
IOU_THRES = 0.98
ATTN_THRES = 0.37
TOP_K = 1000
USE_PCA = False
PCA_DIMS = 40
SCALE_LIST = [0.25, 0.3535, 0.5, 0.7071, 1.0, 1.4142, 2.0]
ARCH = 'resnet50'
EXPR = 'dummy'
TARGET_LAYER = 'layer3'
MODEL_NAME = 'res18_mix_debase_2'
LOAD_FROM = '../train/repo/res18_mix_debase_1/keypoint/ckpt/fix.pth.tar'
PCA_PARAMETERS_PATH = './output/pca/{}/pca.h5'.format(MODEL_NAME)

However, I get this result in the console:

Process finished with exit code 139 (interrupted by signal 11: SIGSEGV)

And no matter where I add print() calls to debug, nothing is printed.
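Since the process dies with a raw SIGSEGV rather than a Python exception, one way to narrow it down (a sketch, assuming you can edit the top of extractor.py; faulthandler is part of the Python standard library) is to dump a Python-level traceback at the moment of the crash and to sanity-check the GPU index first:

import faulthandler
faulthandler.enable()          # print a Python traceback when the process segfaults

import torch
# A GPU_ID that does not exist on the machine is a common source of hard crashes,
# so verify the device count before the model is loaded.
print('CUDA available:', torch.cuda.is_available())
print('visible GPUs  :', torch.cuda.device_count())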

hyper-parameters for the keypoint training stage and training results of the finetune stage

@nashory Hi, I noticed that when training the keypoint stage you set use_l2_normalized_feature = True. What is the reason for setting this parameter? Also, I noticed that you set target_layer = layer3 by default; have you tried target_layer = layer4? If so, which one is better?

Another question that confuses me: when I train the finetune stage directly on google-landmark-dataset-top1k, I get acc1 over 97.5. What result do you get at this stage?

Thank you; I look forward to your answer.
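For context on what use_l2_normalized_feature usually means in practice, here is a minimal, purely illustrative sketch of per-descriptor L2 normalization in PyTorch (the tensor shape and names below are assumptions, not the repo's actual code). With unit-norm descriptors, dot products become cosine similarities, which tends to stabilize matching and retrieval:

import torch
import torch.nn.functional as F

# Illustrative: normalize each local descriptor to unit L2 norm.
features = torch.randn(100, 1024)            # (num_keypoints, feature_dim) -- assumed shape
features = F.normalize(features, p=2, dim=1)
print(features.norm(dim=1)[:5])              # every row is ~1.0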

extractor

Hello, when I run extractor.py this error happens. What should I do?
[screenshot of the error attached]

How can I get the model pretrained on ImageNet?

I want to extract DeLF features for a dataset of common objects like books, bottles, and pictures, but I found that the matching performance of the provided model finetuned on the landmark dataset is poor. So I want to know how to get the model pretrained on ImageNet. Or do you have any suggestions for extracting DeLF features for a dataset of common objects? Looking forward to your early reply. Thank you very much!
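Since the backbone appears to come from torchvision (as noted in a later issue), one possible starting point (an assumption about how you might wire it in, not a documented workflow of this repo) is to initialize the backbone from torchvision's ImageNet-pretrained ResNet-50 weights instead of the landmark-finetuned checkpoint:

import torchvision.models as models

# Illustrative: torchvision ships ImageNet-pretrained ResNet-50 weights.
backbone = models.resnet50(pretrained=True)
state_dict = backbone.state_dict()           # could be adapted to the repo's checkpoint loader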

multi-gpu training?

Thanks for the great PyTorch code. It seems that this repo does not support multi-GPU training.
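For reference, the usual minimal way to add data parallelism in PyTorch is to wrap the model in nn.DataParallel; whether that drops straight into this repo's training loop is an assumption, so treat this as a sketch of the general pattern:

import torch
import torch.nn as nn

# Illustrative stand-in model; in practice this would be the DeLF network.
model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10))

if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)   # split each batch across all visible GPUs
model = model.cuda()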

Hardcoded arguments for "Feature Extraction of DeLF"

Thanks for the implementation!

In the README, examples show arguments being passed to the (1) train PCA and (2) extract dimension-reduced DeLF steps. However, all of those values are hardcoded in extract.py, so the steps in the main README are somewhat misleading.

Is there a specific reason for hardcoding the extract parameters but passing the train parameters?

I feel it would be quite convenient to pass all extract parameters from the CLI.
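As a sketch of what that could look like (flag names mirror the constants quoted in the first issue and are illustrative, not an existing interface of extractor.py), the hardcoded values could be exposed through argparse:

import argparse

# Illustrative: expose the hardcoded extractor constants as CLI flags.
parser = argparse.ArgumentParser(description='DeLF feature extraction')
parser.add_argument('--mode', choices=['delf', 'pca'], default='pca')
parser.add_argument('--gpu_id', type=int, default=0)
parser.add_argument('--attn_thres', type=float, default=0.37)
parser.add_argument('--top_k', type=int, default=1000)
parser.add_argument('--use_pca', action='store_true')
parser.add_argument('--pca_dims', type=int, default=40)
parser.add_argument('--load_from', type=str, required=True)
args = parser.parse_args()
print(args)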

datasets

Hello! I have been following your helpful repository.

I'm wondering what the difference is between the dataset used in the pretrain process and the one used in the finetune process.

One uses full and the other uses clean; what is the difference between these two datasets?

CUDA out of memory when running notebook/visualize.ipynb

Hi @nashory and everyone,

I get the following error when running get_result(myfeeder, query) in the visualize.ipynb notebook. Could you show me how to fix this? I've tried reducing workers to 1, to no avail. Thank you in advance!

RuntimeError: CUDA out of memory. Tried to allocate 374.00 MiB (GPU 0; 10.73 GiB total capacity; 9.57 GiB already allocated; 330.62 MiB free; 66.62 MiB cached)
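A few standard inference-time memory savers may help; whether they apply depends on how get_result drives the model, so this is only a sketch. Reducing the number of entries in SCALE_LIST should also lower peak memory, since each extra scale is another forward pass.

import torch

# Illustrative: common ways to cut GPU memory during inference.
model = torch.nn.Linear(1024, 1024).cuda()   # stand-in for the DeLF model
model.eval()                                 # inference mode for dropout/batchnorm

x = torch.randn(1, 1024).cuda()
with torch.no_grad():                        # do not keep activations for backward
    y = model(x)

torch.cuda.empty_cache()                     # release unused cached GPU memory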

Model file seems missing

There is no file 'archive/model/ldmk/keypoint/ckpt/fix.pth.tar' in the repo; is it missing? How can I get it?

The value of receptive field

Hi! Thank you for your great work.
I have a small question. The receptive field value for layer4 is 483 in your code, but you use resnet50 from the torchvision model zoo, which differs from the resnet50 in TensorFlow Slim. Shouldn't the receptive field for layer4 be 427?
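For anyone who wants to check the numbers themselves, the receptive field follows the standard recursion rf_out = rf_in + (kernel - 1) * jump, where the jump (accumulated stride) multiplies by each layer's stride. The sketch below only computes the recursion; the layer list is a placeholder and would have to be filled in with the exact (kernel, stride) sequence of the ResNet-50 variant under discussion:

# Illustrative receptive-field calculator.
def receptive_field(layers):
    rf, jump = 1, 1
    for kernel, stride in layers:
        rf += (kernel - 1) * jump   # growth contributed by this layer
        jump *= stride              # accumulated stride seen by later layers
    return rf

# Example: the ResNet stem (7x7 conv, stride 2, then 3x3 max-pool, stride 2).
print(receptive_field([(7, 2), (3, 2)]))   # -> 11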

len() of a 0-d tensor

Hi, thanks for releasing this code.
I am trying to train the model on my own dataset,
but sometimes when I run the visualize notebook I get an error:

/home/ubuntu/DeLF-pytorch/helper/delf_helper.pyc in GetDelfFeatureFromSingleScale(x, model, scale, pca_mean, pca_vars, pca_matrix, pca_dims, rf, stride, padding, attn_thres, use_pca)
    283     # use attention score to select feature.
    284     indices = None
--> 285     while(indices is None or len(indices) == 0):
    286         indices = torch.gt(scaled_scores, attn_thres).nonzero().squeeze()
    287         attn_thres = attn_thres * 0.5   # use lower threshold if no indexes are found.

/usr/local/lib/python2.7/dist-packages/torch/tensor.pyc in __len__(self)
    368     def __len__(self):
    369         if self.dim() == 0:
--> 370             raise TypeError("len() of a 0-d tensor")
    371         return self.shape[0]

Any idea why?
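The likely mechanism is that .squeeze() turns a single surviving index into a 0-d tensor, and len() on a 0-d tensor raises exactly this TypeError. A defensive rewrite of that loop (a sketch of the pattern, not a committed patch to delf_helper.py) would test numel() instead of len() and flatten the result afterwards:

import torch

# Stand-in inputs; in the real code these come from the attention head.
scaled_scores = torch.rand(10)
attn_thres = 0.37

indices = None
while indices is None or indices.numel() == 0:   # numel() also works for 0-d tensors
    indices = torch.gt(scaled_scores, attn_thres).nonzero().squeeze()
    attn_thres = attn_thres * 0.5                # lower the threshold if nothing passes
indices = indices.view(-1)                       # guarantee a 1-d index tensor downstream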

need help! how to speed up inference time

I tested the inference time of the PyTorch version: with 7 scales it takes 1-2 s,
while the TensorFlow version takes just 0.1-0.2 s for 7 scales.

How can I speed it up?
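Some standard PyTorch inference optimizations may close part of the gap; treat the snippet as illustrative, since whether each one applies depends on how the multi-scale extraction loop is written (for example, cuDNN autotuning only pays off when input shapes repeat):

import torch

model = torch.nn.Conv2d(3, 64, 3, padding=1).cuda()   # stand-in for the DeLF network
model.eval()

torch.backends.cudnn.benchmark = True    # autotune kernels (helps when shapes repeat)
x = torch.randn(1, 3, 512, 512).cuda()

with torch.no_grad():                    # no autograd bookkeeping during extraction
    y = model(x)

# Optional: half precision reduces memory traffic on recent GPUs.
with torch.no_grad():
    y_fp16 = model.half()(x.half())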

question about visualize.ipynb

I am trying to match two pictures (with your provided trained model), but I get a warning like:
"UserWarning: An output with one or more elements was resized since it had shape [999], which does not match the required output shape [998].This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at ../aten/src/ATen/native/Resize.cpp:24.)
torch.index_select(x1, 0, idx, out=xx1)"
and it repeats over and over until
"UserWarning: An output with one or more elements was resized since it had shape [2], which does not match the required output shape [1].This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at ../aten/src/ATen/native/Resize.cpp:24.)
torch.index_select(x1, 0, idx, out=xx1)"
@nashory I am confused and would appreciate your reply.
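The warning is PyTorch's deprecation notice for passing a pre-allocated out= tensor whose size no longer matches the result. The usual fix is to drop the out= argument and let index_select allocate its own output; a sketch of the pattern (not the notebook's exact code):

import torch

x1 = torch.rand(1000)
idx = torch.arange(999)

# Old pattern that triggers the resize warning when xx1 was pre-allocated
# with a different length:
#   xx1 = torch.empty(998)
#   torch.index_select(x1, 0, idx, out=xx1)

# Warning-free pattern: let index_select allocate the output itself.
xx1 = torch.index_select(x1, 0, idx)
# or equivalently: xx1 = x1[idx]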

No module 'progress'

Did you not upload everything? The module 'progress' is missing; I would be very grateful if you could respond.

from feeder import Feeder

When running the code in visualize.ipynb, the line "from feeder import Feeder" fails and I cannot find the module. I'd like to know what the feeder package is and where I can get it. Thank you for sharing.

evaluation about benchmark set

Hi! I appreciate you (@nashory) releasing the code.
I am very curious about the way you evaluated the benchmark set.
Could you explain it to me in detail?
And, if you have the evaluation code, could you release it?

extractor

When I run extractor.py in the pca stage, this problem happens. What should I do?
[screenshot of the error attached]

Pre-trained landmark problem

@nashory Thanks for sharing such great work. I downloaded your shared pre-trained weights for the landmark dataset, but there seems to be an error with the file: I can neither read nor extract it.
Can you fix the link or the file?

Train Data Format

Hi, I'm trying to train DeLF on my custom dataset, but I can't work out how to arrange it. Where should I place my data, and in what format should it be? Could anyone please help me?
Any help would be great.
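As far as I can tell, the training scripts load images in an ImageFolder-style layout (one subdirectory per class); this is an assumption about the repo, so double-check it against its dataloader. A generic sketch of that layout and how torchvision reads it, with illustrative directory names:

# Illustrative layout (directory and file names are examples only):
#
#   data/train/landmark_0001/img_001.jpg
#   data/train/landmark_0001/img_002.jpg
#   data/train/landmark_0002/img_001.jpg
#   ...
#
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder('data/train', transform=transform)
print(dataset.classes[:5], len(dataset))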
