
pierre-jacob / iccv2019-horde


Code repository for our paper entitled "Metric Learning with HORDE: High-Order Regularizer for Deep Embeddings", accepted at ICCV 2019.

License: MIT License

Python 99.70% Shell 0.30%

iccv2019-horde's People

Contributors

pierre-jacob


iccv2019-horde's Issues

Bunch of questions

I appreciate your quick response.
I successfully finished training and got an O1 score of around 66.0 on the CUB dataset.
I think it makes sense for now.

A few other questions came up during the process.

  1. Can I change the backbone to a different pretrained model?
  • I am currently training the HORDE model with a ResNet backbone, but I am not sure whether this is the correct way. (I am using the tf.keras.applications.ResNet50 model and loading ImageNet pretrained weights as the backbone.)
  • Does loading require a specific pretrained model, or will any backbone work?
  2. After training, I am planning to add a custom dataset class and train and test with it.
    I am wondering whether there is a testing script I can refer to as well. If not, would it be something like calling mdl.predict after loading the HORDE model? (A minimal sketch is included at the end of this issue.)
  • Also, when implementing a custom dataset, is there a standard way to set options such as the number of high-order moments, images per class, classes per batch, etc.?
  3. I can see that the code splits the train and test sets by their labels.
    As I understand it, in MNIST for example, images labeled 0 to 4 would be used for training and images labeled 5 to 9 for testing. If that is the case, how is performance measured?
    Is the process something like building a 'gallery' from part of the test set, using the rest as 'probes', and computing R@K between them?

Sorry for asking so many questions at once.
Again, thank you so much for the advice you have given me.
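
Regarding the testing-script question above: here is a minimal sketch of the usual zero-shot evaluation, assuming a trained Keras model mdl and NumPy arrays test_images / test_labels (these names are illustrative, not from this repository). On CUB/CARS-style splits, every test image is typically used as a query against all the other test images, so no separate gallery/probe partition is needed:

import numpy as np

def recall_at_k(embeddings, labels, k=1):
    # For each query, check whether at least one of its k nearest
    # neighbours (excluding the query itself) shares the query's label.
    sims = embeddings @ embeddings.T           # cosine similarity for L2-normalized embeddings
    np.fill_diagonal(sims, -np.inf)            # never retrieve the query itself
    knn = np.argsort(-sims, axis=1)[:, :k]     # indices of the k nearest neighbours
    hits = (labels[knn] == labels[:, None]).any(axis=1)
    return hits.mean()

# Hypothetical usage after loading the trained model
# (a cascaded HORDE model may return several outputs; pick the embedding head):
# emb = mdl.predict(test_images, batch_size=100)
# emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)   # L2-normalize
# print("R@1:", recall_at_k(emb, np.asarray(test_labels), k=1))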

Training process

Hello,

Thanks for the great source code.
I am now trying to test the code with the CUB dataset.

I managed to run training without errors, but I am not entirely sure about the training process.
My understanding is that training proceeds using the pretrained backbone with the cascaded HORDE layers.

However, I can see that the R@K for each output fluctuates across epochs.
I am wondering whether this is due to the pretrained backbone or simply normal behavior for HORDE.

Thank you.

How to reproduce the results on CARS196

  Thanks for releasing the code for your work. I have tested on CARS196, but I cannot reach the performance reported in your paper; the highest Recall@1 I get is 82.4%. I suspect my settings are wrong, which is why I cannot match the paper's performance. Could you share the settings you used for CARS196?

The setting is:
python train.py --dataset CARS --feature BNInception --embedding 512 512 512 512 512 --ho-dim 8192 8192 8192 8192 --use_horde --trainable --cascaded
config.json:
{
"n_classes_per_batch": 5,
"images_per_class": 8,
"steps_per_epoch": 200,

"test_batch_size": 100,

"train_lr": 1e-5,
"train_epoch": 80,

"workers": 8,
"max_queue_size" : 32,

"train_im_size": [256, 256],
"test_im_size": [256, 256],

"multi_res_min_ratio": 0.8,
"multi_res_max_ratio": 1.8,

"prob_keep_ratio": 1.0,
"proba_multi_res": 1.0,
"proba_random_crop": 1.0,
"proba_horizontal_flip": 0.5,

"resume_training" : null,
"compute_scores_freq" : 1
}

My machine runs Ubuntu 18.04 with CUDA 10.0, a GTX 1080 Ti, and TensorFlow 1.14.
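
One thing worth cross-checking against the paper is the effective batch size implied by the config above; a quick back-of-the-envelope check (values copied from the config.json, variable names are only illustrative):

# Illustrative arithmetic only, using the values from config.json above.
n_classes_per_batch = 5
images_per_class = 8
steps_per_epoch = 200

batch_size = n_classes_per_batch * images_per_class   # 40 images per batch
images_per_epoch = batch_size * steps_per_epoch       # 8000 images seen per epoch
print(batch_size, images_per_epoch)                   # 40 8000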

Other evaluation metrics

Hi,
I like your paper. Regarding evaluation metrics, I can see that you are using Recall@K to track performance. Have you tried implementing other metrics such as NMI and MAP@R? Thanks!
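
Not speaking for the authors, but NMI is usually computed by clustering the test embeddings with K-Means (one cluster per test class) and comparing the cluster assignments with the ground-truth labels. A minimal sketch with scikit-learn (not part of this repository):

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import normalized_mutual_info_score

def nmi(embeddings, labels):
    # Cluster the test embeddings into as many clusters as there are test classes,
    # then measure the agreement between cluster ids and the ground-truth labels.
    n_classes = len(np.unique(labels))
    cluster_ids = KMeans(n_clusters=n_classes, n_init=10).fit_predict(embeddings)
    return normalized_mutual_info_score(labels, cluster_ids)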

How to reproduce the results on the INSHOP dataset

This is the same question as the one linked below, but for the INSHOP dataset.
#4

What do I need to change to get your ~90% Recall@1?
With the settings from run_cub.sh I get around 60% Recall@1.

Thanks in advance!

Ping @pierre-jacob, who kindly answered the last issue.
