zhunzhong07 / person-re-ranking

Person Re-ranking (CVPR 2017)

Topics: person re-identification, re-ranking, re-id

person-re-ranking's Introduction

Re-ranking Person Re-identification with k-reciprocal Encoding

================================================================

This repository contains the source code for the paper "Re-ranking Person Re-identification with k-reciprocal Encoding", including:

  1. IDE baseline
  2. Re-ranking code
  3. CUHK03 new training/testing protocol

If you find this code useful in your research, please consider citing:

@inproceedings{zhong2017re,
  title={Re-ranking Person Re-identification with k-reciprocal Encoding},
  author={Zhong, Zhun and Zheng, Liang and Cao, Donglin and Li, Shaozi},
  booktitle={CVPR},
  year={2017}
}

The neighbor encoding method in our paper is inspired by reference [2]. If you use the re-ranking code in your paper, please also cite:

@article{bai2016sparse,
  title={Sparse contextual activation for efficient visual re-ranking},
  author={Bai, Song and Bai, Xiang},
  journal={IEEE Transactions on Image Processing},
  year={2016},
  publisher={IEEE}
}

================================================================

Two Python versions of the re-ranking function are provided in the 'python' folder.

  1. re_ranking_feature.py: re-ranking from the original features, using Euclidean distance. Thanks to Hao Luo!
  2. re_ranking_ranklist: re-ranking from precomputed distance matrices; it also handles the difference in '/' division between Python 2 and 3. Thanks to Houjing Huang! (A usage sketch is given below.)
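
For orientation, here is a minimal usage sketch (not code from this repository) of the distance-matrix variant; the function name and the signature re_ranking(q_g_dist, q_q_dist, g_g_dist, k1, k2, lambda_value) are assumptions, so check the files in the 'python' folder for the real interface.

    import numpy as np
    # from re_ranking_ranklist import re_ranking   # hypothetical import; see the 'python' folder

    def euclidean_dist(x, y):
        # pairwise squared Euclidean distances between rows of x and rows of y
        xx = (x ** 2).sum(axis=1, keepdims=True)
        yy = (y ** 2).sum(axis=1, keepdims=True).T
        return xx + yy - 2.0 * x.dot(y.T)

    qf = np.random.rand(100, 2048).astype(np.float32)   # placeholder query features
    gf = np.random.rand(500, 2048).astype(np.float32)   # placeholder gallery features

    q_g_dist = euclidean_dist(qf, gf)
    q_q_dist = euclidean_dist(qf, qf)
    g_g_dist = euclidean_dist(gf, gf)

    # final_dist = re_ranking(q_g_dist, q_q_dist, g_g_dist, k1=20, k2=6, lambda_value=0.3)
    # final_dist has shape (num_query, num_gallery) and replaces q_g_dist when ranking the gallery.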

PyTorch re-implementations

[Baseline + CamStyle + Random Erasing + Re-ranking]

[Person_reID_baseline + Random Erasing + Re-ranking]

================================================================

The new training/testing protocol for CUHK03

[Dataset and state-of-the-art]

================================================================

IDE Baseline + Re-ranking

Requirements: Caffe

Requirements for Caffe and matcaffe (see: Caffe installation instructions)

Installation

  1. Build Caffe and matcaffe

    cd $Re-ranking_ROOT/caffe
    # Now follow the Caffe installation instructions here:
    # http://caffe.berkeleyvision.org/installation.html
    make -j8 && make matcaffe
  2. Download the pre-trained ImageNet models, the Market-1501 dataset, and the CUHK03 dataset

Please download the pre-trained ImageNet models and put them in the "data/imagenet_models" folder.
Please download the Market-1501 dataset and unzip it in the "evaluation/data/Market-1501" folder.
Please download the CUHK03 dataset and unzip it in the "evaluation/data/CUHK03" folder.

Training and testing IDE model

  1. Training
cd $Re-ranking_ROOT
# train IDE ResNet_50 for Market-1501
./experiments/Market-1501/train_IDE_ResNet_50.sh

# train IDE ResNet_50 for CUHK03
./experiments/CUHK03/train_IDE_ResNet_50_labeled.sh
./experiments/CUHK03/train_IDE_ResNet_50_detected.sh
  2. Feature Extraction
cd $Re-ranking_ROOT/evaluation
# extract feature for Market-1501
matlab Market_1501_extract_feature.m

# extract feature for CUHK03
matlab CUHK03_extract_feature.m
  3. Evaluation
# evaluation for Market-1501
matlab Market_1501_evaluation.m
  
# evaluation for CUHK03
matlab CUHK03_evaluation.m

Results

You can download our pre-trained IDE models and IDE features, and put them in the "output" and "evaluation/feat" folders, respectively.

Using the above IDE models and IDE features, you can reproduce the results with our re-ranking method as follows:

  • Market-1501
    Methods                                 | Rank@1 | mAP
    IDE_ResNet_50 + Euclidean               | 78.92% | 55.03%
    IDE_ResNet_50 + Euclidean + re-ranking  | 81.44% | 70.39%
    IDE_ResNet_50 + XQDA                    | 77.58% | 56.06%
    IDE_ResNet_50 + XQDA + re-ranking       | 80.70% | 69.98%

For Market-1501, these results are better than those reported in our paper, since we add a dropout = 0.5 layer after pool5.

  • CUHK03 under the new training/testing protocol
    Methods                                 | Labeled Rank@1 | Labeled mAP | Detected Rank@1 | Detected mAP
    BOW + XQDA [1]                          | 7.93%  | 7.29%  | 6.36%  | 6.39%
    BOW + XQDA + re-ranking                 | 8.93%  | 9.94%  | 8.29%  | 8.81%
    LOMO + XQDA [3]                         | 14.8%  | 13.6%  | 12.8%  | 11.5%
    LOMO + XQDA + re-ranking                | 19.1%  | 20.8%  | 16.6%  | 17.8%
    IDE_CaffeNet + Euclidean                | 15.6%  | 14.9%  | 15.1%  | 14.2%
    IDE_CaffeNet + Euclidean + re-ranking   | 19.1%  | 21.3%  | 19.3%  | 20.6%
    IDE_CaffeNet + XQDA                     | 21.9%  | 20.0%  | 21.1%  | 19.0%
    IDE_CaffeNet + XQDA + re-ranking        | 25.9%  | 27.8%  | 26.4%  | 26.9%
    IDE_ResNet_50 + Euclidean               | 22.2%  | 21.0%  | 21.3%  | 19.7%
    IDE_ResNet_50 + Euclidean + re-ranking  | 26.6%  | 28.9%  | 24.9%  | 27.3%
    IDE_ResNet_50 + XQDA                    | 32.0%  | 29.6%  | 31.1%  | 28.2%
    IDE_ResNet_50 + XQDA + re-ranking       | 38.1%  | 40.3%  | 34.7%  | 37.4%

References

[1] Scalable Person Re-identification: A Benchmark. Zheng, Liang and Shen, Liyue and Tian, Lu and Wang, Shengjin and Wang, Jingdong and Tian, Qi. In ICCV 2015.

[2] Sparse contextual activation for efficient visual re-ranking. Bai, Song and Bai, Xiang. IEEE Transactions on Image Processing, 2016.

[3] Person re-identification by local maximal occurrence representation and metric learning. Liao, Shengcai and Hu, Yang and Zhu, Xiangyu and Li, Stan Z. In CVPR 2015.

Contact us

If you have any questions about this code, please do not hesitate to contact us.

Zhun Zhong

Liang Zheng


person-re-ranking's Issues

Where does 'cuhk03_multishot_config_detected.mat' come from?

In person-re-ranking/evaluation/data/CUHK03/save_img.m, two files are loaded:
cuhk03_multishot_config_detected.mat
cuhk03_multishot_config_labeled.mat
I wonder where these two files come from, because I cannot find any other code that creates or describes them.
Thanks!

Question about the usage of q_q_dist

Hello! Your re-ranking works well. However, I have a question about the usage of q_q_dist: without q_q_dist, the improvement from re-ranking seems to drop a lot. I ran the experiments below:

    mAP (no re-ranking) | mAP (re-ranking with q_q_dist) | mAP (re-ranking without q_q_dist)
    70.3                | 85.7                           | 71.9
    23.9                | 34.7                           | 25.5
    24.0                | 35.2                           | 25.5

This seems odd. Could you explain it?

Questions about the re-ranking code

Hello, thanks for the authors' algorithm, but I have some questions about your implementation.
(I initially used Hao Luo's Python version; he told me it reproduces your MATLAB version exactly.)
https://github.com/zhunzhong07/person-re-ranking/blob/master/evaluation/utils/re_ranking.m

  1. Line 10 of the code
    original_dist = original_dist./ repmat(max(original_dist, [], 2), 1, size(original_dist, 2));
    What is the purpose of processing the distance matrix this way? Afterwards the matrix is no longer symmetric, and each row is rescaled by a different factor.

  2. Line 6 of the code
    [~, initial_rank] = sort(original_dist, 2, 'ascend');
    Sorting the whole query-plus-gallery set means that query samples can appear among a probe's k1-reciprocal neighbors, which is inconsistent with the paper. I also tried passing in one query at a time, as the paper describes; this may weaken query expansion, but in most cases the results were slightly better (just much slower).

  3. Line 31 of the code
    V(i, k_reciprocal_expansion_index) = weight/sum(weight);
    When computing each sample's reciprocal-neighbor feature, the result is divided by the sum of the negative-exponential weights, which also differs from the paper.

  4. Line 63 of the code
    jaccard_dist(i, :) = bsxfun(@minus, 1, temp_min./(2 - temp_min));
    This distance computation does not seem equivalent to the formula in the paper?

Could you explain the reasoning behind these implementation choices?
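
For readers following the discussion above, here is a hedged Python rendering of the MATLAB lines quoted in items 1 and 3 (a sketch for illustration only, not code shipped with this repository; the variable names mirror re_ranking.m and the data are placeholders):

    import numpy as np

    # placeholder distance matrix over all query + gallery samples
    original_dist = np.random.rand(8, 8).astype(np.float32)

    # item 1: divide each row by its own maximum (row-wise normalization)
    original_dist = original_dist / np.max(original_dist, axis=1, keepdims=True)

    # item 3: Gaussian-weighted, L1-normalized encoding of the expanded
    # k-reciprocal neighbors of sample i
    V = np.zeros_like(original_dist)
    i = 0
    k_reciprocal_expansion_index = np.array([1, 2, 5])   # placeholder neighbor indices
    weight = np.exp(-original_dist[i, k_reciprocal_expansion_index])
    V[i, k_reciprocal_expansion_index] = weight / np.sum(weight)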

evaluation protocol.

Could you please tell me which protocol you are using for CUHK03 and Market-1501?

Multi-shot or single-shot gallery?
Multi-query or single-query?

data

How do you resize the images? Do you resize them directly to 224x224, without cropping, or do you do something else?
And what data augmentation do you use?

batchsize

@zhunzhong07 Hi, I want to know the batch size you used, but I cannot find it. Could you tell me the value, or where I can find it? Thank you very much.

What do these data mean?

label_train = importdata('data/Market-1501/train_label.mat');
cam_train = importdata('data/Market-1501/train_cam.mat');
train_feature = importdata(['feat/Market-1501/' netname '_IDE_train.mat']);

Hello, I understand what ['feat/Market-1501/' netname '_IDE_train.mat'] contains.
But I cannot see how 'data/Market-1501/train_label.mat' and 'data/Market-1501/train_cam.mat' are created. What do these files contain, and how can I create them? Could you give me a short explanation?
Thank you very much !
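
As a hedged pointer (not the repository's own code): Market-1501 filenames themselves encode the identity and camera, e.g. 0002_c1s1_000451_03.jpg means person 0002 seen by camera 1, so label and camera arrays equivalent to train_label.mat and train_cam.mat can be rebuilt by parsing the bounding_box_train filenames:

    import os
    import re

    def parse_market1501(image_dir):
        # Market-1501 filename convention: <pid>_c<cam>s<seq>_<frame>_<box>.jpg
        labels, cams = [], []
        for name in sorted(os.listdir(image_dir)):
            m = re.match(r'(-?\d+)_c(\d)', name)
            if m is None:
                continue                      # skip Thumbs.db and similar files
            labels.append(int(m.group(1)))    # person ID (-1 marks distractor/junk boxes)
            cams.append(int(m.group(2)))      # camera index, 1..6
        return labels, cams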

k values question

In your paper, you set k1 = 20 and k2 = 6 for the Market-1501 experiments. I want to know why these values of k1 and k2 were chosen. (The same question applies to k1 = 7 and k2 = 3 for the CUHK03 experiments.)

rank-m problem

Using your CUHK03 results, IDE_ResNet_50 + XQDA + re-ranking gives rank-1 = 38.1%.

With one CMC evaluation code the rank-1 I get is also 38.1%,

but with another CMC evaluation code the rank-1 is 63%.
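
One common source of such discrepancies is that different CMC scripts handle junk images, same-camera matches, and single- versus multi-shot galleries differently. For reference, a minimal Market-1501-style rank-1 computation (a hedged sketch, not any of the implementations referred to above) looks like this:

    import numpy as np

    def rank1(dist, query_ids, gallery_ids, query_cams, gallery_cams):
        # dist: (num_query, num_gallery) distance matrix; smaller means more similar
        hits = 0
        for i in range(dist.shape[0]):
            order = np.argsort(dist[i])
            # discard gallery images of the same identity taken by the same camera
            # (Market-1501 evaluation also discards junk images labelled -1 here)
            keep = ~((gallery_ids[order] == query_ids[i]) &
                     (gallery_cams[order] == query_cams[i]))
            best_match = gallery_ids[order][keep][0]
            hits += int(best_match == query_ids[i])
        return hits / float(dist.shape[0])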

about CMC curve

Thank you very much for your outstanding work. I have reproduced your code in the past few days. The accuracy I obtain is one or two percentage points lower than the one reported in your paper. I also want to ask how to draw the CMC curve or the PR curve; do you have any code for that? Thank you very much. @zhunzhong07

Jaccard Distance values look off - over 90% are 1s

With no re-ranking, the distance values are distributed evenly across the whole 0-1 range. With re-ranking, although the accuracy improves by ~2%, the distance histogram is compressed into a narrow range (0.7-0.9), with several distance values > 1. This looks incorrect. I studied the problem further and found that it is due to the Jaccard distance, over 90% of whose values are 1.

How I got this issue:

Use a custom dataset (unable to share it; 4k images, 200 identities) with re-ranking (k1=54, k2=6, lambda=0.3; this gave the best overall F-score). Plot histograms of

  • original distance - looks right - distributed evenly across the whole range 0-1
  • Jaccard distance - looks way off - over 85% values are 1s.
  • final distance - looks off due to Jaccard distance. Distributed in narrow window - 0.7-0.9. Many values are > 1!

Expected behavior:

Final distance must be evenly distributed across the whole 0-1 range.

Questions:

  1. What is 2-temp_min here? Is it the same as the max operation of Eq. 10 in the paper? (See the short check below.)
  2. Does k1 depend on the number of images in the dataset? V is of size NxN (N = total number of images). When k1 << N, V has many zeros even after query expansion. Could this be why my Jaccard distances are almost all 1?
  3. Is this distance distribution expected? If yes, why; and if not, any suggestions to fix it?
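
Regarding question 1, a hedged note that follows from the released re_ranking.m itself: each row of V is L1-normalized (weight/sum(weight)), and for two vectors a, b whose entries each sum to 1, sum(max(a, b)) = sum(a) + sum(b) - sum(min(a, b)) = 2 - sum(min(a, b)); so temp_min/(2 - temp_min) is exactly the min/max ratio of Eq. 10 under that normalization. A quick numerical check:

    import numpy as np

    rng = np.random.default_rng(0)
    a = rng.random(1000); a /= a.sum()    # an L1-normalized row of V
    b = rng.random(1000); b /= b.sum()

    sum_min = np.minimum(a, b).sum()
    sum_max = np.maximum(a, b).sum()
    assert np.isclose(sum_max, 2.0 - sum_min)

    jaccard = 1.0 - sum_min / sum_max     # equals 1 - sum_min / (2 - sum_min)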

Fine-tuning affects loss convergence

Without fine-tuning, the loss never converges, no matter how I tune the hyperparameters.
When I fine-tune from ResNet_50.caffemodel, the loss drops straight down. What is going on here?

query not an issue

Hello,
my question is: in re_ranking.m, what does the variable "backward_k_neigh_index" signify?
Does it denote the k-reciprocal nearest neighbors? If not, could you kindly explain what it means?

Thank you

How to generate the results picture

Hi, @zhunzhong07. Thanks for the great code. I found a nice figure in your paper and want to draw a similar one to show my results, but I don't know how it was generated. Did you use drawing software, or did you generate it through code? Thank you for your help.

Some question about cuhk03-labeled dataset under the new training/testing protocol

First: I downloaded cuhk-03.mat and the new training/testing protocol splits
cuhk03_new_protocol_config_detected.mat and cuhk03_new_protocol_config_labeled.mat, and placed them all in the directory '/data/person-reid/cuhk03/'.
Second: I used my own code to generate bounding_box_train, query, and bounding_box_test. Is that OK?

My question is how to generate bounding_box_train, query, and bounding_box_test from the raw cuhk-03.mat. Also, do all methods evaluate on the new training/testing protocol split?

about k and k1 k2

I'll skip the English...
I have a question about k, k1, and k2:
According to the paper, R is built from the top-k nearest neighbors under the Mahalanobis distance, and is then expanded with k/2 more neighbors to obtain R*. The paper then defines |R*| as k1 and |N(p,k)| as k2. But N(p,k) is exactly the set of top-k Mahalanobis neighbors, so it should hold that k = k2.

In the code, however, R* is built from k1 neighbors plus a k1/2 expansion, while the later local query expansion uses k2. Have the two been mixed up?

time

This is just a query... how much time did it take for you to complete the entire re-ranking as described in your esteemed paper?
Because I believe that a nested for loop is computationally very expensive.
Kindly advise.

Thank you

Do the junk images also participate in re-ranking?

For the Market-1501 dataset, there are a number of junk images for each query image in the gallery set, and some of the junk images are from the same camera as the query image, so they may affect the retrieval performance if they are used for re-ranking.

I want to know:

  1. Do the junk images participate in the re-ranking process?
  2. If 1 is true, is it a widely accepted procedure in other published papers?

Request GPU version of re_rank function

Hi!
When using multi-query evaluation on Market-1501, this

 original_dist = np.concatenate(
      [np.concatenate([q_q_dist, q_g_dist], axis=1),
       np.concatenate([q_g_dist.T, g_g_dist], axis=1)],
      axis=0)

raises a MemoryError even on a Windows PC with 32 GB of memory. This part of the code should run on the GPU.
Kindly provide a re_rank_gpu version of this function for very large evaluations.
Thank you
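
A hedged workaround sketch (not part of this repository): building the combined matrix block by block into one preallocated array avoids the temporary copies created by the nested np.concatenate calls, and the dtype can be chosen to trade memory for precision; a GPU port would follow the same block layout.

    import numpy as np

    def combine_dist_blocks(q_q_dist, q_g_dist, g_g_dist, dtype=np.float32):
        # Assemble [[q_q, q_g], [q_g.T, g_g]] without intermediate concatenated copies.
        # dtype=np.float16 halves the memory footprint at some cost in precision.
        nq, ng = q_g_dist.shape
        out = np.empty((nq + ng, nq + ng), dtype=dtype)
        out[:nq, :nq] = q_q_dist
        out[:nq, nq:] = q_g_dist
        out[nq:, :nq] = q_g_dist.T
        out[nq:, nq:] = g_g_dist
        return out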

Issue about mAP

It seems that the rank-1 and mAP values produced by state-of-the-art methods on CUHK03 are very close (some methods' mAP is even higher than their rank-1 accuracy). But on other datasets such as Market-1501 and DukeMTMC-reID, mAP is usually 10%-15% lower than rank-1 accuracy.
What is the reason for the higher mAP on CUHK03?
When I use the evaluation code to test my model trained on CUHK03, the mAP is also 10%-15% lower than the rank-1 accuracy, which is inconsistent with the reported values.

Python Version Re-Ranking that Accepts Distance Matrices as Input

Hi, Zhun Zhong. I modified the Python version to:

  • accept distance matrices instead of raw features, so that
    • the distance is not restricted to the Euclidean distance
    • distances can be computed beforehand, independently of the re-ranking logic
  • handle the difference in '/' division between Python 2 and 3
  • replace numpy.float16 with numpy.float32 for numerical precision

It can be found here.

Thanks a lot for your re-ranking work. Thanks also to @michuanhaohao.
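
A hedged illustration of the '/' division point mentioned above (not necessarily the exact fix used in the modified code): in Python 2 an expression such as k1/2 yields the int 10, while in Python 3 it yields the float 10.0, which recent NumPy versions reject when it is used to slice or index an array; floor division keeps both versions consistent.

    k1 = 20
    half_k1 = k1 // 2   # floor division: the int 10 in both Python 2 and Python 3
    # k1 / 2 would give 10 in Python 2 but 10.0 in Python 3, and a float
    # cannot safely be used to slice or index a NumPy array.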
