
DAPRH : GAN-based Data Augmentation and Pseudo-Label Refinement with Holistic Features for Unsupervised Domain Adaptation Person Re-Identification

Official PyTorch implementation of "GAN-based Data Augmentation and Pseudo-Label Refinement with Holistic Features for Unsupervised Domain Adaptation Person Re-Identification" (Knowledge-Based Systems, 2024).

Updates

  • [06/2024] Official code is released.
  • [04/2024] Great news! Our paper is published and available on ScienceDirect; please see the Citation section below.
  • [05/2023] Pushed the initial repo to GitHub and released unofficial code, results, and pretrained models.
  • [02/2023] Added citation; official code to follow once the paper is accepted.

Overview

(overview figure)

Getting Started

Installation

git clone https://github.com/ewigspace1910/DAPRH.git
cd DAPRH
conda create --name DAPRH python=3.8  # Python 3.8, matching the environment in the issue tracebacks below
conda activate DAPRH
pip install -r requirements.txt

Dataset Preparation

  1. Download the re-ID datasets Market-1501, DukeMTMC-reID, and MSMT17 from here, then move them to /datasets. The directory should look like:
DAPRH/datasets
├── Market-1501-v15.09.15
├── DukeMTMC-reID
├── MSMT17
├── ...
├── 4Gan        # must be created
└── SyntheImgs  # must be created
  2. To prepare the data for GAN training, set up /datasets/4Gan as follows (a preparation sketch appears after this layout):
DAPRH/datasets/4Gan
├── duke2mark
|   ├── train
|   |   ├── dukemtmc
|   |   ├── market1501c0
|   |   ├── market1501c1
|   |   ├── market1501c2
|   |   ├── market1501c3
|   |   ├── market1501c4
|   |   ├── market1501c5
|   ├── test
|   |   ├── dukemtmc
|   |   ├── market1501c0
|   |   ├── market1501c1
|   |   ├── market1501c2
|   |   ├── market1501c3
|   |   ├── market1501c4
|   |   ├── market1501c5
├── mark2duke
|   ├── ...
├── ...
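
A minimal preparation sketch, assuming the dataset archives are already extracted as above and that Market-1501 filenames encode the camera index as c1–c6 (e.g. 0002_c1s1_000451_03.jpg); the 0-indexed market1501c0–c5 folder names follow the layout above:

# Run from the repository root.
# Create the folders marked "must be created" above.
mkdir -p datasets/4Gan datasets/SyntheImgs

# Populate duke2mark/train: Duke images form the single source folder,
# Market images are split into per-camera style folders (market1501c0..c5).
SRC_MARKET="datasets/Market-1501-v15.09.15/bounding_box_train"
SRC_DUKE="datasets/DukeMTMC-reID/bounding_box_train"
DST="datasets/4Gan/duke2mark/train"
mkdir -p "$DST/dukemtmc"
cp "$SRC_DUKE"/*.jpg "$DST/dukemtmc/"
for cam in 1 2 3 4 5 6; do                    # Market-1501 has six cameras
    mkdir -p "$DST/market1501c$((cam - 1))"   # assumed 0-indexed folder names
    cp "$SRC_MARKET"/*_c"${cam}"s*.jpg "$DST/market1501c$((cam - 1))/"
done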

Training

We use a single 16 GB Tesla T4 GPU for training, and 256x128 input images for Market-1501, DukeMTMC-reID, and MSMT17 in both GAN training and re-ID training.

  • For convenience, we provide bash scripts that set up these commands; you can reuse or modify them in ./DAPRH/scripts.

Training-GAN

  • Set up the environment:
cd DAPRH/stargan
conda activate DAPRH
  • For duke-->market:
# Train StarGAN on custom datasets
LABEL_DIM=7
CROP_SIZE=128
IMG_SIZE=128
TRAIN_IMG_DIR="../../datasets/4Gan/duke2mark/train"
BATCHSIZE=16
Lidt=1
Lrec=10
Lgp=10
Lcls=1
python main.py --mode train --dataset RaFD --rafd_crop_size $CROP_SIZE --image_size $IMG_SIZE \
               --c_dim $LABEL_DIM --rafd_image_dir $TRAIN_IMG_DIR --batch_size $BATCHSIZE \
               --sample_dir ../../saves/Gan-duke2mark/samples \
               --log_dir ../../saves/Gan-duke2mark/logs \
               --model_save_dir ../../saves/Gan-duke2mark/models \
               --result_dir ../../saves/Gan-duke2mark/results \
               --lambda_idt $Lidt \
               --lambda_rec $Lrec \
               --lambda_gp $Lgp --lambda_cls $Lcls
  • For market-->duke:
# Train StarGAN on custom datasets
LABEL_DIM=7
CROP_SIZE=128
IMG_SIZE=128
TRAIN_IMG_DIR="../../datasets/4Gan/mark2duke/train"
BATCHSIZE=16
Lidt=1
Lrec=10
Lgp=10
Lcls=1
python main.py --mode train --dataset RaFD --rafd_crop_size $CROP_SIZE --image_size $IMG_SIZE \
               --c_dim $LABEL_DIM --rafd_image_dir $TRAIN_IMG_DIR --batch_size $BATCHSIZE \
               --sample_dir ../../saves/Gan-mark2duke/samples \
               --log_dir ../../saves/Gan-mark2duke/logs \
               --model_save_dir ../../saves/Gan-mark2duke/models \
               --result_dir ../../saves/Gan-mark2duke/results \
               --lambda_idt $Lidt \
               --lambda_rec $Lrec \
               --lambda_gp $Lgp --lambda_cls $Lcls
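  • Note: LABEL_DIM=7 corresponds to the seven style domains per direction in the 4Gan layout above; for duke2mark these are the dukemtmc source folder plus the six market1501c0–c5 camera folders.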
  • After training, we can use the trained GAN models to generate synthetic datasets for re-ID:
# for duke2mark
LABEL_DIM=7
CROP_SIZE=128
IMG_SIZE=128
TRAIN_IMG_DIR="../../datasets/4Gan/duke2mark/train/dukemtmc"
FAKEDIR="../../datasets/SyntheImgs/duke2mark" # output directory for generated images
BATCHSIZE=1 # use batch size 1 when sampling
ITER=200000
DOMAIN=0 # domain index of the dukemtmc source folder, excluded from generation
python main.py --mode sample --dataset RaFD --rafd_crop_size $CROP_SIZE --image_size $IMG_SIZE \
               --c_dim $LABEL_DIM --rafd_image_dir $TRAIN_IMG_DIR --batch_size $BATCHSIZE \
               --sample_dir ../../saves/Gan-duke2mark/samples \
               --log_dir ../../saves/Gan-duke2mark/logs \
               --model_save_dir ../../saves/Gan-duke2mark/models \
               --result_dir ../../saves/Gan-duke2mark/results \
               --test_iters $ITER --except_domain=$DOMAIN \
               --pattern "{ID}_{CX}_f{RANDOM}.jpg" \
               --gen_dir $FAKEDIR
#############################
# for mark2duke
LABEL_DIM=7
CROP_SIZE=128
IMG_SIZE=128
TRAIN_IMG_DIR="../../datasets/4Gan/mark2duke/train/market1501" # source (Market) images to translate into Duke camera styles
FAKEDIR="../../datasets/SyntheImgs/mark2duke" # output directory for generated images
BATCHSIZE=1 # use batch size 1 when sampling
ITER=200000
DOMAIN=6 # domain index of the market1501 source folder, excluded from generation
python main.py --mode sample --dataset RaFD --rafd_crop_size $CROP_SIZE --image_size $IMG_SIZE \
               --c_dim $LABEL_DIM --rafd_image_dir $TRAIN_IMG_DIR --batch_size $BATCHSIZE \
               --sample_dir ../../saves/Gan-mark2duke/samples \
               --log_dir ../../saves/Gan-mark2duke/logs \
               --model_save_dir ../../saves/Gan-mark2duke/models \
               --result_dir ../../saves/Gan-mark2duke/results \
               --test_iters $ITER --except_domain=$DOMAIN \
               --pattern "{ID}_{CX}_f{RANDOM}.jpg" \
               --gen_dir $FAKEDIR
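  • When sampling, --except_domain presumably skips translating into the given domain index (0 for the dukemtmc source folder in duke2mark, 6 for market1501 in mark2duke, assuming domains are indexed in alphabetical folder order), so only target-camera styles are generated. The --pattern template names each output file from the person ID, target camera, and a random suffix, e.g. 0002_c3_f481523.jpg (the concrete expansion of {ID}, {CX}, and {RANDOM} is our assumption).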

Training-ReID

  • Activate the environment:
cd DAPRH
conda activate DAPRH

Phase 1: Pretrain

  • Train on the labeled source domain with both real and fake images, plus DIM:
#for example
python _source_pretrain.py \
    -ds "dukemtmc" -dt "market1501" \
    -a "resnet50" --iters 200 \
    --num-instances 16 -b 128 --margin 0.3 \
    --warmup-step 10 --lr 0.00035 --milestones 40 70 --epochs 80 --eval-step 1 \
    --logs-dir "../saves/reid/duke2market/S1/R50Mix" \
    --data-dir "../datasets" \
    --fake-data-dir "../datasets/SyntheImgs" \
    --ratio 4 1 \
    --dim --lamda 0.05

python _source_pretrain.py \
    -ds "market1501" -dt "dukemtmc" \
    -a "resnet50" --iters 200 \
    --num-instances 16 -b 128 --margin 0.3 \
    --warmup-step 10 --lr 0.00035 --milestones 40 70 --epochs 80 --eval-step 1 \
    --logs-dir "../saves/reid/market2duke/S1/R50Mix" \
    --data-dir "../datasets" \
    --fake-data-dir "../datasets/SyntheImgs" \
    --ratio 4 1 \
    --dim --lamda 0.05
  • Note: you can modify the ndict mapping in ./modules/datasets/synimgs.py to adapt to the directory structure of your fake-image folder (a quick check follows).
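  • Before pretraining, a quick sanity check on the generated folder (a sketch; the regex assumes {ID} and {RANDOM} expand to digits and {CX} to a camera tag such as c3):
ls ../datasets/SyntheImgs/duke2mark | head -5                                  # eyeball a few filenames
ls ../datasets/SyntheImgs/duke2mark | grep -cE '^[0-9]+_c[0-9]+_f[0-9]+\.jpg$' # count pattern-conforming images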

Phase 2: Finetune

  • Modify the script below and execute it:
###              DUKE ---> MARKET              ####
 
python _target_finetune.py \
-dt "market1501" -b 128  --num-instances 16 \
-a resnet50mulpart --epochs 26 --iters 400 --npart 2 \
--logs-dir "../saves/reid/duke2market/S2/finetune"   \
--init "../saves/reid/duke2market/S1/R50Mix/model_best.pth.tar" \
--data-dir "../datasets" \
--pho 0   --uet-al 0.8  \
--ece 0.4 --etri 0.6 --ema

###              MARKET ---> DUKE              ####

python _target_finetune.py \
-dt "dukemtmc" -b 128 --num-instances 16 \
-a resnet50mulpart --epochs 26 --iters 400  --npart 2 \
--logs-dir "../saves/reid/market2duke/S2/finetune"   \
--init "../saves/reid/market2duke/S1/R50Mix/model_best.pth.tar" \
--data-dir "../datasets" \
--gtri-weight 1 --gce-weight 1 \
--pho 0.   --uet-al 0.5  \
--ece 0.4 --etri 0.6 --ema 
  • For convenience, you can directly reuse the scripts in ./DAPRH/scripts.

Evaluate

python test_model.py \
-dt "market1501" --data-dir "../datasets" \
-a resnet50 --features 0  -b 128 \
--resume ".../model_best.pth.tar" \
--rerank #optional
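
For example, to evaluate the duke-->market Phase-1 model saved by the pretraining command above (the checkpoint path is hypothetical, derived from that command's --logs-dir):

python test_model.py \
-dt "market1501" --data-dir "../datasets" \
-a resnet50 --features 0 -b 128 \
--resume "../saves/reid/duke2market/S1/R50Mix/model_best.pth.tar" \
--rerank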

Results

(results figure)

  • You can also download pretrained models from modelrepo.

Citation

If you find this code useful for your research, please consider citing our papers:

@article{PHAM2024111471,
title = {GAN-based data augmentation and pseudo-label refinement with holistic features for unsupervised domain adaptation person re-identification},
journal = {Knowledge-Based Systems},
pages = {111471},
year = {2024},
issn = {0950-7051},
doi = {10.1016/j.knosys.2024.111471},
url = {https://www.sciencedirect.com/science/article/pii/S0950705124001060},
author = {Dang H. Pham and Anh D. Nguyen and Hoa N. Nguyen},
keywords = {Unsupervised person re-identification, Unsupervised domain adaptation, GAN-based data augmentation, Pseudo-label refinement}
}

@InProceedings{cDAUET-23,
author="Nguyen, Anh D.
and Pham, Dang H.
and Nguyen, Hoa N.",
title="GAN-Based Data Augmentation and Pseudo-label Refinement for Unsupervised Domain Adaptation Person Re-identification",
booktitle="Computational Collective Intelligence",
year="2023",
publisher="Springer Nature Switzerland",
address="Cham",
doi={10.1007/978-3-031-41456-5_45},
pages="591--605",
isbn="978-3-031-41456-5"
}


daprh's Issues

The performance of training the model: Duke-> Market1501

Good morning.

I have followed the instructions and trained the model, but the results are very different from the reported ones.
(I wonder if I have made any mistakes.)

Duke-> Market1501

I am training the model from scratch one more time to check whether I made any mistakes.
Could you please confirm whether the uploaded version is the newest one?
Thank you.

Error when running the command of fine tuning

Thank you for your great work!

Do you have any idea what causes this error?
target_features shape torch.Size([12936])
Traceback (most recent call last):
  File "_target_finetune.py", line 167, in gen_psuedo_labels
    rerank_dist = compute_jaccard_distance(cluster_features, print_flag=True, search_option=5) #for DBSCAN
  File "DAPRH/DAPRH/modules/utils/faiss_rerank.py", line 57, in compute_jaccard_distance
    index.add(target_features.cpu().numpy())
  File "/home/***/anaconda3/envs/py38/lib/python3.8/site-packages/faiss/__init__.py", line 213, in replacement_add
    n, d = x.shape
ValueError: not enough values to unpack (expected 2, got 1)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "_target_finetune.py", line 485, in <module>
    main()
  File "_target_finetune.py", line 128, in main
    main_worker(args)
  File "_target_finetune.py", line 290, in main_worker
    tmp_dataset, num_clusters, ce_scores, cep_scores = gen_psuedo_labels(target_dataset=dataset_target,
  File "_target_finetune.py", line 170, in gen_psuedo_labels
    rerank_dist = compute_jaccard_distance(cluster_features, print_flag=True, search_option=1) #for DBSCAN
  File "DAPRH/DAPRH/modules/utils/faiss_rerank.py", line 45, in compute_jaccard_distance
    index.add(target_features.cpu().numpy())
  File "/home/***/anaconda3/envs/py38/lib/python3.8/site-packages/faiss/__init__.py", line 213, in replacement_add
    n, d = x.shape
ValueError: not enough values to unpack (expected 2, got 1)

Thank you.

The bash commands to train GAN model

Good morning.
Thank you for your great work.
Can you confirm this path from your guide?
TRAIN_IMG_DIR="../../datasets/ReidGan/duke2mark/train"

Thanks
