A PyTorch implementation of the BMVC 2019 paper "Fast and Multilevel Semantic-Preserving Discrete Hashing".
- python >= 3.5.2
- pytorch >= 0.4
- MIRFLICKR-25K
- MSCOCO: train, val
```
usage: fmdh_mirflickr.py [-h] [--bits BITS] [--gpu GPU] [--arch ARCH]
                         [--max-iter MAX_ITER] [--epochs EPOCHS]
                         [--batch-size BATCH_SIZE] [--topk TOPK] [--m M]
                         [--m_wap M_WAP] [--C C] [--num-samples NUM_SAMPLES]
                         [--alpha ALPHA] [--beta BETA]
                         [--learning-rate LEARNING_RATE]

fmdh_mirflickr

optional arguments:
  -h, --help            show this help message and exit
  --bits BITS           binary code length (default: 16,32,64)
  --gpu GPU             selected gpu (default: 0)
  --arch ARCH           model name (default: resnet152)
  --max-iter MAX_ITER   maximum iteration (default: 20)
  --epochs EPOCHS       number of epochs (default: 1)
  --batch-size BATCH_SIZE
                        batch size (default: 64)
  --topk TOPK           top k (default: 100)
  --m M                 ndcg@m (default: 100)
  --m_wap M_WAP         wap@m (default: 100)
  --C C                 class number (default: 24)
  --num-samples NUM_SAMPLES
                        hyper-parameter: number of samples (default: 2000)
  --alpha ALPHA         hyper-parameter: alpha (default: 100000)
  --beta BETA           hyper-parameter: beta (default: 100)
  --learning-rate LEARNING_RATE
                        hyper-parameter: learning rate (default: 0.001)
```
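The `--m` flag above sets the cutoff for NDCG. For reference, a minimal NumPy sketch of NDCG@m under multi-level relevance (e.g. relevance of a retrieved item = number of labels it shares with the query) is shown below; this is an illustrative formulation, not the repository's own evaluation code, and the function name is ours:

```python
import numpy as np

def ndcg_at_m(relevances, m=100):
    """NDCG@m for one ranked list of multi-level relevances.

    `relevances` holds the relevance of each retrieved item in ranked
    order, e.g. the number of labels shared with the query. Assumed
    gain: 2^rel - 1, discounted by log2 of the position + 1.
    """
    rel = np.asarray(relevances, dtype=float)[:m]
    discounts = np.log2(np.arange(2, rel.size + 2))
    dcg = ((2.0 ** rel - 1.0) / discounts).sum()
    # Ideal ranking: the same relevances sorted in decreasing order.
    ideal = np.sort(np.asarray(relevances, dtype=float))[::-1][:m]
    idcg = ((2.0 ** ideal - 1.0) / discounts[:ideal.size]).sum()
    return dcg / idcg if idcg > 0 else 0.0
```

A perfectly ordered list scores 1.0; any swap that places a less relevant item first lowers the score.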
| Dataset | 16 bits | 32 bits | 64 bits |
| --- | --- | --- | --- |
| MIRFLICKR-25K | 0.5517 | 0.5793 | 0.6028 |
| MSCOCO | 0.5036 | 0.5370 | 0.5541 |
| Dataset | 16 bits | 32 bits | 64 bits |
| --- | --- | --- | --- |
| MIRFLICKR-25K | 2.501 | 2.739 | 2.760 |
| MSCOCO | 1.501 | 1.618 | 1.633 |
| Dataset | 16 bits | 32 bits | 64 bits |
| --- | --- | --- | --- |
| MIRFLICKR-25K | 2.536 | 2.723 | 2.722 |
| MSCOCO | 1.530 | 1.697 | 1.708 |
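For context on how a top-k retrieval metric of the kind controlled by `--topk` can be computed over a Hamming ranking, here is a minimal NumPy sketch of MAP@topk; it is not the repository's evaluation code, and the function and argument names are illustrative:

```python
import numpy as np

def map_at_k(query_codes, db_codes, query_labels, db_labels, topk=100):
    """Mean average precision over the Hamming ranking, cut at topk.

    Codes are {-1, +1} arrays of shape (n, bits); labels are multi-hot
    arrays of shape (n, num_classes). A database item counts as relevant
    to a query if the two share at least one label.
    """
    aps = []
    for q_code, q_label in zip(query_codes, query_labels):
        # Hamming distance via the inner product of {-1, +1} codes.
        hamming = (db_codes.shape[1] - db_codes @ q_code) / 2
        order = np.argsort(hamming)[:topk]
        relevant = (db_labels[order] @ q_label) > 0
        if relevant.sum() == 0:
            aps.append(0.0)
            continue
        ranks = np.arange(1, len(order) + 1)
        precision_at_i = np.cumsum(relevant) / ranks
        aps.append((precision_at_i * relevant).sum() / relevant.sum())
    return float(np.mean(aps))
```

With learned codes in hand, evaluation reduces to one `argsort` of Hamming distances per query, which is what makes retrieval with binary codes fast.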