
Official repository for the paper TRIDENT: Transductive Decoupled Variational Inference for Few-Shot Classification

Home Page: https://openreview.net/forum?id=bomdTc9HyL

License: MIT License

Topics: transductive-learning, variational-inference, ai, few-shot-learning, deep-learning

trident's Introduction

Codebase of TRIDENT

This is the official repository for Transductive Decoupled Variational Inference for Few-Shot Classification
(Anuj Singh, Hadi Jamali-Rad).


Now published in Transactions on Machine Learning Research (TMLR).

Abstract

The versatility to learn from a handful of samples is the hallmark of human intelligence. Few-shot learning is an endeavour to transcend this capability down to machines. Inspired by the promise and power of probabilistic deep learning, we propose a novel variational inference network for few-shot classification (coined as TRIDENT) to decouple the representation of an image into semantic and label latent variables, and simultaneously infer them in an intertwined fashion. To induce task-awareness, as part of the inference mechanics of TRIDENT, we exploit information across both query and support images of a few-shot task using a novel built-in attention-based transductive feature extraction module (we call AttFEX). Our extensive experimental results corroborate the efficacy of TRIDENT and demonstrate that, using the simplest of backbones, it sets a new state-of-the-art in the most commonly adopted datasets miniImageNet and tieredImageNet (offering up to 4% and 5% improvements, respectively), as well as for the recent challenging cross-domain miniImageNet → CUB scenario, offering a significant margin (up to 20% improvement) beyond the best existing cross-domain baselines.

Key Idea

The proposed approach is devised to learn meaningful representations that capture two pivotal characteristics of an image by modelling them as separate latent variables: (i) zc representing semantics, and (ii) zl embodying class labels. Inferring these two latent variables simultaneously allows zl to learn meaningful distributions of class-discriminating characteristics decoupled from semantic features represented by zc. We argue that learning zl as the sole latent variable for classification results in capturing a mixture of true label and other semantic information. This in turn can lead to sub-optimal classification performance, especially in a few-shot setting where the information per class is scarce and the network has to adapt and generalize quickly. By inferring decoupled label and semantics latent variables, we inject a handcrafted inductive-bias that incorporates only relevant characteristics, and thus, ameliorates the network's classification performance.
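As a rough illustration of this decoupling, here is a toy sketch, not the repo's actual architecture (the real modules, including the GaussianParametrizer heads, live in src/zoo/archs.py): shared features feed two independent Gaussian heads, and only the label latent drives classification.

import torch
import torch.nn as nn

class DecoupledLatents(nn.Module):
    # Toy model: one feature vector feeds two Gaussian heads, one for the
    # semantics latent z_c and one for the label latent z_l. Only z_l is
    # used for classification, mirroring the decoupling described above.
    def __init__(self, feat_dim=800, z_dim=64, n_ways=5):
        super().__init__()
        self.mu_c, self.logvar_c = nn.Linear(feat_dim, z_dim), nn.Linear(feat_dim, z_dim)
        self.mu_l, self.logvar_l = nn.Linear(feat_dim, z_dim), nn.Linear(feat_dim, z_dim)
        self.classifier = nn.Linear(z_dim, n_ways)

    @staticmethod
    def sample(mu, logvar):
        # Reparameterization trick: z = mu + sigma * eps, sigma = exp(logvar / 2)
        return mu + torch.randn_like(mu) * (0.5 * logvar).exp()

    def forward(self, feats):
        z_c = self.sample(self.mu_c(feats), self.logvar_c(feats))  # semantics
        z_l = self.sample(self.mu_l(feats), self.logvar_l(feats))  # class labels
        return self.classifier(z_l), z_c, z_l

In TRIDENT the two latents are additionally inferred in an intertwined fashion across support and query images via AttFEX; the sketch above only shows the decoupled parametrization.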

Walkthrough

Directories containing the mentioned files/scripts and their descriptions:

  • configs: Contains train and test configs for miniImagenet and tieredImagenet in the (5-way, 1-shot) and (5-way, 5-shot) settings. The params are set to their best hyperparameter values. For details on what each field of the .json files means, see their descriptions in src/trident_train.py and src/trident_test.py, and make sure the paths in the respective fields are set correctly (see the sketch after this list).
  • data: Contains the dataloaders in loaders.py and the task generators in taskers.py.
  • dataset: This is where the .tar's of all the datasets are to be extracted (read more about this in the Data section below).
  • logs: This is where the .csv logs generated by the train/test scripts are saved. Set the path to this directory in the log_path field of the .json configs.
  • models: The best models for each setting are kept here; they are loaded and run by trident_test.py for their corresponding settings. We obtained the best model at the 82,000th and 67,500th iterations for the (5-way, 1-shot) mini and tieredImagenet tasks, respectively, and at the 22,500th and 48,000th iterations for the (5-way, 5-shot) mini and tieredImagenet tasks, respectively.
  • src/zoo: Contains all the model architectures in archs.py, and the loss functions, inner-update function, and task loaders in trident_utils.py.
  • src: Contains the trident_train.py and trident_test.py scripts, and the utils.py script responsible for logging and for saving .csv logs, model .pt files, and latent .pt files.
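A minimal sketch of such a path check before launching a run (root and log_path are the dataset and log-directory fields mentioned above; the config path follows the layout used elsewhere on this page):

import json, os

with open('configs/mini-5,1/train_conf.json') as f:
    cfg = json.load(f)

# 'root' (dataset location) and 'log_path' (log directory) should point
# at existing directories before training starts.
for key in ('root', 'log_path'):
    if key in cfg:
        print(f"{key} -> {cfg[key]} (exists: {os.path.isdir(cfg[key])})")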

Data:

We use the miniImagenet and tieredImagenet datasets provided by Ren et al., 2018, "Meta-Learning for Semi-Supervised Few-Shot Classification" (ICLR '18), and the CUB200-2011 dataset from here, making use of the splits given by Chen et al.

  1. miniImagenet: Use this link to download the dataset. Extract the .tar file into the dataset directory.
  2. tieredImagenet: Use this link to download the dataset. Extract the .tar file into the dataset directory.
  3. CUB200-2011: Use this link to download the dataset. Extract the .tar file into the cubirds200 subdirectory of the dataset directory.
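After extraction, the dataset directory should look roughly as follows. The mini_imagenet folder name matches the "root" field of the config shown later on this page; the tiered_imagenet folder name is an assumption, so check the "root" field of its configs:

dataset/
├── mini_imagenet/      <- miniImagenet .tar extracted here
├── tiered_imagenet/    <- tieredImagenet .tar extracted here (name assumed)
└── cubirds200/         <- CUB200-2011 .tar extracted here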

Run Scripts:

First, use the requirements.txt file to create an environment containing all the necessary libraries and packages (e.g., pip install -r requirements.txt inside a fresh virtual environment). Then use these commands to run the train and test scripts:

python -m src.trident_train --cnfg PATH_TO_CONFIG.JSON
python -m src.trident_test --cnfg PATH_TO_CONFIG.JSON

The trained models for all the settings and datasets have been provided here.

Analyze the logs:

Run the analyze.ipynb notebook to analyze the logs generated by the train/test scripts.
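If you prefer to inspect the logs directly, a minimal sketch along these lines also works (the .csv file name below is hypothetical; use whatever appears under your log_path for your experiment):

import pandas as pd

df = pd.read_csv('logs/exp1_train.csv')  # hypothetical file name under log_path
print(df.columns.tolist())               # see which fields were actually logged
print(df.tail())                         # e.g. loss/accuracy over the last iterations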

Contact

Corresponding author: Anuj Singh ([email protected]; [email protected])

References

This repository utilizes and builds on top of the learn2learn software library for meta-learning research.

Citation

@article{singh2023transductive,
  title={Transductive Decoupled Variational Inference for Few-Shot Classification},
  author={Anuj Rajeeva Singh and Hadi Jamali-Rad},
  journal={Transactions on Machine Learning Research},
  issn={2835-8856},
  year={2023},
  url={https://openreview.net/forum?id=bomdTc9HyL}
}

trident's People

Contributors

anujinho


trident's Issues

RuntimeError: mat1 and mat2 shapes cannot be multiplied (GaussianParametrizer)

Hi @anujinho,
I am trying to reproduce your TRIDENT CCVAE model; however, I am not able to pass inputs through the learner/model.
Below is the model:

MAML(
  (module): CCVAE(
    (encoder): CEncoder(
      (net): Sequential(
        (0): Conv2d(3, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): LeakyReLU(negative_slope=0.2)
        (3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
        (4): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (5): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (6): LeakyReLU(negative_slope=0.2)
        (7): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
        (8): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (9): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (10): LeakyReLU(negative_slope=0.2)
        (11): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
        (12): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (13): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (14): LeakyReLU(negative_slope=0.2)
        (15): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
        (16): Flatten(start_dim=1, end_dim=-1)
      )
    )
    (decoder): CDecoder(
      (linear): Sequential(
        (0): Linear(in_features=128, out_features=800, bias=True)
        (1): LeakyReLU(negative_slope=0.2)
      )
      (net): Sequential(
        (0): UpsamplingNearest2d(size=(10, 10), mode=nearest)
        (1): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=same)
        (2): LeakyReLU(negative_slope=0.2)
        (3): UpsamplingNearest2d(size=(21, 21), mode=nearest)
        (4): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=same)
        (5): LeakyReLU(negative_slope=0.2)
        (6): UpsamplingNearest2d(size=(42, 42), mode=nearest)
        (7): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=same)
        (8): LeakyReLU(negative_slope=0.2)
        (9): UpsamplingNearest2d(size=(84, 84), mode=nearest)
        (10): Conv2d(32, 3, kernel_size=(3, 3), stride=(1, 1), padding=same)
        (11): Sigmoid()
      )
    )
    (classifier_vae): Classifier_VAE(
      (encoder): TADCEncoder(
        (net): Sequential(
          (0): Conv2d(3, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): LeakyReLU(negative_slope=0.2)
          (3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
          (4): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (5): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (6): LeakyReLU(negative_slope=0.2)
          (7): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
          (8): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (9): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (10): LeakyReLU(negative_slope=0.2)
          (11): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
          (12): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (13): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (14): LeakyReLU(negative_slope=0.2)
          (15): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
        )
        (fe): Sequential(
          (0): Conv2d(1, 64, kernel_size=(110, 1), stride=(1, 1), padding=valid, bias=False)
          (1): LeakyReLU(negative_slope=0.2)
          (2): Conv2d(64, 32, kernel_size=(1, 1), stride=(1, 1), padding=valid, bias=False)
          (3): LeakyReLU(negative_slope=0.2)
        )
        (f_q): Conv2d(32, 1, kernel_size=(1, 1), stride=(1, 1), padding=valid, bias=False)
        (f_k): Conv2d(32, 1, kernel_size=(1, 1), stride=(1, 1), padding=valid, bias=False)
        (f_v): Conv2d(32, 1, kernel_size=(1, 1), stride=(1, 1), padding=valid, bias=False)
      )
      (gaussian_parametrizer): GaussianParametrizer(
        (h1): Linear(in_features=864, out_features=64, bias=True)
        (h2): Linear(in_features=864, out_features=64, bias=True)
      )
      (classifier): Sequential(
        (0): Linear(in_features=64, out_features=32, bias=True)
        (1): LeakyReLU(negative_slope=0.2)
        (2): Linear(in_features=32, out_features=10, bias=True)
      )
    )
    (gaussian_parametrizer): GaussianParametrizer(
      (h1): Linear(in_features=800, out_features=64, bias=True)
      (h2): Linear(in_features=800, out_features=64, bias=True)
    )
  )
)

And below is the minimized stack trace:

Traceback (most recent call last):
  ...
  File ".../trident.py", line 105, in _train_epoch
    eval_loss, eval_acc = inner_adapt_trident(ttask, reconst_loss, 
  File ".../utils.py", line 146, in inner_adapt_trident
    reconst_image, logits, mu_l, log_var_l, mu_s, log_var_s = learner(
 ...
  File ".../archs.py", line 812, in forward
    mu_s, log_var_s = self.gaussian_parametrizer(xs)
  ...
  File ".../archs.py", line 404, in forward
    mu = self.h1(x)
  ...
RuntimeError: mat1 and mat2 shapes cannot be multiplied (10x128 and 800x64)

Also, below are the hyperparameters:

n_ways: 5
k_shots: 1 
q_shots: 10 
meta_batch_size: 20
order: False
inner_lr: 0.0014
task_adapt: True
zl: 64
zs: 64
reconstr: std
dataset: cifarfs
wm_channels: 64
wn_channels: 32
download: False
extra: False
adapt_steps_train: 5
adapt_steps_test: 5

Basically, the output shape of CEncoder and the input shape of GaussianParametrizer don't match.
Can you please help resolve this issue? I will look forward to hearing from you soon. Thanks!
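For reference, the 128-vs-800 mismatch is consistent with the flattened feature size depending on the input resolution. A sketch of the arithmetic, assuming the four stride-2 MaxPool2d layers shown in the CEncoder printout above:

def flattened_size(hw, channels=32, n_pools=4):
    # Spatial size after n_pools stride-2 max-pools, times channel count.
    for _ in range(n_pools):
        hw //= 2
    return channels * hw * hw

print(flattened_size(84))  # 800 = 32 * 5 * 5 for 84x84 miniImagenet inputs
print(flattened_size(32))  # 128 = 32 * 2 * 2 for 32x32 CIFAR-FS inputs

So with dataset: cifarfs the encoder emits 32 x 2 x 2 = 128 features, while the GaussianParametrizer linears are sized for 84x84 inputs (32 x 5 x 5 = 800); sizing those linear layers to the dataset's resolution (or resizing inputs to 84x84) would be the likely fix.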

Where can I find the inference script?

Dear Anuj,

I am very impressed by your work and I would like to try to replicate your training to understand how your method works.
I have completed training and testing on my own data, but I run into a problem when I want to run inference on a single image. Looking at the algorithm, I found that each inference pass needs multiple input images (support-set and query-set images). How can I run inference on a single image and obtain its prediction?
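One hedged sketch of how single-image inference could be set up; the file names and task layout below are assumptions for illustration, not the repo's confirmed API (the actual forward signature lives in src/zoo/archs.py and the adaptation loop in src/zoo/trident_utils.py). Keep a labelled support set around, adapt on it as during testing, and pass the single image as a one-element query set:

import torch

# Hypothetical: a cached (support images, support labels) pair for the
# classes of interest, saved once from an earlier task.
support_x, support_y = torch.load('support_set.pt')

# Hypothetical: the single image to classify, shaped (1, 3, 84, 84).
query_x = torch.load('single_image.pt')

# The transductive model always consumes a full task, so the "task" here
# is the stored support set plus a query set of size one:
task_images = torch.cat([support_x, query_x], dim=0)
# ...then adapt the learner on (support_x, support_y) exactly as in
# trident_test.py, and read the logits at the last (query) position.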

How did you run the 100,000 iterations? It seems it will take a week to train even one setting.

Dear Anuj,

I am very impressed by your work and I would like to try to replicate your training to understand how your method works. I was wondering how you managed to train the models, because it seems it will take me 6-7 days to train a model using the iteration counts set in the configs. Did you parallelize it somehow, and if so, do you have any instructions for that?

Regards,
Michalis

CUDA out of memory on an 11GB NVIDIA 2080Ti GPU.

Hi @anujinho
I am trying to reproduce your TRIDENT CCVAE model; however, I am not able to solve a CUDA out-of-memory error on an 11GB NVIDIA 2080Ti GPU. Even if I lower meta_batch_size (e.g. 10, 4, 1), I can't get it to work.
Below are my running configuration, parameters, and a screenshot:

e.g. mini-5,1/train_conf.json

1. Running configuration:
python -m src.trident_train --cnfg /home/zzh/projectLists/trident/configs/mini-5,1/train_conf.json

2. train_conf.json hyperparameters:

{
  "dataset": "miniimagenet",
  "root": "./dataset/mini_imagenet",
  "n_ways": 5,
  "k_shots": 1,
  "q_shots": 10,
  "inner_adapt_steps_train": 5,
  "inner_adapt_steps_test": 5,
  "inner_lr": 0.001,
  "meta_lr": 0.0001,
  "meta_batch_size": 20,
  "iterations": 100000,
  "reconstr": "std",
  "wt_ce": 100,
  "klwt": "False",
  "rec_wt": 0.01,
  "beta_l": 1,
  "beta_s": 1,
  "zl": 64,
  "zs": 64,
  "task_adapt": "True",
  "experiment": "exp1",
  "order": "False",
  "device": "cuda:3"
}

3. Screenshot of results:

  0%|  | 0/100000 [00:00<?, ?it/s]/home/zzh/anaconda3/envs/tip/lib/python3.9/site-packages/torch/_tensor.py:1013: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the .grad field to be populated for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations. (Triggered internally at  /opt/conda/conda-bld/pytorch_1639180549130/work/build/aten/src/ATen/core/TensorBody.h:417.)
  return self._grad
  0%|                                                                                 | 4/100000 [00:27<193:24:54,  6.96s/it]
Traceback (most recent call last):
  File "/home/zzh/anaconda3/envs/tip/lib/python3.9/runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/zzh/anaconda3/envs/tip/lib/python3.9/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/zzh/projectLists/trident/src/trident_train.py", line 103, in <module>
    evaluation_loss, evaluation_accuracy = inner_adapt_trident(
  File "/home/zzh/projectLists/trident/src/zoo/trident_utils.py", line 125, in inner_adapt_trident
    reconst_image, logits, mu_l, log_var_l, mu_s, log_var_s = learner(
  File "/home/zzh/anaconda3/envs/tip/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/zzh/anaconda3/envs/tip/lib/python3.9/site-packages/learn2learn/algorithms/maml.py", line 107, in forward
    return self.module(*args, **kwargs)
  File "/home/zzh/anaconda3/envs/tip/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/zzh/projectLists/trident/src/zoo/archs.py", line 814, in forward
    logits, mu_l, log_var_l, z_l = self.classifier_vae(x, z_s, update)
  File "/home/zzh/anaconda3/envs/tip/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/zzh/projectLists/trident/src/zoo/archs.py", line 752, in forward
    x = self.encoder(x, update)
  File "/home/zzh/anaconda3/envs/tip/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/zzh/projectLists/trident/src/zoo/archs.py", line 533, in forward
    x = self.net(x)
  File "/home/zzh/anaconda3/envs/tip/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/zzh/anaconda3/envs/tip/lib/python3.9/site-packages/torch/nn/modules/container.py", line 141, in forward
    input = module(input)
  File "/home/zzh/anaconda3/envs/tip/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/zzh/anaconda3/envs/tip/lib/python3.9/site-packages/torch/nn/modules/activation.py", line 738, in forward
    return F.leaky_relu(input, self.negative_slope, self.inplace)
  File "/home/zzh/anaconda3/envs/tip/lib/python3.9/site-packages/torch/nn/functional.py", line 1475, in leaky_relu
    result = torch._C._nn.leaky_relu(input, negative_slope)

RuntimeError: CUDA out of memory. Tried to allocate 48.00 MiB (GPU 3; 10.76 GiB total capacity; 9.63 GiB already allocated; 49.12 MiB free; 9.64 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF


I also learned from the issues that the code in this paper does not run in a distributed manner. However, I tried many methods and could not solve it. Can you please help resolve this issue or offer some suggestions? I will look forward to hearing from you soon. Thanks!
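For what it's worth, a couple of knobs usually dominate memory in this kind of setup. With "order": "False" the meta-update is second-order MAML, so the activations of all five inner steps are retained for the meta-gradient; learn2learn exposes a first_order flag that drops those terms (whether TRIDENT's accuracy survives first-order training is an assumption to verify), and lowering q_shots shrinks each task. A minimal sketch using a stand-in module:

import torch
import learn2learn as l2l

model = torch.nn.Linear(10, 5)  # stand-in for the CCVAE module
# first_order=True skips the second-order terms, so inner-loop activations
# need not be kept around for the meta-gradient.
maml = l2l.algorithms.MAML(model, lr=0.001, first_order=True)
learner = maml.clone()  # per-task learner, as in the training loop

If meta_batch_size only controls how many task gradients are accumulated before the meta-update (a reading of the training loop that should be verified), per-task memory would be unaffected by it, which would explain why lowering it did not help.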
