
anujinho / trident


Official repository for the paper TRIDENT: Transductive Decoupled Variational Inference for Few Shot Classification

Home Page: https://openreview.net/forum?id=bomdTc9HyL

License: MIT License

Python 89.70% Jupyter Notebook 10.30%
transductive-learning variational-inference ai few-shot-learning deep-learning

trident's Issues

Where can I find the inference script file?

Dear Anuj,

I am very impressed by your work and I would like to try to replicate your training to understand how your method works.
I have completed training and testing on my own data, but I run into a problem when I want to run inference on a single image. Looking at the algorithm, I found that each inference pass takes multiple images as input (support-set and query-set images). How can I run inference on a single image and obtain a prediction for just that one image?
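
A possible workaround, sketched below under stated assumptions: keep the episodic setup, adapt the learner on a labeled support set as usual, and then pass a query batch containing only the one image of interest. The helper names are hypothetical, not the repo's API; only the six return values follow the tracebacks later on this page. Note that TRIDENT is transductive, so predictions can depend on which queries are batched together, and a single-image query set may not match the paper's evaluation setting.

import torch

# Minimal sketch (not the repo's API): classify ONE image with a learner
# that has already been adapted on the episode's labeled support set.
# `preprocess` is a hypothetical transform matching the training pipeline,
# e.g. resize to 84x84 and normalize for mini-ImageNet.
def predict_single(adapted_learner, image, preprocess):
    x = preprocess(image).unsqueeze(0)   # batch of one: (1, 3, H, W)
    with torch.no_grad():
        # Per the tracebacks below, the model returns
        # (reconst_image, logits, mu_l, log_var_l, mu_s, log_var_s).
        _, logits, *_ = adapted_learner(x)
    return logits.argmax(dim=-1).item()  # class index among the n_ways classes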

How did you run the 100000 iterations? It seems it will take about a week to train even one setting

Dear Anuj,

I am very impressed by your work and would like to replicate your training to understand how your method works. I was wondering how you managed to train the models, because it looks like it will take me 6-7 days to train one model with the iteration count set in the configs. Did you parallelize training somehow, and if so, do you have any instructions for that?

Regards,
Michalis
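
For anyone with the same question: one common way to cut wall-clock time, sketched below, is to split the meta-batch across GPUs and average the meta-gradients by hand with torch.distributed. Nothing here comes from the repo; the episode sampler and loss function are hypothetical stand-ins, and only MAML.clone()/.adapt() are real learn2learn calls. Plain DistributedDataParallel is avoided because its gradient hooks assume the wrapped module's own forward is called, which MAML-style cloned modules bypass.

import torch
import torch.distributed as dist

# Sketch: each rank runs tasks_per_rank = meta_batch_size // world_size
# tasks per iteration; `maml` is a learn2learn MAML-wrapped model.
def meta_step(maml, meta_opt, sample_episode, loss_fn,
              tasks_per_rank, adapt_steps):
    meta_opt.zero_grad()
    meta_loss = 0.0
    for _ in range(tasks_per_rank):
        learner = maml.clone()                 # learn2learn MAML API
        support, query = sample_episode()      # hypothetical episode sampler
        for _ in range(adapt_steps):
            learner.adapt(loss_fn(learner, support))
        meta_loss = meta_loss + loss_fn(learner, query)
    (meta_loss / tasks_per_rank).backward()
    for p in maml.parameters():                # manual meta-gradient averaging
        if p.grad is not None:
            dist.all_reduce(p.grad, op=dist.ReduceOp.SUM)
            p.grad /= dist.get_world_size()
    meta_opt.step()

With 4 GPUs this would put 5 of the 20 tasks on each rank per iteration, so the 6-7 day estimate should drop roughly fourfold, minus communication overhead.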

RuntimeError: mat1 and mat2 shapes cannot be multiplied (GaussianParametrizer)

Hi @anujinho,
I am trying to reproduce your TRIDENT CCVAE model; however, I am not able to pass inputs through the learner/model.
Below is the model:

MAML(
  (module): CCVAE(
    (encoder): CEncoder(
      (net): Sequential(
        (0): Conv2d(3, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): LeakyReLU(negative_slope=0.2)
        (3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
        (4): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (5): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (6): LeakyReLU(negative_slope=0.2)
        (7): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
        (8): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (9): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (10): LeakyReLU(negative_slope=0.2)
        (11): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
        (12): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (13): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (14): LeakyReLU(negative_slope=0.2)
        (15): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
        (16): Flatten(start_dim=1, end_dim=-1)
      )
    )
    (decoder): CDecoder(
      (linear): Sequential(
        (0): Linear(in_features=128, out_features=800, bias=True)
        (1): LeakyReLU(negative_slope=0.2)
      )
      (net): Sequential(
        (0): UpsamplingNearest2d(size=(10, 10), mode=nearest)
        (1): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=same)
        (2): LeakyReLU(negative_slope=0.2)
        (3): UpsamplingNearest2d(size=(21, 21), mode=nearest)
        (4): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=same)
        (5): LeakyReLU(negative_slope=0.2)
        (6): UpsamplingNearest2d(size=(42, 42), mode=nearest)
        (7): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=same)
        (8): LeakyReLU(negative_slope=0.2)
        (9): UpsamplingNearest2d(size=(84, 84), mode=nearest)
        (10): Conv2d(32, 3, kernel_size=(3, 3), stride=(1, 1), padding=same)
        (11): Sigmoid()
      )
    )
    (classifier_vae): Classifier_VAE(
      (encoder): TADCEncoder(
        (net): Sequential(
          (0): Conv2d(3, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): LeakyReLU(negative_slope=0.2)
          (3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
          (4): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (5): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (6): LeakyReLU(negative_slope=0.2)
          (7): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
          (8): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (9): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (10): LeakyReLU(negative_slope=0.2)
          (11): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
          (12): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (13): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (14): LeakyReLU(negative_slope=0.2)
          (15): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
        )
        (fe): Sequential(
          (0): Conv2d(1, 64, kernel_size=(110, 1), stride=(1, 1), padding=valid, bias=False)
          (1): LeakyReLU(negative_slope=0.2)
          (2): Conv2d(64, 32, kernel_size=(1, 1), stride=(1, 1), padding=valid, bias=False)
          (3): LeakyReLU(negative_slope=0.2)
        )
        (f_q): Conv2d(32, 1, kernel_size=(1, 1), stride=(1, 1), padding=valid, bias=False)
        (f_k): Conv2d(32, 1, kernel_size=(1, 1), stride=(1, 1), padding=valid, bias=False)
        (f_v): Conv2d(32, 1, kernel_size=(1, 1), stride=(1, 1), padding=valid, bias=False)
      )
      (gaussian_parametrizer): GaussianParametrizer(
        (h1): Linear(in_features=864, out_features=64, bias=True)
        (h2): Linear(in_features=864, out_features=64, bias=True)
      )
      (classifier): Sequential(
        (0): Linear(in_features=64, out_features=32, bias=True)
        (1): LeakyReLU(negative_slope=0.2)
        (2): Linear(in_features=32, out_features=10, bias=True)
      )
    )
    (gaussian_parametrizer): GaussianParametrizer(
      (h1): Linear(in_features=800, out_features=64, bias=True)
      (h2): Linear(in_features=800, out_features=64, bias=True)
    )
  )
)

And below is the minimized stack trace:

Traceback (most recent call last):
  ...
  File ".../trident.py", line 105, in _train_epoch
    eval_loss, eval_acc = inner_adapt_trident(ttask, reconst_loss, 
  File ".../utils.py", line 146, in inner_adapt_trident
    reconst_image, logits, mu_l, log_var_l, mu_s, log_var_s = learner(
 ...
  File ".../archs.py", line 812, in forward
    mu_s, log_var_s = self.gaussian_parametrizer(xs)
  ...
  File ".../archs.py", line 404, in forward
    mu = self.h1(x)
  ...
RuntimeError: mat1 and mat2 shapes cannot be multiplied (10x128 and 800x64)

Also, below are the hyperparameters:

n_ways: 5
k_shots: 1 
q_shots: 10 
meta_batch_size: 20
order: False
inner_lr: 0.0014
task_adapt: True
zl: 64
zs: 64
reconstr: std
dataset: cifarfs
wm_channels: 64
wn_channels: 32
download: False
extra: False
adapt_steps_train: 5
adapt_steps_test: 5

Basically, the output shape of CEncoder and the input shape of GaussianParametrizer don't match.
Can you please help resolve this issue? I will look forward to hearing from you soon. Thanks!
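
For what it's worth, the numbers in the error are consistent with mini-ImageNet-sized layers being fed CIFAR-FS-sized images. The decoder's upsampling chain (10 -> 21 -> 42 -> 84) implies an 84x84 pipeline: four conv(padding=1) + maxpool(2,2) stages turn 84x84 into 5x5, i.e. 32*5*5 = 800 features, matching h1's in_features=800. A 32x32 CIFAR-FS image instead comes out 2x2, i.e. 32*2*2 = 128 features, matching the 10x128 mat1 in the error. A minimal check of that arithmetic:

def cencoder_out_features(img_size, channels=32, stages=4):
    # Each CEncoder stage: Conv2d(padding=1) keeps the spatial size,
    # MaxPool2d(kernel_size=2, stride=2) halves it (floor division).
    h = img_size
    for _ in range(stages):
        h //= 2
    return channels * h * h

print(cencoder_out_features(84))  # 800 -> matches Linear(in_features=800, ...)
print(cencoder_out_features(32))  # 128 -> matches mat1 of shape 10x128

So for dataset: cifarfs the Gaussian parametrizer (and the decoder's first linear layer) would presumably need in_features=128, or the inputs would have to be resized to 84x84; which of the two the repo intends is a question for the author.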

CUDA out of memory on an 11GB NVIDIA 2080Ti GPU.

Hi @anujinho
I am trying to reproduce your TRIDENT CCVAE model; however, I am not able to solve this problem: CUDA out of memory on an 11GB NVIDIA 2080Ti GPU. Even if I lower the meta_batch_size (e.g. 10, 4, 1), I can't get it to work.
Below are my run command, hyperparameters, and a screenshot of the results:

e.g. mini-5,1/train_conf.json

1. Run command:
python -m src.trident_train --cnfg /home/zzh/projectLists/trident/configs/mini-5,1/train_conf.json

2. train_conf.json hyperparameters:

{
  "dataset": "miniimagenet",
  "root": "./dataset/mini_imagenet",
  "n_ways": 5,
  "k_shots": 1,
  "q_shots": 10,
  "inner_adapt_steps_train": 5,
  "inner_adapt_steps_test": 5,
  "inner_lr": 0.001,
  "meta_lr": 0.0001,
  "meta_batch_size": 20,
  "iterations": 100000,
  "reconstr": "std",
  "wt_ce": 100,
  "klwt": "False",
  "rec_wt": 0.01,
  "beta_l": 1,
  "beta_s": 1,
  "zl": 64,
  "zs": 64,
  "task_adapt": "True",
  "experiment": "exp1",
  "order": "False",
  "device": "cuda:3"
}

3. Screenshot of results:

  0%|  | 0/100000 [00:00<?, ?it/s]/home/zzh/anaconda3/envs/tip/lib/python3.9/site-packages/torch/_tensor.py:1013: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the .grad field to be populated for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations. (Triggered internally at  /opt/conda/conda-bld/pytorch_1639180549130/work/build/aten/src/ATen/core/TensorBody.h:417.)
  return self._grad
  0%|                                                                                 | 4/100000 [00:27<193:24:54,  6.96s/it]
Traceback (most recent call last):
  File "/home/zzh/anaconda3/envs/tip/lib/python3.9/runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/zzh/anaconda3/envs/tip/lib/python3.9/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/zzh/projectLists/trident/src/trident_train.py", line 103, in <module>
    evaluation_loss, evaluation_accuracy = inner_adapt_trident(
  File "/home/zzh/projectLists/trident/src/zoo/trident_utils.py", line 125, in inner_adapt_trident
    reconst_image, logits, mu_l, log_var_l, mu_s, log_var_s = learner(
  File "/home/zzh/anaconda3/envs/tip/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/zzh/anaconda3/envs/tip/lib/python3.9/site-packages/learn2learn/algorithms/maml.py", line 107, in forward
    return self.module(*args, **kwargs)
  File "/home/zzh/anaconda3/envs/tip/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/zzh/projectLists/trident/src/zoo/archs.py", line 814, in forward
    logits, mu_l, log_var_l, z_l = self.classifier_vae(x, z_s, update)
  File "/home/zzh/anaconda3/envs/tip/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/zzh/projectLists/trident/src/zoo/archs.py", line 752, in forward
    x = self.encoder(x, update)
  File "/home/zzh/anaconda3/envs/tip/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/zzh/projectLists/trident/src/zoo/archs.py", line 533, in forward
    x = self.net(x)
  File "/home/zzh/anaconda3/envs/tip/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/zzh/anaconda3/envs/tip/lib/python3.9/site-packages/torch/nn/modules/container.py", line 141, in forward
    input = module(input)
  File "/home/zzh/anaconda3/envs/tip/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/zzh/anaconda3/envs/tip/lib/python3.9/site-packages/torch/nn/modules/activation.py", line 738, in forward
    return F.leaky_relu(input, self.negative_slope, self.inplace)
  File "/home/zzh/anaconda3/envs/tip/lib/python3.9/site-packages/torch/nn/functional.py", line 1475, in leaky_relu
    result = torch._C._nn.leaky_relu(input, negative_slope)

RuntimeError: CUDA out of memory. Tried to allocate 48.00 MiB (GPU 3; 10.76 GiB total capacity; 9.63 GiB already allocated; 49.12 MiB free; 9.64 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF


I also learned from the issues that the code in this paper does not run in a distributed manner. However, I tried many approaches and could not solve it. Can you please help resolve this issue or give suggestions? I will look forward to hearing from you soon. Thanks!
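
Two knobs that sometimes help in this exact situation, sketched below. Both are assumptions, not fixes confirmed by the repo: the max_split_size_mb allocator option that the OOM message itself suggests, and learn2learn's first_order=True flag, which avoids retaining second-order graphs through the inner-loop adaptation (whether the config's "order" key maps onto that flag is unverified).

import os
# Must be set before the first CUDA allocation, per the error message's hint.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch
import learn2learn as l2l

model = torch.nn.Linear(8, 8)  # stand-in for the CCVAE module printed above
# first_order=True drops second-order meta-gradient terms, so the graphs of
# the inner adapt steps need not be kept alive; results may differ slightly.
maml = l2l.algorithms.MAML(model, lr=0.001, first_order=True)

If lowering meta_batch_size alone does not help (as reported above), the inner-loop graphs retained for second-order MAML are the more likely culprit, which is what first_order targets; each task already holds 55 images of 84x84 (5 support + 50 query) plus their activations through the adapt steps.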
