
Easy Few-Shot Learning


Ready-to-use code and tutorial notebooks to boost your way into few-shot image classification. This repository is made for you if:

  • you're new to few-shot learning and want to learn;
  • or you're looking for reliable, clear and easily usable code that you can use for your projects.

Don't get lost in large repositories with hundreds of methods and no explanation on how to use them. Here, we want each line of code to be covered by a tutorial.

What's in there?

Notebooks: learn and practice

You want to learn few-shot learning and don't know where to start? Start with our tutorials.

| Notebook | Description | Colab |
|---|---|---|
| First steps into few-shot image classification | Basically Few-Shot Learning 101, in less than 15min. | Open In Colab |
| Example of episodic training | Use it as a starting point if you want to design a script for episodic training using EasyFSL. | Open In Colab |
| Example of classical training | Use it as a starting point if you want to design a script for classical training using EasyFSL. | Open In Colab |
| Test with pre-extracted embeddings | Most few-shot methods use a frozen backbone at test-time. With EasyFSL, you can extract all embeddings for your dataset once and for all, and then perform inference directly on embeddings. | Open In Colab |

Code that you can use and understand

State-Of-The-Art Few-Shot Learning methods:

With 11 built-in methods, EasyFSL is the most comprehensive open-source Few-Shot Learning library!

We also provide a FewShotClassifier class to quickstart your implementation of any few-shot classification algorithm, as well as commonly used architectures.

See the benchmarks section below for more details on the methods.

Tools for data loading:

Data loading in FSL is a bit different from standard classification, because we sample batches of instances in the shape of few-shot classification tasks. No sweat! In EasyFSL you have (see the sketch after this list):

  • TaskSampler: an extension of the standard PyTorch Sampler object, to sample batches in the shape of few-shot classification tasks
  • FewShotDataset: an abstract class to standardize the interface of any dataset you'd like to use
  • EasySet: a ready-to-use FewShotDataset object to handle datasets of images with a class-wise directory split
  • WrapFewShotDataset: a wrapper to transform any dataset into a FewShotDataset object
  • FeaturesDataset: a dataset to handle pre-extracted features
  • SupportSetFolder: a dataset to handle support sets stored in a directory
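
For instance, here is a minimal sketch of how these pieces fit together (the spec file path and task parameters are placeholders):

from torch.utils.data import DataLoader

from easyfsl.datasets import EasySet
from easyfsl.samplers import TaskSampler

# Hypothetical spec file: EasySet reads a JSON file describing class directories.
test_set = EasySet(specs_file="data/my_dataset/test.json", training=False)

# Sample 100 tasks of 5 classes, with 5 support and 10 query images each.
sampler = TaskSampler(test_set, n_way=5, n_shot=5, n_query=10, n_tasks=100)

test_loader = DataLoader(
    test_set,
    batch_sampler=sampler,
    collate_fn=sampler.episodic_collate_fn,  # reshapes batches into few-shot tasks
)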

Scripts to reproduce our benchmarks:

  • scripts/predict_embeddings.py to extract all embeddings from a dataset with a given pre-trained backbone;
  • scripts/benchmark_methods.py to evaluate a method on a test dataset using pre-extracted embeddings.

And also: some utilities that I found myself using often in my research, so I'm sharing them with you.

Datasets to test your model

There are enough datasets used in Few-Shot Learning for anyone to get lost in them. They're all here in EasyFSL: documented, downloadable, and easy to use.

CU-Birds

We provide a make download-cub recipe to download and extract the dataset, along with the standard (train / val / test) split along classes. Once you've downloaded the dataset, you can instantiate the Dataset objects in your code with this super complicated process:

from easyfsl.datasets import CUB

train_set = CUB(split="train", training=True)
test_set = CUB(split="test", training=False)

tieredImageNet

To use it, you need the ILSVRC2015 dataset. Once you have downloaded and extracted it, ensure that its location on disk is consistent with the class paths specified in the specification files. Then:

from easyfsl.datasets import TieredImageNet

train_set = TieredImageNet(split="train", training=True)
test_set = TieredImageNet(split="test", training=False)

miniImageNet

As with tieredImageNet, we provide the specification files, but you need the ILSVRC2015 dataset. Once you have it:

from easyfsl.datasets import MiniImageNet

train_set = MiniImageNet(root="where/imagenet/is", split="train", training=True)
test_set = MiniImageNet(root="where/imagenet/is", split="test", training=False)

Since miniImageNet is relatively small, you can also load it into RAM directly at instantiation, simply by adding load_on_ram=True to the constructor. It takes a few minutes, but it can make your training significantly faster!
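
For example (assuming the same root as above):

train_set = MiniImageNet(
    root="where/imagenet/is", split="train", training=True, load_on_ram=True
)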

Danish Fungi

I've recently started using it as a Few-Shot Learning benchmark, and I can tell you it's a great playing field. To use it, first download the data:

# Download the original dataset (/!\ 110GB)
wget http://ptak.felk.cvut.cz/plants/DanishFungiDataset/DF20-train_val.tar.gz
# Or alternatively the images reduced to 300px (6.5GB)
wget http://ptak.felk.cvut.cz/plants/DanishFungiDataset/DF20-300px.tar.gz
# And finally download the metadata (83MB) to data/fungi/
wget https://public-sicara.s3.eu-central-1.amazonaws.com/easy-fsl/DF20_metadata.csv -O data/fungi/DF20_metadata.csv

And then instantiate the dataset with the same process as always:

from easyfsl.datasets import DanishFungi

dataset = DanishFungi(root="where/fungi/is")

Note that I didn't specify a train and test set, because the CSV provided here describes the whole dataset. I recommend using it to test models with weights trained on another dataset (like ImageNet). But if you want to propose a train/val/test split along classes, you're welcome to contribute!

QuickStart

  1. Install the package: pip install easyfsl or simply fork the repository.

  2. Download your data.

  3. Design your training and evaluation scripts. You can use our example notebooks for episodic training or classical training, or start from the sketch below.
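
As a rough sketch of what an evaluation script can look like (the task settings are placeholders and exact import paths may differ between versions; see the notebooks for complete, tested versions):

from torch.utils.data import DataLoader

from easyfsl.datasets import CUB
from easyfsl.methods import PrototypicalNetworks
from easyfsl.modules import resnet12
from easyfsl.samplers import TaskSampler
from easyfsl.utils import evaluate

test_set = CUB(split="test", training=False)
sampler = TaskSampler(test_set, n_way=5, n_shot=5, n_query=10, n_tasks=100)
test_loader = DataLoader(
    test_set, batch_sampler=sampler, collate_fn=sampler.episodic_collate_fn
)

# Evaluate an (untrained) Prototypical Network on 100 5-way 5-shot tasks.
model = PrototypicalNetworks(resnet12())
accuracy = evaluate(model, test_loader, device="cpu")
print(f"Average accuracy: {100 * accuracy:.2f}%")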

Contribute

This project is very open to contributions! You can help in various ways:

  • raise issues
  • resolve issues already opened
  • tackle new features from the roadmap
  • fix typos, improve code quality

Benchmarks

We used EasyFSL to benchmark a dozen methods. Inference times are computed over 1000 tasks using pre-extracted features. They are only indicative. Note that the inference time for fine-tuning methods highly depends on the number of fine-tuning steps.

All methods' hyperparameters are defined in this JSON file. They were selected on the miniImageNet validation set; the procedure can be reproduced with make hyperparameter-search. We decided to use miniImageNet's hyperparameters for all benchmarks in order to highlight the adaptability of the different methods. Note that all methods use L2 normalization of features, except for FEAT, since it harms its performance.

There are no results for Matching and Relation Networks, as the trained weights for their additional modules are unavailable.

miniImageNet & tieredImageNet

All methods use the same backbone: a custom ResNet12, using the trained parameters provided by the authors of FEAT (download: miniImageNet, tieredImageNet).

Best inductive and best transductive results for each column are shown in bold.

| Method | Ind / Trans | miniImagenet 1-shot | miniImagenet 5-shot | tieredImagenet 1-shot | tieredImagenet 5-shot | Time |
|---|---|---|---|---|---|---|
| ProtoNet | Inductive | 63.6 | 80.4 | 60.2 | 77.4 | 6s |
| SimpleShot | Inductive | 63.6 | **80.5** | 60.2 | 77.4 | 6s |
| MatchingNet | Inductive | - | - | - | - | - |
| RelationNet | Inductive | - | - | - | - | - |
| Finetune | Inductive | 63.3 | **80.5** | 59.8 | **77.5** | 1mn33s |
| FEAT | Inductive | **64.7** | 80.1 | **61.3** | 76.2 | 3s |
| BD-CSPN | Transductive | 69.8 | 82.2 | 66.3 | 79.1 | 7s |
| LaplacianShot | Transductive | 69.8 | 82.3 | 66.2 | 79.2 | 9s |
| PT-MAP | Transductive | **76.1** | **84.2** | **71.7** | **80.7** | 39mn40s |
| TIM | Transductive | 74.3 | **84.2** | 70.7 | **80.7** | 3mn05s |
| Transductive Finetuning | Transductive | 63.0 | 80.6 | 59.1 | 77.5 | 30s |

To reproduce:

  1. Download the miniImageNet and tieredImageNet weights for ResNet12 and save them under data/models/feat_resnet12_mini_imagenet.pth (resp. tiered).
  2. Extract all embeddings from the test sets of all datasets with make extract-all-features-with-resnet12.
  3. Run the evaluation scripts with make benchmark-mini-imagenet (resp. tiered).


easy-few-shot-learning's Issues

ValueError: Sample larger than population or is negative, for a 5-shot 2-way problem

Problem
I am new to FSL and have a simple problem in my scientific domain that I thought I would try as a learning example. I am trying to perform classical training for a 5-shot 2-way problem. When I run the code from the tutorial notebook as-is, after using EasySet to create a custom dataset object, I get the following error when I reach the validation epoch during training:

ValueError: Sample larger than population or is negative

Considered solutions
I've tried changing the batch size and n_workers so far, and neither has worked.

How can we help
I can't figure out what is going wrong here. I am very new to machine learning and would love your help in any way possible!

Finetune: "does not require grad and does not have a grad_fn"

Problem
I am trying to train a backbone using classical training, then use Finetune from the methods to fine-tune the model following episodic_training.ipynb. How should I implement this? I see that episodic_training.ipynb freezes the backbone's parameters, but when I import the pre-trained model for Finetune, it does not work properly.
Another question: what should n_validation_tasks generally be set to? Is there a standard? The setting of this hyperparameter will affect the result. I look forward to your answer.

convolutional_network = resnet50(num_classes=2).to(DEVICE)
convolutional_network.load_state_dict(torch.load('save_model/resnet50.pt'))
few_shot_classifier = Finetune(convolutional_network).to(DEVICE)

Support for single channel image inputs

I was trying to use the RelationNet class with a backbone that takes single-channel inputs. The default compute_backbone_output_shape method in the AbstractMetaLearner class expects a 3-channel image input.

I believe the solution would be to convert this method to an instance method and allow subclasses to override it if needed.

Or am I missing something? Is there an alternate method to achieve this?
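
As a generic, non-EasyFSL-specific sketch of the idea (assuming the method were made overridable), the output shape can be inferred by passing a dummy tensor with the desired number of channels through the backbone:

import torch

def infer_backbone_output_shape(backbone, n_channels=1, image_size=84):
    # Run a dummy image through the backbone to discover its output shape,
    # instead of hard-coding a 3-channel input.
    dummy_input = torch.randn(1, n_channels, image_size, image_size)
    with torch.no_grad():
        output = backbone(dummy_input)
    return output.shape[1:]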

RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

Problem
Hello. This is a nice repo that made it easy for me to understand and implement an FSL model in my project. But I would like to ask how to implement and train the Transductive Fine-tuning model.

Since this model uses classical training (if I understand correctly), I used the classical training tutorial and just replaced PrototypicalNetworks with TransductiveFinetuning as the few_shot_classifier.

But in the training stage, this error shows up:

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
/tmp/ipykernel_16618/1151720753.py in <module>
      6 for epoch in range(n_epochs):
      7     print(f"Epoch {epoch}")
----> 8     average_loss = training_epoch(model, train_loader, train_optimizer)
      9 
     10     if epoch % validation_frequency == validation_frequency - 1:

/tmp/ipykernel_16618/3987028259.py in training_epoch(model_, data_loader, optimizer)
      7 
      8             loss = LOSS_FUNCTION(model_(images.to(DEVICE)), labels.to(DEVICE))
----> 9             loss.backward()
     10             optimizer.step()
     11 #             model_(images.to(DEVICE))

~/.local/lib/python3.7/site-packages/torch/_tensor.py in backward(self, gradient, retain_graph, create_graph, inputs)
    361                 create_graph=create_graph,
    362                 inputs=inputs)
--> 363         torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
    364 
    365     def register_hook(self, hook):

~/.local/lib/python3.7/site-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables, inputs)
    173     Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
    174         tensors, grad_tensors_, retain_graph, create_graph, inputs,
--> 175         allow_unreachable=True, accumulate_grad=True)  # Calls into the C++ engine to run the backward pass
    176 
    177 def grad(

RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

So I commented out the loss.backward() and optimizer.step() lines, since these already happen in the forward function of the Transductive Fine-tuning model, but the next problem is that the training loss does not decrease, and I still don't know why.

Can you tell me how to fix this error, and how to implement Transductive Fine-tuning or other classically-trained models in EasyFSL?

Thanks a lot

RuntimeError: CUDA out of memory. Tried to allocate :) (GPU 0; :) GiB total capacity; :) GiB already allocated; :) MiB free; :) GiB reserved in total by PyTorch)

Hello,

I am running episodic training with a Matching Network and a ViT backbone, and I get a CUDA out of memory error.
My FSL setting is:
N-way: 9
N-shot: 10
N-query: 10
batch size: 2
I am also resizing my dataset's RGB images to 224 by 224.
Could anyone please help me with this issue?
One thing: when I run classical training with a Prototypical Network and an EfficientNet backbone with the same settings, it runs fine.

Thanks in advance

classical training method evaluation concept

Hello, I'm new to few-shot learning and want to make sure I understand classical training. When the backbone, after training, is evaluated with a chosen method on a new set of data, does the method get adjusted or learn from the new data?

KeyError with tensor input in MiniImageNet.__getitem__()

Problem
I'm trying to modify and run the my_first_few_shot_classifier notebook with the miniImageNet dataset.
But I'm not sure about the data (where to get it / how to store it / how to load it).

What is the recommended/correct/required form? I have downloaded various (mini) ImageNet datasets and each is quite different.

(1) All images (60k) are in a single directory (/images) with the naming format:
n0153282900000005.jpg

(2) Test images are in one test directory & train/val are in a directory p/class.

test-2015 / ILSVRC2015 / Data / DET / test
ILSVRC2012_test_00000003.JPEG

train-val-2015 / ILSVRC2015 / Data / DET / train / ILSVRC2013_train / n00007846
n00007846_71814.JPEG

(3) No images, but a Python dict with (I'm guessing) the RGB values of each image.

{'image_data': array([[[[200, 242, 240],
         [204, 235, 249],
         [202, 235, 250],
         ...,
         [210, 222, 238],
         [206, 219, 232],
         [218, 229, 237]],
        [[167, 171, 219],
         [172, 176, 224],
         [175, 179, 227],
         ...,
         [ 68,  74,  74],
         [ 47,  53,  53],
         [ 80,  85,  91]]]], dtype=uint8), 'class_dict': {'n01930112': [0, 1, 2, 3, 4

I got 1 from a Google Drive. I got 2 from the official ImageNet ILSVRC2015 download page. And the last one from Kaggle.

Considered solutions
I will show what I have tried with dataset number 1 (where all images are in a single directory).

base = "\images"
train_set = MiniImageNet(root=base, split="train", training=True) 
test_set = MiniImageNet(root=base, split="test", training=False)

And also tried:

train_set = MiniImageNet(
    split="train",
    root=base,
    transform=transforms.Compose(
        [
            transforms.Grayscale(num_output_channels=3),
            transforms.RandomResizedCrop(image_size),
            transforms.RandomHorizontalFlip(),
            transforms.ToTensor(),
        ]
    ),
)

But in both cases, I get an error.

Traceback (most recent call last):
  File "C:/git_repos/dcu/fsl/easy-few-shot-learning/gary-code/first-fsl-orig.py", line 172, in <module>
    ) = next(iter(test_loader))
  File "C:\git_repos\dcu\fsl\easy-few-shot-learning\venv\lib\site-packages\torch\utils\data\dataloader.py", line 521, in __next__
    data = self._next_data()
  File "C:\git_repos\dcu\fsl\easy-few-shot-learning\venv\lib\site-packages\torch\utils\data\dataloader.py", line 561, in _next_data
    data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
  File "C:\git_repos\dcu\fsl\easy-few-shot-learning\venv\lib\site-packages\torch\utils\data\_utils\fetch.py", line 49, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "C:\git_repos\dcu\fsl\easy-few-shot-learning\venv\lib\site-packages\torch\utils\data\_utils\fetch.py", line 49, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "C:\git_repos\dcu\fsl\easy-few-shot-learning\easyfsl\datasets\mini_imagenet.py", line 110, in __getitem__
    Image.open(self.data_df.image_path[item]).convert("RGB")
  File "C:\git_repos\dcu\fsl\easy-few-shot-learning\venv\lib\site-packages\pandas\core\series.py", line 882, in __getitem__
    return self._get_value(key)
  File "C:\git_repos\dcu\fsl\easy-few-shot-learning\venv\lib\site-packages\pandas\core\series.py", line 990, in _get_value
    loc = self.index.get_loc(label)
  File "C:\git_repos\dcu\fsl\easy-few-shot-learning\venv\lib\site-packages\pandas\core\indexes\range.py", line 358, in get_loc
    raise KeyError(key)
KeyError: tensor(9552)

Process finished with exit code 1

It looks like there are no labels assigned to the images.

How can we help
So the 2 questions are:
(1) What is the correct way to store the data ?
(2) How to load the data ?

Any help will be much appreciated. An example of this notebook running on mini imagenet would be most helpful.

Thanks,

Gary

EasyFSL on an image classification problem with my own dataset

I am a complete beginner in python coding and I am tasked with an image classification problem.
I have a dataset of 25000 images with 2 classes (50-50). 1 class being photos of corrosion and the other with photos of non-corrosion.

How do I change the code in my_first_few_shot_classifier.ipynb so that I can load my own dataset from my local drive and split it into training and test sets?

Thank you!

Can I use a different backbone for classical or episodic learning?

Hi,

I am using your classical and episodic training notebooks, and they are very helpful for my project, but I want to try different backbones like EfficientNet. I am new to this, so do you have any idea whether I can use a different backbone than ResNet? If yes, what changes will I have to make in the code?

Thanks in advance
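
A possible sketch (not an official recipe): most EasyFSL methods expect the backbone to output a flat feature vector, so one way is to take a torchvision model and replace its classification head with an identity:

import torch.nn as nn
from torchvision.models import efficientnet_b0

from easyfsl.methods import PrototypicalNetworks

backbone = efficientnet_b0(weights="IMAGENET1K_V1")
backbone.classifier = nn.Identity()  # output raw 1280-d features instead of logits

few_shot_classifier = PrototypicalNetworks(backbone)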

Any way to plot out a confusion matrix after testing/evaluation?

Hi, I am quite new to the topic of few-shot learning and would like to thank you for providing this insightful tutorial. I managed to run your code successfully with some slight tweaks to the dataset I use, which is Mini-ImageNet.

I wanted to plot a confusion matrix using PyTorch, to produce something similar to this:
Youtube link
Blog link

Do you think it is possible to carry this out in your tutorials on Jupyter Notebook? I have trouble following their methods and implementing them in classical_training.ipynb and my_first_few_shot_classifier.ipynb.

The reason I wanted to plot a confusion matrix is that I think it can help me explain the results a bit better.
I want to thank you for your contributions again, as I felt they really helped me a lot.
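
One possible approach (a sketch, not part of EasyFSL): accumulate predictions and query labels over the evaluation tasks, then build the matrix with scikit-learn. Here few_shot_classifier and test_loader are assumed to be set up as in the notebooks; note that labels are local to each task (0 to n_way - 1), so the matrix aggregates task-local classes unless you map them back through the returned class IDs:

import torch
from sklearn.metrics import confusion_matrix

all_predictions = []
all_labels = []

few_shot_classifier.eval()
with torch.no_grad():
    for support_images, support_labels, query_images, query_labels, _ in test_loader:
        few_shot_classifier.process_support_set(support_images, support_labels)
        predictions = few_shot_classifier(query_images).argmax(dim=1)
        all_predictions.append(predictions.cpu())
        all_labels.append(query_labels.cpu())

print(confusion_matrix(torch.cat(all_labels), torch.cat(all_predictions)))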

Training with custom dataset

Hi, thanks for your code, it helps me a lot.
But it also posed some problems for a newbie like me.
Although I've made the code run successfully now, I also made a lot of compromises to work around some errors.
I combined the code from classical_training.ipynb and my_first_few_shot_classifier.ipynb.

I will post all my code step by step and point out the problems I met.
I am running Windows 10.
The environment was created with Anaconda.
CUDA 10.2, cuDNN 7.0, PyTorch 1.10.1

Finally, great thanks for your code again.
Let's discuss this together.

Custom Dataset for EasyFSL

Apologies for the beginner question, but could you please give me some guidance on how to create a custom dataset for EasyFSL?

I know there are various labeling tools, different output file formats, and many examples of PyTorch custom datasets. Is there a particular labeling tool, file format, and dataset creation code that would be required here?
I've used Label Studio before and exported in a JSON format, and I've seen other folks use Labelbox and the Darknet format (for YOLOv5). Your example code starts with an existing dataset, so I want to make sure I create mine in a compatible way.

Thank you for any help you can provide.

N_QUERY

  1. Does the number of query images have to be equal for each class? Can I use a random number of images per class?
  2. Can epoch-by-epoch training still be used in few-shot learning?

How to run inference on CPU? How to load a support set?

Hey @gabrielsicara, I trained a model using a GPU on Google Colab but was unable to run inference on the CPU. How do I run inference for a single input image on CPU? For instance, how do I define the support set that the model will use for processing? I'm trying to explore these on my own and will definitely open a pull request, but if you can answer them it would be helpful.
I've tried different ways of loading as well. It works fine on GPU.

I kind of always get stuck at the same point (screenshots omitted).

Wrong link in README.md

Describe the bug
This is a very small typo, but in README.md, if you click on "Example of classical training", it goes to the episodic notebook.

To Reproduce
Steps to reproduce the behavior:

  1. Click "Example of classical training"

Not sure if I should open a pull request or raise an issue when I find small typos. Let me know :)

How to get a prediction for a custom dataset?

Hello, thank you very much for your amazing work, it's very helpful.

I have one question about getting predictions on a custom dataset. Basically, I am using EasySet for my custom dataset with the classical training notebook, and I also want to see the prediction/classification, for example which class my test image belongs to.
I hope my question is clear.
Thanks in advance

How to train on custom data

Hi, thank you for your great work and for sharing it with everyone.

I want to implement few-shot learning for a task where I have collected a few samples (10) each for both a positive and a negative class. How do I train the model on these novel classes using my custom dataset?

Thank you for your help
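
A sketch of how EasySet (described above) can point at a custom class-wise directory split, with hypothetical paths; the JSON spec format is assumed from the EasySet documentation:

from easyfsl.datasets import EasySet

# Hypothetical spec files: EasySet expects a JSON file describing, for each
# split, the class names and the directories where their images live.
train_set = EasySet(specs_file="data/my_dataset/train.json", training=True)
test_set = EasySet(specs_file="data/my_dataset/test.json", training=False)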

How to make predictions on the test set?

Hello,

Is there a way to make predictions on the test set in the evaluation phase? In your episodic training and classical training notebooks you calculate the accuracy, but I also want to predict the class of each test image. Can you please suggest something regarding this?
Thanks in advance

Probabilities of a novel image belonging to a Class

    I created an example that might help you.

import torchvision.transforms as tt
import torch
from torchvision.datasets import ImageFolder
from easyfsl.methods import FewShotClassifier
from torch.utils.data import DataLoader

class FewShotPredictor:
    """

        This class aims to implement a predictor for a Few-shot classifier.

        The few shot classifiers need a support set that will be used for calculating the distance between the support set and the query image.

        To load the support we have used an ImageFolder Dataset, which needs to have the following structure:

        folder:
          |_ class_name_folder_1:
                 |_ image_1
                 |_  …
                 |_ image_n
          |_ class_name_folder_2:
                 |_ image_1
                 |_  …
                 |_ image_n

        The folder must contain the same number of images per class, being the total images (n_way * n_shot).

        There must be n_way folders with n_shot images per folder.

    """

    def __init__(self ,
                 classifier: FewShotClassifier,
                 device,
                 path_to_support_images,
                 n_way,
                 n_shot,
                 input_size=224):

        """
            :param classifier: created and loaded model
            :param device: device to be executed
            :param path_to_support_images: path to creating a support set
            :param n_way: number of classes
            :param n_shot: number of images on each class
            :param input_size: size of image

        """
        self.classifier = classifier
        self.device = device

        self.predict_transformation = tt.Compose([
            tt.Resize((input_size, input_size)),
            tt.ToTensor()
        ])

        self.test_ds = ImageFolder(path_to_support_images, self.predict_transformation)

        self.val_loader = DataLoader(
            self.test_ds,
            batch_size= (n_way*n_shot),
            num_workers=1,
            pin_memory=True
        )

        self.support_images, self.support_labels = next(iter(self.val_loader))

    def predict(self, tensor_normalized_image):
        """

        :param tensor_normalized_image:
        Example of normalized image:

            pil_img = PIL.Image.open(img_dir)

            torch_img = transforms.Compose([
                transforms.Resize((224, 224)),
                transforms.ToTensor()
            ])(pil_img)

            tensor_normalized_image = tt.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])(torch_img)[None]


        :return:

        Return

        predict = tensor with prediction (mean distance of query image and support set)
        torch_max [1] = predicted class index

        """

        with torch.no_grad():
            self.classifier.eval()
            self.classifier.to(self.device)
            self.classifier.process_support_set(
                self.support_images.to(self.device),
                self.support_labels.to(self.device),
            )
            pre_predict = self.classifier(tensor_normalized_image.to(self.device))
            predict = pre_predict.detach().data
            torch_max = torch.max(predict, 1)
            class_name = self.test_ds.classes[torch_max[1].item()]
            return predict, torch_max[1], class_name

#49

Originally posted by @diego91964 in #17 (comment)

Good morning guys, and many thanks for the awesome and very helpful code and the effort put into it.
I have a question regarding novel image class prediction:
Is there a way to calculate, as in 'classical' classification, the percentage/probability of a novel image belonging to each class?
Do you believe a softmax applied to the predict tensor returned by 'return predict, torch_max[1], class_name' would be meaningful?
Thanks in advance
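
For what it's worth, a sketch of that idea: applying a softmax to the returned score tensor yields relative confidences over the support classes (not calibrated probabilities). Here, predictor is assumed to be the FewShotPredictor from the snippet above:

import torch

predict, class_index, class_name = predictor.predict(tensor_normalized_image)

# Softmax over the class scores gives a pseudo-probability per support class.
probabilities = torch.softmax(predict, dim=1)
print({name: round(p.item(), 3) for name, p in zip(predictor.test_ds.classes, probabilities[0])})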

Question on meta-training in the tutorial notebook

Hi, thanks for making such a simple and beautiful library for Few-Shot Learning. I have a query: when we run the training cell from your notebook for the meta-learning model, does it also train the ResNet18 backbone on the given dataset to generate better feature representations (as in transfer learning, where we train a classifier on a custom dataset starting from ImageNet pre-trained parameters), or does it only train the Prototypical Network?

Please clarify this doubt.
Thanks again.

two questions about code

Hello! First of all, thank you very much for your code! I have a few questions for you.

The first question is: I want my model to accurately classify my own pictures. Must the test set and training set be mutually exclusive? I ask because I have few categories, only 6.

The second question is: how can I change your code so that I can run it myself?

Thank you sincerely!

Epochs Vs Episodes

Hi,
I've been using your library for my project and I am confused on two of your notebooks.

In [1] you mention that "To train the model, we are just going to iterate over a large number of randomly generated few-shot classification tasks, and let the fit method update our model after each task. This is called episodic training."

However, in [2], you mention that you are also using episodic training, but there you add the concept of epochs, encapsulating several episodes in each one. Is it because you now use a validation set that you did not use in [1], or are epochs in [1] implicit because of the 40 000-episode default value?

I must say that I am quite confused by this. Do I have a misunderstanding somewhere?
Thank you for clarifying this for me.

[1] https://github.com/sicara/easy-few-shot-learning/blob/master/notebooks/my_first_few_shot_classifier.ipynb
[2] https://github.com/sicara/easy-few-shot-learning/blob/master/notebooks/episodic_training.ipynb

Implementation of TADAM: Task dependent adaptive metric for improved few-shot learning

Problem
What do you want to do? What is blocking you?

I am reading an article and would like to discuss the possibility of implementing it in the prototypical network (https://arxiv.org/abs/1805.10123).

It would require the creation of a block to represent the TEN layer; however, the resnet12 implementation uses a library and would have to be replaced by a custom implementation.

What do you think about this article's proposal for improving the calculation of the loss function?

Is there any timeline on MAML?

Hi,

I really love your repo and implementation. I just want to check whether there is any timeline for when the MAML-based code will be out?

Specify tensor shapes in the docstring

Thank you for sharing. I trained the few-shot algorithm you provided on a dataset of my own. However, how do I run inference on a single image? Note that my train and test datasets contain the same classes, but different images.

Custom data

I want to train this model on custom data, but I did not understand the split for CUB, and I could not find documentation on EasySet. Do you know where it is?
By the way, I just have 2 classes in my data.

Adding more backbones

Hi @ebennequin, thanks for this elegant code base. Some questions (this can be a feature request):

  1. Can we add new backbones (ViT, DenseNet, ConvNeXt, etc.)?
  2. Are there plans for building functionalities for model deployment?

For custom datasets, how to divide the class?

Hi. Thank you for your great work and sharing it with everyone.

I have a question.
For custom datasets, how should I divide the classes (train, val, test)? Randomly select some classes as the training set, or something else?
Do you have any tricks?

FSL object detection

It would be great if there were a tutorial for few-shot object detection as well.

Prototypes are computed in train mode

Hello, I have a question.
Is it correct that the backbone should adjust the batch norm statistics when processing the prototypes (see https://arxiv.org/abs/1603.04779)? I get significant differences when I set the backbone to eval() in the process_support_set() method before computing the prototypes, like:

def process_support_set(
    self,
    support_images: torch.Tensor,
    support_labels: torch.Tensor,
):

    self.backbone.eval()  # <- I've added this line; the rest is unchanged
    support_features = self.backbone.forward(support_images)
    self.prototypes = compute_prototypes(support_features, support_labels)

This way the backbone won't update the BN statistics. However, I don't know if this is intended, and some clarification would be really helpful.

Thanks in advance

Acc result

What are the 1-shot and 5-shot accuracies when training on mini-ImageNet?

How to freeze some layers of the network?

Hi,

I want to try out a few things with freezing and unfreezing some network layers, but I am not sure exactly how I can do that with the few-shot dual architecture.

Thanks in advance
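
A minimal sketch of the standard PyTorch way to do this, assuming few_shot_classifier was built as in the notebooks (here only the backbone is frozen):

import torch

# Freeze every backbone parameter so only the rest of the model trains.
for param in few_shot_classifier.backbone.parameters():
    param.requires_grad = False

# Only pass the still-trainable parameters to the optimizer.
optimizer = torch.optim.Adam(
    (p for p in few_shot_classifier.parameters() if p.requires_grad), lr=1e-4
)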

How to build my own train_set with my own data

Problem
Thanks for your sharing about FSL. There is one problem:
when I finished the tutorial 'Discovering Prototypical Networks', I wanted to use my own photo data to build a test_set. How can I do that, and how should I structure my data?

Precomputing support vectors once

I have a suggestion for improving your model API. I would take the support images as a parameter only in the constructor, not in forward(), and precompute their feature vectors once and for all at model creation time.
Then the forward method would take only query images as input.
This allows multiple forward calls without recomputing the support feature vectors each time.
It is also more intuitive, because forward usually takes as input only the elements for which we want a prediction (the query images here).
You could optionally add methods to update the support images (to add or remove some, for example).
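
For reference, a sketch of the suggested pattern built on top of the existing API (process_support_set already plays the precomputation role; this hypothetical wrapper just moves it to construction time):

import torch.nn as nn

class QueryOnlyClassifier(nn.Module):
    """Wrap a FewShotClassifier so support features are computed once
    at construction time and forward() takes only query images."""

    def __init__(self, classifier, support_images, support_labels):
        super().__init__()
        self.classifier = classifier
        self.classifier.process_support_set(support_images, support_labels)

    def forward(self, query_images):
        return self.classifier(query_images)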

Papers on this project

Hello, author.
Can you tell me the corresponding paper for this source code?
Thank you

AttributeError: 'SupportSetFolder' object has no attribute 'to'

Hi Diego, thank you so much for contributing to EasyFSL! This topic seems to have raised a lot of concern among users and it's very nice to have some help 😄

I agree it would be good to have a tool to easily feed a support set to FewShotClassifier.process_support_set() from a root_dir structure. It's a good idea to use ImageFolder for this.

However, I'm not sure it is relevant to integrate it into a novel FewShotPredictor class. I'm concerned that this would cause some confusion with FewShotClassifier, and I think we can find a lighter solution.

I'm thinking this could be achieved with a class SupportSetFolder that would extend ImageFolder and implement two methods, get_images() -> Tensor and get_labels() -> Tensor, that would respectively return the images and labels on the specified device.

Then the user would only have to do this:

device = "cuda"

support_set = SupportSetFolder(root="path/to/root", transform=my_inference_transform, device=device)

with torch.no_grad():
    my_classifier.eval()
    my_classifier.process_support_set(support_set.get_images(), support_set.get_labels())
    predictions = my_classifier(query_images.to(device))

What do you think?

Hi,
I was looking at the same question, and I tried your example above in my use case, but I get an error:
AttributeError: 'SupportSetFolder' object has no attribute 'to'

The following is the code I am using:

DEVICE = "cuda"

support_set = SupportSetFolder(root='C:/Users/wn00204104/PycharmProjects/Data-Preparation/easy-few-shot-learning/data/test_set', transform=transform, device=DEVICE)
with torch.no_grad():
    few_shot_classifier.eval()
    few_shot_classifier.process_support_set(support_set.get_images(), support_set.get_labels())
    predictions = few_shot_classifier(query_images.to(DEVICE)).argmax(dim=1)

Thanks in advance

Originally posted by @shraddha291996 in #49 (comment)

How to Use My First Few Shot Classifier with a Custom Dataset

Problem
When combining the "My First Few Shot Classifier" with a custom dataset from the "Easy Set", I got this error:

Exception has occurred: PicklingError
Can't pickle <function <lambda> at 0x13d8f6e50>: attribute lookup <lambda> on __main__ failed
  File "/Users/yannusinovich/Documents/ONX_AI/machine-learning/recommender/issue_logs/few_shot_learning_easyfsl.py", line 279, in <module>
    ) = next(iter(test_loader))

Considered solutions

  1. I created a sample dataset with 4 classes and 4 images per class, and I placed them in folders, as described in "Easy Set", along with a "config.json" defining the classes and where to find them.
  2. I changed the references to test_set in "My First Few Shot Classifier" to dataset since test_set wasn't defined.
  3. When I ran the code, I got the error 'EasySet' object has no attribute '_flat_character_images'.
  4. I defined self._flat_images_and_labels: List[Tuple[str, int]] = list(zip(self.images, self.labels)) in the __init__ of the EasySet class, and I changed the sampler code to:
N_WAY = 4  # Number of classes in a task
N_SHOT = 4  # Number of images per class in the support set
N_QUERY = 2  # Number of images per class in the query set
N_EVALUATION_TASKS = 10

dataset.get_labels = lambda: [
    instance[1] for instance in dataset._flat_images_and_labels
]

The value of self._flat_images_and_labels for EasySet looks exactly the same as Omniglot's _flat_character_images, but it runs into the error above at this step:

(
    example_support_images,
    example_support_labels,
    example_query_images,
    example_query_labels,
    example_class_ids,
) = next(iter(test_loader))

How can we help
Could you please provide me some guidance on how to use the "My First Few Shot Classifier" notebook with a custom dataset from the "Easy Set"?
