
saliency-sampler's Introduction

Saliency-Sampler

This is the official PyTorch implementation of the paper Learning to Zoom: a Saliency-Based Sampling Layer for Neural Networks by Adria Recasens, Petr Kellnhofer, Simon Stent, Wojciech Matusik and Antonio Torralba (ECCV 2018).

The paper presents a saliency-based distortion layer for convolutional neural networks that helps to improve the spatial sampling of input data for a given task. For instance, for eye tracking and fine-grained classification, the layer produces deformed images that magnify the most salient regions of the input.

Requirements

The implementation has been tested with PyTorch 0.4.0, but it is likely to work with earlier versions of PyTorch as well.

Usage

To add a Saliency Sampler layer at the beginning of your model, you just need to define a task network and a saliency network and instantiate the model as:

task_network = resnet101()
saliency_network = saliency_network_resnet18()
task_input_size = 224
saliency_input_size = 224
model = Saliency_Sampler(task_network, saliency_network, task_input_size, saliency_input_size)

For the reader's reference, main.py provides an example of a ResNet-101 network trained on the ImageNet dataset. Since ImageNet images are not particularly high-resolution, the saliency sampler improves the task network's performance only marginally. However, on datasets with higher-resolution images (such as iNaturalist, GazeCapture and many others), the Saliency Sampler significantly boosts the performance of the task network.

Citation

If you want to cite our research, please use:

@inproceedings{recasens2018learning,
  title={Learning to Zoom: a Saliency-Based Sampling Layer for Neural Networks},
  author={Recasens, Adria and Kellnhofer, Petr and Stent, Simon and Matusik, Wojciech and Torralba, Antonio},
  booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
  pages={51--66},
  year={2018}
}


saliency-sampler's Issues

Is this current?

Hi,

This looks very interesting. A very logical approach. I am hoping to use it with high resolution medical images. Since the code is 4 years old, I was wondering if you (or other people) have come up with improvements to this approach. Would you still recommend this or something else?

Thanks so much.

Is p just to control the pretraining blur?

As far as I can tell, this could all be:

add_pretraining_blur = epoch <= N_pretraining
...
output, image_output, hm = model(input_var, add_pretraining_blur)
...
if add_pretraining_blur:

We found it helpful to blur the resampled input image of the task
network for some epochs at the beginning of the training procedure. It forces
the saliency sampler to zoom deeper into the image in order to further magnify
small details otherwise destroyed by the consequent blur. This is beneficial even
for the final performance of the model with the blur removed.

Saliency-Sampler/main.py

Lines 193 to 196 in 0557add

if epoch > N_pretraining:
    p = 1
else:
    p = 0

output, image_output, hm = model(input_var, p)

if random.random() > p:
    s = random.randint(64, 224)
    x_sampled = nn.AdaptiveAvgPool2d((s, s))(x_sampled)
    x_sampled = nn.Upsample(size=(self.input_size_net, self.input_size_net), mode='bilinear')(x_sampled)
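Reading the two excerpts together: p is 0 for the first N_pretraining epochs and 1 afterwards, and the blur branch fires when random.random() > p, so p == 0 means the blur is (almost) always applied and p == 1 means it never is. A minimal sketch of that gating, written as the boolean flag the issue proposes (the function name is illustrative, not from the repository):

```python
import random

def use_pretraining_blur(epoch, n_pretraining):
    # Mirrors the gating in main.py / Saliency_Sampler: p is 0 during the
    # first n_pretraining epochs (blur on) and 1 afterwards (blur off).
    p = 1 if epoch > n_pretraining else 0
    # random.random() returns a float in [0.0, 1.0), so for p in {0, 1} the
    # comparison is effectively deterministic: True for p == 0, False for p == 1.
    return random.random() > p
```

So, as far as the released code shows, p exists only to schedule the pretraining blur, and a boolean flag would express the same schedule more directly.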

What are the training details for CUB-200?

I find that the training loss does not decrease on CUB-200.
Could you share the training details, such as the learning rates for the task network and saliency network, the pretraining dataset, the number of epochs, and so on?

Help understanding create_grid

Hi again – I've been trying to grok the math behind create_grid, and am fairly confused. I've fiddled with padding_size, grid_size, the kernel_size formula, and fwhm in various combinations, but nothing seems to result in deterministic warping.

I'm doing some simple tests on 128x128 transformed MNIST data, where I'm attempting to utilize the saliency map of a pretrained network.

[Images: the input, warped output, and saliency map, which I can get to warp with specific seeds after 10 or so iterations.]

Essentially, my goal is to isolate some function with the signature
warp_sample(hires_source, lowres_heatmap, warp_factor) -> lowres_warped
that spreads resolution toward the most salient regions. I understand that was also one of the paper's goals.

Anyhow, any advice or comments on how to interpret the math would be greatly appreciated.
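For readers puzzling over the same math: the paper computes each warped coordinate as a saliency-weighted average of nearby source coordinates, using a Gaussian distance kernel, so high-saliency regions attract sampling locations. A minimal NumPy sketch of that idea (the function name, normalization, and default sigma are assumptions for illustration, not the repository's create_grid, which also handles padding and uses convolutions for speed):

```python
import numpy as np

def saliency_warp_grid(saliency, sigma=0.3):
    """For each grid point (i, j), compute warped source coordinates as
        u(i, j) = sum(S * k * x) / sum(S * k)   (and likewise v with y),
    where k is a Gaussian kernel centered at (i, j)."""
    gs = saliency.shape[0]
    ys, xs = np.mgrid[0:gs, 0:gs] / (gs - 1)  # normalized coordinates in [0, 1]
    u = np.empty((gs, gs))
    v = np.empty((gs, gs))
    for i in range(gs):
        for j in range(gs):
            # Gaussian distance kernel centered at the current grid point.
            d2 = (xs - xs[i, j]) ** 2 + (ys - ys[i, j]) ** 2
            k = np.exp(-d2 / (2.0 * sigma ** 2))
            w = saliency * k
            u[i, j] = (w * xs).sum() / w.sum()
            v[i, j] = (w * ys).sum() / w.sum()
    return u, v  # sample the source image at (u, v) to get the warped output
```

High-saliency regions pull grid coordinates toward themselves, so the warped image allocates more pixels there; with a uniform saliency map the pulls cancel (up to boundary effects, which the paper mitigates with padding) and the warp stays close to the identity.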

sigma for the Gaussian Kernel

Hi, I have a short question.
The paper says you use a Gaussian kernel with sigma set to one third of the width of the saliency map. Does the saliency map have the same size as the grid_size defined in the Saliency_Sampler class, or should I check the output size of the saliency network to set the sigma value?

Thank you in advance!
