
cite's Introduction

Conditional Image-Text Embedding Networks

cite contains a TensorFlow implementation of our paper. If you find this code useful in your research, please consider citing:

@inproceedings{plummerCITE2018,
Author = {Bryan A. Plummer and Paige Kordas and M. Hadi Kiapour and Shuai Zheng and Robinson Piramuthu and Svetlana Lazebnik},
Title = {Conditional Image-Text Embedding Networks},
Booktitle  = {The European Conference on Computer Vision (ECCV)},
Year = {2018}
}

This code was tested on an Ubuntu 16.04 system using TensorFlow 1.2.1.

We recommend using the data/cache_cite_features.sh script from the phrase detection repository to obtain the precomputed features to use with our model. These obtain better performance than reported in our original paper (see the phrase detection paper), i.e. about 72/54 localization accuracy on Flickr30K Entities and ReferIt, respectively. You can also find an explanation of the dataset format in the data_processing_example.
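As a quick sanity check on the cached features before training, the following is a minimal sketch using h5py to list what a feature file contains. The filename is only a guess based on the ***_imfeats.h5 naming used elsewhere on this page; the actual filenames and dataset keys written by data/cache_cite_features.sh may differ.

import h5py

# Hypothetical filename; the cache written by data/cache_cite_features.sh
# may use different filenames and dataset keys.
with h5py.File('flickr30k_train_imfeats.h5', 'r') as f:
    def show(name, obj):
        # Print every dataset's name, shape, and dtype.
        if isinstance(obj, h5py.Dataset):
            print(name, obj.shape, obj.dtype)
    f.visititems(show)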

You can also find the precomputed HGLMM features used in our work here.

Training New Models

Our code contains everything required to train or test models using precomputed features. You can train a model using:

python main.py --name <name of experiment>

When training completes, the code will report the localization accuracy of the best model on the testing and validation sets. Note that the command above does not use the spatial features from our paper (add the --spatial flag for that; see the example after the help command below). You can see a listing and description of the many tunable parameters with:

python main.py --help
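For example, to train a model that does use the spatial features from the paper, the command would look something like the following (the experiment name is just a placeholder):

python main.py --name spatial_experiment --spatial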

Phrase Localization Evaluation

When testing a model, you need to use the same settings that were used during training. For example, after training with spatial features, you would test using:

python main.py --test --spatial --resume runs/<experiment_name>/model_best

Many thanks to Kevin Shih and Liwei Wang for providing access to their implementation of the Similarity Network, which was used as the basis for this repo.

cite's People

Contributors

bryanplummer


cite's Issues

Hello, I cannot match each phrase with its coarse category.

Hello:
I am a master's student trying to build on this excellent work. I failed to use your plc-c code to compute the HGLMM features; I spent several days trying to fix it, but it still says that some of the words do not exist in the vocabulary. So I decided to use your uploaded text features for my experiments. However, the ***_imfeats.h5 files contain no fields like phrase_id or phrase_type, so I really do not know how to match the phrases to their coarse categories such as "people" or "scene". Could you please tell me how you do it?

Thank you very much for your precious time!
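For readers of this thread, one way the coarse categories can be recovered is sketched below, under the assumption that you have the standard Flickr30K Entities sentence annotations available (the bracketed entity format is that dataset's published convention, not something shipped with this repo, and the path is hypothetical). Each annotated sentence marks phrases as chunks like [/EN#283585/people Two young guys], where the part after the entity id gives the coarse type(s).

import re
from collections import defaultdict

# Hypothetical path; point this at the Sentences/ directory of the
# Flickr30K Entities annotations (not distributed with this repo).
SENTENCE_FILE = 'Sentences/1000092795.txt'

# Entity chunks look like: [/EN#283585/people Two young guys]
CHUNK_RE = re.compile(r'\[/EN#(\d+)/(\S+) ([^\]]*)\]')

phrase_types = defaultdict(set)
with open(SENTENCE_FILE) as f:
    for line in f:
        for entity_id, types, phrase in CHUNK_RE.findall(line):
            # An entity can carry several coarse types separated by '/'.
            for t in types.split('/'):
                phrase_types[phrase.lower()].add(t)

for phrase, types in phrase_types.items():
    print(phrase, sorted(types))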

Could you please upload the "fastrcnn_feat.prototxt" file?

Hello, I am very sorry to bother you. I would like to run your code to extract the visual features, but I cannot find a place to download the "fastrcnn_feat.prototxt" file. Could you please upload this file, or tell me which layer (fc7 or fc7 relu) you used in your CITE paper? Thank you very much.

The data organization

Hi, thank you for such great work! I feel a little confused about the training data used in your code. The data organization you mentioned in https://github.com/BryanPlummer/cite/tree/master/data_processing_example
is in h5 form, right? I don't understand the meaning of <pair identifier> in data['pair'] in the h5 file. I guess the latter element of the pair indicates whether the phrase is a ground-truth phrase for the image, because your code says the augmented phrases can be used for training, but what is the meaning of the first element of the pair? Besides, when you count the ground-truth phrases of an image, it looks wrong in your code (see the attached screenshot): you count the number of ground-truth phrases before putting the current gt phrase into the list. By the way, how did you generate the augmented phrases? Can you explain a little about that? Are the results in your paper trained with these augmented phrases?

Hello, could you please tell me how you extract the visual features?

Hello, I extracted visual features from the RPN proposals using Faster R-CNN. I would like to know whether you used an ROI pooling layer while extracting the features. Did you map the boxes onto pool5, pass them through an ROI pooling layer to get a 7x7 feature, and then extract fc7? I did that, but it does not work: the accuracy is only 33%, so I think the way I extracted the features is wrong. Thank you.
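For anyone comparing notes on this thread, here is a minimal sketch of the box-feature extraction step being discussed, using TensorFlow's crop_and_resize as a stand-in for ROI pooling. The shapes, crop size, and the idea of reusing pretrained fc6/fc7 afterwards are illustrative assumptions, not the repo's actual extraction pipeline.

import numpy as np
import tensorflow as tf

# Hypothetical conv feature map for a single image (e.g. a conv5/pool5-style
# output with 512 channels); the shapes here are purely illustrative.
conv_features = tf.constant(np.random.randn(1, 38, 50, 512).astype(np.float32))

# Region boxes in normalized [y1, x1, y2, x2] image coordinates.
boxes = tf.constant([[0.10, 0.20, 0.60, 0.70],
                     [0.00, 0.00, 1.00, 1.00]], dtype=tf.float32)
box_indices = tf.zeros([2], dtype=tf.int32)  # both boxes belong to image 0

# Crop each box from the feature map and resize it to 7x7, which is roughly
# what an ROI pooling/align layer does before Fast R-CNN's fc6/fc7 layers.
roi_feats = tf.image.crop_and_resize(conv_features, boxes, box_indices, [7, 7])
# roi_feats: [num_boxes, 7, 7, 512]; flattening and passing through the
# pretrained fc6/fc7 layers would then give per-box fc7 features.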
