CLIP4CirDemo

CLIP for Conditioned image retrieval Demo

Live Demo available here

Training code available at Repo


About The Project

This is the official repository of the paper Effective conditioned and composed image retrieval combining CLIP-based features accepted for the Demo Track at CVPR 2022.

If you are interested in conditioned and composed image retrieval, take a look at our follow-up work, Conditioned and composed image retrieval combining and partially fine-tuning CLIP-based features, accepted at the CVPR 2022 workshop O-DRUM.

Conditioned and composed image retrieval extend CBIR systems by combining a query image with an additional text that expresses the user's intent, describing additional requests with respect to the visual content of the query image. This type of search is interesting for e-commerce applications, e.g. to develop interactive multimodal search engines and chatbots.

In this demo, we present an interactive system based on a combiner network, trained using contrastive learning, that combines visual and textual features obtained from the OpenAI CLIP network to address conditioned CBIR. The system can be used to improve e-shop search engines. For example, in the fashion domain it lets users search for dresses, shirts and toptees using a candidate start image and expressing some visual differences with respect to its content, e.g. asking for a change of color, pattern or shape.
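
As a rough sketch of the idea (illustrative only; the actual Combiner architecture used by the demo is defined in model.py), the network fuses the two CLIP embeddings into a single retrieval feature:

    # Illustrative sketch of a feature combiner; the real architecture lives in model.py.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CombinerSketch(nn.Module):
        def __init__(self, clip_feature_dim: int, hidden_dim: int):
            super().__init__()
            # Fuse the concatenated image and text features into a single embedding
            self.fuse = nn.Sequential(
                nn.Linear(2 * clip_feature_dim, hidden_dim),
                nn.ReLU(),
                nn.Linear(hidden_dim, clip_feature_dim),
            )

        def forward(self, image_features, text_features):
            combined = self.fuse(torch.cat([image_features, text_features], dim=-1))
            # L2-normalize so retrieval can rank candidates by cosine similarity
            return F.normalize(combined, dim=-1)

At query time, the combined feature is compared against the pre-extracted CLIP features of the index images to rank the results.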

The proposed network obtains state-of-the-art performance on the FashionIQ dataset and on the more recent CIRR dataset, showing its applicability to the fashion domain for conditioned retrieval and, on more generic content, to the broader task of composed image retrieval.

Built With

  • Python
  • PyTorch
  • OpenAI CLIP
  • Flask

Getting Started

To get a local copy up and running, follow these simple steps.

Prerequisites

We strongly recommend using the Anaconda package manager in order to avoid dependency/reproducibility problems. A conda installation guide for Linux systems can be found here.

Installation

  1. Clone the repo

     git clone https://github.com/ABaldrati/CLIP4CirDemo

  2. Install Python dependencies

     conda create -n clip4cir -y python=3.8
     conda activate clip4cir
     conda install -y -c pytorch pytorch=1.7.1 torchvision=0.8.2
     pip install flask==2.0.2
     pip install git+https://github.com/openai/CLIP.git

  3. Download the FashionIQ and CIRR datasets
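
As an optional sanity check (not part of the official instructions), you can verify that the environment was set up correctly:

    # Optional sanity check for the freshly created conda environment
    import torch, torchvision, clip
    print(torch.__version__, torchvision.__version__)  # expected: 1.7.1 and 0.8.2
    print(clip.available_models())                      # lists the CLIP backbones that can be downloaded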

Usage

Here's a brief description of each file and folder in the repo:

  • utils.py: Utils file
  • model.py: Combiner model definition file
  • data_utils.py: Dataset loading and preprocessing utils file
  • extract_features.py: Feature extraction file
  • hubconf.py: Torch Hub config file
  • app.py: Flask server file
  • static: Flask static files folder
  • templates: Flask templates folder
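
Since the repo ships a hubconf.py, models can in principle be loaded through Torch Hub. The entry-point name below is only a placeholder; check hubconf.py for the names that are actually exposed:

    # Placeholder example; 'combiner' is NOT guaranteed to be a real entry point --
    # inspect hubconf.py for the entry points this repo actually defines.
    import torch
    print(torch.hub.list('ABaldrati/CLIP4CirDemo'))           # lists the available entry points
    model = torch.hub.load('ABaldrati/CLIP4CirDemo', 'combiner')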

Data Preparation

To work properly with the codebase, the FashionIQ and CIRR datasets should have the following structure:

project_base_path
└───  fashionIQ_dataset
      └─── captions
            | cap.dress.test.json
            | cap.dress.train.json
            | cap.dress.val.json
            | ...
            
      └───  images
            | B00006M009.jpg
            | B00006M00B.jpg
            | B00006M6IH.jpg
            | ...
            
      └─── image_splits
            | split.dress.test.json
            | split.dress.train.json
            | split.dress.val.json
            | ...

└───  cirr_dataset       
       └─── dev
            | dev-0-0-img0.png
            | dev-0-0-img1.png
            | dev-0-1-img0.png
            | ...
       
       └─── test1
            | test1-0-0-img0.png
            | test1-0-0-img1.png
            | test1-0-1-img0.png 
            | ...
       
       └─── cirr
            └─── captions
                | cap.rc2.test1.json
                | cap.rc2.train.json
                | cap.rc2.val.json
                
            └─── image_splits
                | split.rc2.test1.json
                | split.rc2.train.json
                | split.rc2.val.json
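
A small helper (a convenience sketch, not part of the repo) to check that both datasets are laid out as expected; run it from project_base_path:

    # Convenience sketch: verify the dataset directory layout described above.
    from pathlib import Path

    base = Path(".")  # run from project_base_path
    expected = [
        "fashionIQ_dataset/captions",
        "fashionIQ_dataset/images",
        "fashionIQ_dataset/image_splits",
        "cirr_dataset/dev",
        "cirr_dataset/test1",
        "cirr_dataset/cirr/captions",
        "cirr_dataset/cirr/image_splits",
    ]
    for rel in expected:
        print(("OK      " if (base / rel).is_dir() else "MISSING ") + rel)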

Feature Extraction

Before launching the demo, it is necessary to extract the features using the following command:

python extract_features.py
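
Conceptually, this step encodes every dataset image with the CLIP image encoder and stores the resulting features so the demo can rank candidates quickly at query time. A simplified illustration follows; the real logic, including the backbone actually used, is in extract_features.py:

    # Simplified illustration of CLIP feature extraction; see extract_features.py
    # for the implementation the demo actually uses.
    import torch, clip
    from PIL import Image
    from pathlib import Path

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model, preprocess = clip.load("RN50x4", device=device)  # backbone name is only an example

    features, names = [], []
    for image_path in Path("fashionIQ_dataset/images").glob("*.jpg"):
        image = preprocess(Image.open(image_path)).unsqueeze(0).to(device)
        with torch.no_grad():
            features.append(model.encode_image(image).float().cpu())
        names.append(image_path.stem)

    # Output file name is hypothetical; the repo defines its own storage format.
    torch.save({"features": torch.cat(features), "names": names}, "fashionIQ_index.pt")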

Run the Demo

Start the server and run the demo using the following command:

python app.py

By default, the server runs on port 5000 of the localhost address: http://127.0.0.1:5000/
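
If you need a different port, or want the demo reachable from other machines, the standard Flask options apply. The snippet below assumes app.py exposes the Flask application object as app, which is the usual convention but is not guaranteed here:

    # Illustrative only: run the demo on all interfaces, port 8080.
    # Assumes app.py exposes the Flask application object as `app`.
    from app import app

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8080)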

Demo overview

  • Initially, choose the dataset you want to experiment with: either the CIRR dataset or the FashionIQ dataset

  • Choose the reference image

  • Choose or manually insert the relative caption

  • Check out the results. By clicking on a retrieved image, you can use it as the reference image in a new query

Authors

  • Alberto Baldrati
  • Marco Bertini
  • Tiberio Uricchio
  • Alberto Del Bimbo

Citation

If you find this code useful for your research, please consider citing the demo paper:

@inproceedings{baldrati2022effective,
  title={Effective Conditioned and Composed Image Retrieval Combining CLIP-Based Features},
  author={Baldrati, Alberto and Bertini, Marco and Uricchio, Tiberio and Del Bimbo, Alberto},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={21466--21474},
  year={2022}
}

If you are interested in conditioned and composed image retrieval, take a look at our follow-up work:

@inproceedings{baldrati2022conditioned,
  title={Conditioned and Composed Image Retrieval Combining and Partially Fine-Tuning CLIP-Based Features},
  author={Baldrati, Alberto and Bertini, Marco and Uricchio, Tiberio and Del Bimbo, Alberto},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={4959--4968},
  year={2022}
}
