Official PyTorch implementation of "iColoriT: Towards Propagating Local Hint to the Right Region in Interactive Colorization by Leveraging Vision Transformer." (WACV 2023)

License: MIT License

Python 99.14% Shell 0.86%
colorization interactive vision-transformer interactive-colorization point-interaction


iColoriT (WACV 2023) Official Implementation

This is the official PyTorch implementation of the paper: iColoriT: Towards Propagating Local Hint to the Right Region in Interactive Colorization by Leveraging Vision Transformer.

iColoriT is pronounced as "I color it".


iColoriT: Towards Propagating Local Hint to the Right Region in Interactive Colorization by Leveraging Vision Transformer
Jooyeol Yun*, Sanghyeon Lee*, Minho Park*, and Jaegul Choo
KAIST
In WACV 2023. (* indicates equal contribution)

Paper: https://arxiv.org/abs/2207.06831
Project page: https://pmh9960.github.io/research/iColoriT/

Abstract: Point-interactive image colorization aims to colorize grayscale images when a user provides the colors for specific locations. It is essential for point-interactive colorization methods to appropriately propagate user-provided colors (i.e., user hints) in the entire image to obtain a reasonably colorized image with minimal user effort. However, existing approaches often produce partially colorized results due to the inefficient design of stacking convolutional layers to propagate hints to distant relevant regions. To address this problem, we present iColoriT, a novel point-interactive colorization Vision Transformer capable of propagating user hints to relevant regions, leveraging the global receptive field of Transformers. The self-attention mechanism of Transformers enables iColoriT to selectively colorize relevant regions with only a few local hints. Our approach colorizes images in real-time by utilizing pixel shuffling, an efficient upsampling technique that replaces the decoder architecture. Also, in order to mitigate the artifacts caused by pixel shuffling with large upsampling ratios, we present the local stabilizing layer. Extensive quantitative and qualitative results demonstrate that our approach highly outperforms existing methods for point-interactive colorization, producing accurately colorized images with a user's minimal effort.
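As an illustration of the pixel-shuffling upsampling mentioned in the abstract (a generic sketch of the technique, not the repository's code), the rearrangement can be written in a few lines of NumPy:

```python
import numpy as np

def pixel_shuffle(x: np.ndarray, r: int) -> np.ndarray:
    """Rearrange a (C*r*r, H, W) array into (C, H*r, W*r).

    Equivalent to torch.nn.PixelShuffle(r); it has no learned
    parameters, which is why it can replace a heavy decoder.
    """
    c_r2, h, w = x.shape
    c = c_r2 // (r * r)
    # Split the channel axis into (C, r, r), then interleave the two
    # r-axes with the spatial axes.
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)   # -> (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)

# Illustration: a ViT with 16x16 patches would predict 2*16*16 = 512
# values per token (the a and b color channels of every pixel in the
# patch); shuffling a 14x14 token grid recovers 224x224 color.
feat = np.random.rand(512, 14, 14)
ab = pixel_shuffle(feat, 16)         # shape (2, 224, 224)
```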

Demo 🎨

Try colorizing images yourself with the demo software!

Pretrained iColoriT

Checkpoints for iColoriT models are available at the links below.

| Model | Backbone | Link |
| --- | --- | --- |
| iColoriT | ViT-B | iColoriT (Google Drive) |
| iColoriT-S | ViT-S | iColoriT-S (Google Drive) |
| iColoriT-T | ViT-Ti | iColoriT-T (Google Drive) |

Testing

Installation

Our code is implemented with Python 3.8 and torch>=1.8.2.

git clone https://github.com/pmh9960/iColoriT.git
cd iColoriT
pip install -r requirements.txt

Testing iColoriT

You can generate colorization results when iColoriT is provided with randomly selected groundtruth hints from color images. Please fill in the path to the model checkpoints and validation directories in the scripts/infer.sh file.

bash scripts/infer.sh

Then, you can evaluate the results by running

bash scripts/eval.sh

The randomly sampled hints used in our paper are available at this link.

The code used for randomly sampling hint locations can be found in hint_generator.py
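For intuition, a simplified random hint sampler might look like the sketch below; the function name and parameters are illustrative, and the actual hint_generator.py may differ:

```python
import random

def sample_hint_locations(h, w, num_hints, hint_size=2, seed=None):
    """Pick `num_hints` distinct top-left corners for square hint
    patches inside an h x w image.

    A simplified sketch of what a random hint generator does; the
    real sampler may also randomize the number and size of hints.
    """
    rng = random.Random(seed)
    coords = [(y, x)
              for y in range(h - hint_size + 1)
              for x in range(w - hint_size + 1)]
    return rng.sample(coords, num_hints)

hints = sample_hint_locations(224, 224, num_hints=10, seed=0)
```

At evaluation time, the ground-truth ab values at these locations would be revealed to the model as simulated user hints.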

Training

First prepare an official ImageNet dataset with the following structure.

train
 └ id1
   └ image1.JPEG
   └ image2.JPEG
   └ ...
 └ id2
   └ image1.JPEG
   └ image2.JPEG
   └ ...

Please fill in the train/evaluation directories in the scripts/train.sh file and execute

bash scripts/train.sh

Citation

@InProceedings{Yun_2023_WACV,
    author    = {Yun, Jooyeol and Lee, Sanghyeon and Park, Minho and Choo, Jaegul},
    title     = {iColoriT: Towards Propagating Local Hints to the Right Region in Interactive Colorization by Leveraging Vision Transformer},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2023},
    pages     = {1787-1796}
}

icolorit's People

Contributors: pmh9960, yeolj00

icolorit's Issues

Does this model support colorization on multi-resolution images?

Thanks for your work!
In the training, validation, and inference stages, it seems the images are first resized to (224, 224) before the PSNR is calculated.
I want to run colorization on images of multiple resolutions, but the resize operation may degrade the PSNR at the original resolution when it is larger than (224, 224).
Could you suggest how to modify your code to address this problem?
Thank you!
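One common workaround (offered here as a general suggestion, not an official answer): run the model at 224x224, keep only the predicted ab chrominance, upsample it to the original resolution, and recombine it with the full-resolution L channel. Chrominance is low-frequency, so upsampling it degrades quality far less than upsampling the whole image. A nearest-neighbor NumPy sketch for an integer scale factor:

```python
import numpy as np

def recombine_full_res(L_full, ab_small, scale):
    """Upsample predicted ab channels (2, h, w) by an integer factor
    and attach them to the original-resolution L channel (H, W).

    Nearest-neighbor for brevity; a bilinear resize would be smoother.
    Assumes h * scale == H and w * scale == W.
    """
    ab_full = ab_small.repeat(scale, axis=1).repeat(scale, axis=2)
    return np.concatenate([L_full[None], ab_full], axis=0)  # (3, H, W) Lab

L = np.random.rand(448, 448)          # original-resolution luminance
ab = np.random.rand(2, 224, 224)      # model output at 224x224
lab = recombine_full_res(L, ab, scale=2)   # (3, 448, 448)
```

This keeps the luminance (and hence the sharpness) of the original image intact, so PSNR at full resolution mostly reflects color accuracy.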

the difference between DataTransformationFixedHint function and DataTransformationFixedHintContinuousCoords function

During the validation, inference, and testing phases, we can apply the DataTransformationFixedHint function to specify fixed coordinates. However, I've noticed another function named DataTransformationFixedHintContinuousCoords. Judging from its name, it appears to be designed for continuous coordinates.

As the released code uses the RandomHintGenerator function during the training phase to generate random hints, it's reasonable to assume that the same trained model should be capable of handling both sparse and continuous hints at the same time. If my understanding is incorrect, I would greatly appreciate your clarification.

Then, I have a couple of questions:

(1) Could you kindly explain the distinction between DataTransformationFixedHint and DataTransformationFixedHintContinuousCoords? I'm curious about the need for a specific function for continuous hints.

(2) The primary difference between the two functions seems to be an additional line of code in the call function:
hint_coords = [hint_coords[0][:idx] for idx in range(len(hint_coords[0]) + 1)]
As a result, the coordinates text file might have a different format compared to that of DataTransformationFixedHint. Would you be able to clarify the specific format that DataTransformationFixedHintContinuousCoords function requires? An illustrative example would be immensely helpful.
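For reference, the quoted line simply expands one hint sequence into all of its prefixes, presumably so the model can be evaluated after each successive hint (this is my reading of the snippet, not the authors' explanation):

```python
# One image with three hints, in the order the "user" placed them.
hint_coords = [[(3, 7), (10, 2), (5, 5)]]

# Every prefix of the hint sequence, from zero hints to all of them.
prefixes = [hint_coords[0][:idx] for idx in range(len(hint_coords[0]) + 1)]
# prefixes == [[], [(3, 7)], [(3, 7), (10, 2)], [(3, 7), (10, 2), (5, 5)]]
```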

(3) Will the trained model based on randomly generated hints perform differently on sparse hints versus continuous hints?

I genuinely appreciate your assistance and insights. Thank you in advance for your kind response!
Best Regards
HONGJIN

unrecognized arguments: --local-rank=0

Hello, I'm trying to re-train the model, but I hit this problem when I use train.sh:
iColoriT training scripts: error: unrecognized arguments: --local-rank=0

I am not passing any new arguments, and as far as I can tell the script defines local_rank, not local-rank.

Do you have any hint for this issue?
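A note on this error: newer torch.distributed launchers pass --local-rank (hyphen), while older scripts declare --local_rank (underscore). Assuming the training script uses argparse (an assumption about this repo), one workaround is to register both spellings for the same destination:

```python
import argparse

parser = argparse.ArgumentParser('training script')
# Accept both the old (underscore) and new (hyphen) spelling that
# torch.distributed launchers may pass to the worker processes.
parser.add_argument('--local_rank', '--local-rank', type=int, default=0,
                    dest='local_rank')

args = parser.parse_args(['--local-rank=0'])   # no longer unrecognized
```

Alternatively, launching with torchrun and reading the LOCAL_RANK environment variable avoids the command-line flag entirely.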

pretrained_cfg error

I could not run the demo because of the following error:

Traceback (most recent call last):
  File "D:\Abdulaziz\iColoriT\iColoriT\iColoriT_demo\icolorit_ui.py", line 63, in <module>
    model = get_model(args)
  File "D:\Abdulaziz\iColoriT\iColoriT\iColoriT_demo\icolorit_ui.py", line 44, in get_model
    model = create_model(
  File "D:\ProgramsFile\python\Python39\lib\site-packages\timm\models\factory.py", line 71, in create_model
    model = create_fn(pretrained=pretrained, pretrained_cfg=pretrained_cfg, **kwargs)
  File "D:\Abdulaziz\iColoriT\iColoriT\iColoriT_demo\modeling.py", line 566, in icolorit_base_4ch_patch16_224
    model = IColoriT(
TypeError: __init__() got an unexpected keyword argument 'pretrained_cfg'

So I fixed it by adjusting the following line

model = create_fn(pretrained=pretrained, pretrained_cfg=pretrained_cfg, **kwargs)

to

model = create_fn(pretrained=pretrained, **kwargs)

in the factory.py file from the timm package.
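A note: patching factory.py inside the installed timm package will be lost on upgrade. An alternative (untested sketch; `make_entrypoint` and `Dummy` are illustrative, with `Dummy` standing in for the repository's IColoriT class) is to drop the timm-injected keywords in the model entrypoint itself:

```python
def make_entrypoint(model_cls, **fixed):
    """Wrap a model class so keywords injected by newer timm versions
    are discarded before the constructor is called."""
    def entrypoint(pretrained=False, **kwargs):
        # timm >= 0.6 forwards these to every registered entrypoint;
        # older model classes don't accept them.
        for extra in ('pretrained_cfg', 'pretrained_cfg_overlay'):
            kwargs.pop(extra, None)
        return model_cls(**fixed, **kwargs)
    return entrypoint

class Dummy:                      # stand-in for a model class
    def __init__(self, depth=1):
        self.depth = depth

entry = make_entrypoint(Dummy)
m = entry(pretrained=False, pretrained_cfg=None, depth=3)   # no TypeError
```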

training

Hello, I'm trying to re-train the model, and I wanted to check whether I'm understanding the following instruction correctly.
When you say: "First prepare an official ImageNet dataset with the following structure.
train
 └ id1
   └ image1.JPEG
   └ image2.JPEG
   └ ...
 └ id2
   └ image1.JPEG
   └ image2.JPEG
   └ ... "

is id1, for example, the "n01728572" from n01728572.tar inside ILSVRC2012_img_train.tar?

TypeError: arguments did not match any overloaded call thrown when I attempt to click on the gui.

The demo launches fine, but when I click on the drawing pad to give a user input, the following error is thrown:

File "iColoriT_demo\gui\gui_gamut.py", line 74, in paintEvent
painter.drawLine(x - w, y, x + w, y)
TypeError: arguments did not match any overloaded call:
drawLine(self, QLineF): argument 1 has unexpected type 'float'
drawLine(self, QLine): argument 1 has unexpected type 'float'
drawLine(self, int, int, int, int): argument 1 has unexpected type 'float'
drawLine(self, QPoint, QPoint): argument 1 has unexpected type 'float'
drawLine(self, Union[QPointF, QPoint], Union[QPointF, QPoint]): argument 1 has unexpected type 'float'
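A note on this error: PyQt5's drawLine overloads accept only ints, while Python 3 arithmetic on coordinates produces floats. A likely fix (untested against this repo; `int_point` is an illustrative helper) is to round the coordinates before drawing:

```python
def int_point(*vals):
    """Round float pixel coordinates so strict Qt overloads accept them."""
    return tuple(int(round(v)) for v in vals)

# In gui_gamut.py's paintEvent the failing call would then become:
#   painter.drawLine(*int_point(x - w, y, x + w, y))
x, y, w = 10.5, 20.0, 3.25
coords = int_point(x - w, y, x + w, y)   # (7, 20, 14, 20)
```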

About 'VAL_HINT_DIR'

Thanks for your work!

When I want to train, I don't understand the use of 'VAL_HINT_DIR'. Must it be filled in?

No file found: debug/samples.pkl

When I try to run train.sh, I get the following error:

Traceback (most recent call last):
  File "train.py", line 276, in <module>
    main(args)
  File "train.py", line 146, in main
    dataset_train = build_pretraining_dataset(args)
  File "/home/PycharmProjects/Year_2021/iColoriT-main/datasets.py", line 171, in build_pretraining_dataset
    return ImageFolder(args.data_path, transform=transform)
  File "/home/PycharmProjects/Year_2021/iColoriT-main/dataset_folder.py", line 248, in __init__
    super(ImageFolder, self).__init__(root, loader, IMG_EXTENSIONS if is_valid_file is None else None,
  File "/home/PycharmProjects/Year_2021/iColoriT-main/dataset_folder.py", line 123, in __init__
    with open('debug/samples.pkl', 'rb') as f:

Could you tell me the meaning and purpose of this debug/samples.pkl?
For my own training dataset, how do I create the corresponding samples.pkl?
Many thanks for your kind help!

how do you use multiple gpu (only in 1 server?)

Good afternoon.

Your code works great even with 1 GPU.
I want to train on 4 GPUs, so I tried changing --world_size 4 in the arguments of train.py, but the system still seems to use only 1 GPU.
If I want to use more than 1 GPU, what part of the code should I change or add? It seems that your code sets up DDP around line 200.

Thank you in advance!
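A note: with PyTorch DDP, passing --world_size alone does not spawn extra worker processes; a launcher such as torchrun must start one process per GPU (e.g. `torchrun --nproc_per_node=4 train.py ...`), and it communicates the topology through environment variables, which the script then reads roughly like this (a generic sketch, not this repository's exact code):

```python
import os

# torchrun (or the older torch.distributed.launch) starts one process
# per GPU and sets these variables for each worker; a DDP training
# script reads them instead of trusting a hand-set --world_size flag.
world_size = int(os.environ.get('WORLD_SIZE', 1))   # total worker count
local_rank = int(os.environ.get('LOCAL_RANK', 0))   # GPU index on this node
print(world_size, local_rank)
```

So the usual fix is to change the launch command rather than the model code: run train.sh through torchrun with --nproc_per_node set to the number of GPUs.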

flops

Could you tell me how you measured FLOPs?
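A note: the exact measurement setup isn't stated here, but FLOPs for a plain ViT backbone can be approximated analytically. The sketch below counts multiply-accumulates (the convention most papers report) for a generic ViT-B/16 at 224x224, which is an assumption, not the authors' confirmed configuration:

```python
def vit_macs(n_tokens, dim, depth, mlp_ratio=4):
    """Approximate multiply-accumulates for a plain ViT encoder.

    Ignores layer norms, biases, the patch embedding and any task
    head, which contribute comparatively little.
    """
    attn = 4 * n_tokens * dim * dim          # QKV + output projections
    attn += 2 * n_tokens * n_tokens * dim    # Q @ K^T and attention @ V
    mlp = 2 * mlp_ratio * n_tokens * dim * dim
    return depth * (attn + mlp)

# ViT-B/16 at 224x224: 14*14 patches + 1 CLS token, dim 768, 12 layers.
total = vit_macs(197, 768, 12)
print(f"{total / 1e9:.1f} GMACs")   # roughly 17.4
```

Tools such as fvcore's FlopCountAnalysis can produce the same kind of number directly from a torch model, though which operations are counted varies between tools.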
