
instcolorization's Introduction

[CVPR 2020] Instance-aware Image Colorization

Open In Colab

Image colorization is inherently an ill-posed problem with multi-modal uncertainty. Previous methods leverage deep neural networks to map input grayscale images directly to plausible color outputs. Although these learning-based methods have shown impressive performance, they usually fail on input images that contain multiple objects. The leading cause is that existing models perform learning and colorization on the entire image. In the absence of a clear figure-ground separation, these models cannot effectively locate and learn meaningful object-level semantics. In this paper, we propose a method for achieving instance-aware colorization. Our network architecture leverages an off-the-shelf object detector to obtain cropped object images and uses an instance colorization network to extract object-level features. We use a similar network to extract the full-image features and apply a fusion module to fuse the object-level and image-level features to predict the final colors. Both the colorization networks and the fusion modules are learned from a large-scale dataset. Experimental results show that our work outperforms existing methods on different quality metrics and achieves state-of-the-art performance on image colorization.

Instance-aware Image Colorization
Jheng-Wei Su, Hung-Kuo Chu, and Jia-Bin Huang
In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020.

Prerequisites

  • CUDA 10.1
  • Python3
  • Pytorch >= 1.5
  • Detectron2
  • OpenCV-Python
  • Pillow/scikit-image
  • Please refer to env.yml for the detailed dependencies.

Getting Started

  1. Clone this repo:
git clone https://github.com/ericsujw/InstColorization
cd InstColorization
  2. Install conda.
  3. Install all the dependencies:
conda env create --file env.yml
  4. Switch to the conda environment:
conda activate instacolorization
  5. Install other dependencies:
sh scripts/install.sh

Pretrained Model

  1. Download the pretrained models from Google Drive:
sh scripts/download_model.sh
  2. The pretrained models will now be placed in checkpoints.

Instance Prediction

Please run the command below to predict bounding boxes for all the images in the example folder.

python inference_bbox.py --test_img_dir example

All the prediction results will be saved in the example_bbox folder.
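For reference, here is a minimal sketch of what this step does, assuming a Detectron2 Mask R-CNN predictor; the config choice, output file names, and the .npz layout below are illustrative and may differ from the actual inference_bbox.py.

import os
import cv2
import numpy as np
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_X_101_32x8d_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-InstanceSegmentation/mask_rcnn_X_101_32x8d_FPN_3x.yaml")
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.7
predictor = DefaultPredictor(cfg)

os.makedirs("example_bbox", exist_ok=True)
for name in os.listdir("example"):
    img = cv2.imread(os.path.join("example", name))          # BGR image, as Detectron2 expects
    outputs = predictor(img)                                  # dict with an "instances" field
    boxes = outputs["instances"].pred_boxes.tensor.cpu().numpy()
    scores = outputs["instances"].scores.cpu().numpy()
    np.savez(os.path.join("example_bbox", os.path.splitext(name)[0] + ".npz"),
             bbox=boxes, scores=scores)                       # one file of boxes per image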

Colorize Images

Please run the command below to colorize all the images in the example folder.

python test_fusion.py --name test_fusion --sample_p 1.0 --model fusion --fineSize 256 --test_img_dir example --results_img_dir results

All the colorized results will be saved in the results folder.

Training the Model

Please follow this tutorial to train the colorization model.

License

This work is licensed under MIT License. See LICENSE for details.

Citation

If you find our code/models useful, please consider citing our paper:

@inproceedings{Su-CVPR-2020,
  author = {Su, Jheng-Wei and Chu, Hung-Kuo and Huang, Jia-Bin},
  title = {Instance-aware Image Colorization},
  booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year = {2020}
}

Acknowledgments

Our code borrows heavily from the amazing colorization-pytorch repository.

instcolorization's People

Contributors

ericsujw, jbhuang0604


instcolorization's Issues

Why are the output images very blurred and the resolution is very low?

I use the code:
python test_fusion.py --name test_fusion --sample_p 1.0 --model fusion --fineSize 256 --test_img_dir example --results_img_dir results

But the results I get are very blurred; I can't get the high-resolution vegetable image shown in your Colab, only a blurred, square-shaped one. How can I solve this? Thank you!

More explanation regarding fusion module

Hi there,

I am working on re-implementing your paper in TensorFlow. The details you provided regarding the instance and full-image networks are straightforward, but could you please provide more instructions on how exactly to fuse them to get the results? In particular, I am confused about how to use the bounding-box coordinates of each instance to mark its location inside the full image and train on it.
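Not the authors' code, but a hedged PyTorch sketch of the fusion idea the paper describes: resize each instance's feature map to its bounding-box region of the full-image feature map, predict a per-pixel weight for each branch with small convolutional heads, and blend with a softmax over branches. The names full_weight_head / inst_weight_head and the exact shapes are assumptions.

import torch
import torch.nn.functional as F

def fuse_features(full_feat, inst_feats, boxes, full_weight_head, inst_weight_head):
    """full_feat: [C, H, W]; inst_feats: list of [C, h, w]; boxes: list of integer
    (x1, y1, x2, y2) given in the coordinate system of this feature map."""
    H, W = full_feat.shape[1:]
    inst_maps = [full_feat.unsqueeze(0)]                                    # [1, C, H, W]
    weight_maps = [full_weight_head(full_feat.unsqueeze(0))]                # [1, 1, H, W]
    for feat, (x1, y1, x2, y2) in zip(inst_feats, boxes):
        h, w = y2 - y1, x2 - x1
        # Resize the instance feature and its weight map to the box size,
        # then paste them into zero/-inf padded full-resolution canvases.
        resized = F.interpolate(feat.unsqueeze(0), size=(h, w), mode='bilinear', align_corners=False)
        weight = F.interpolate(inst_weight_head(feat.unsqueeze(0)), size=(h, w), mode='bilinear', align_corners=False)
        canvas = torch.zeros(1, feat.shape[0], H, W)
        wcanvas = torch.full((1, 1, H, W), float('-inf'))                   # -inf -> zero weight outside the box
        canvas[:, :, y1:y2, x1:x2] = resized
        wcanvas[:, :, y1:y2, x1:x2] = weight
        inst_maps.append(canvas)
        weight_maps.append(wcanvas)
    # Softmax over branches so the weights sum to 1 at every pixel.
    weights = torch.softmax(torch.cat(weight_maps, dim=0), dim=0)           # [N+1, 1, H, W]
    stacked = torch.cat(inst_maps, dim=0)                                   # [N+1, C, H, W]
    return (weights * stacked).sum(dim=0)                                   # [C, H, W]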

Verifying the paper's results

Hi, I ran the images from the paper with the open-source code, and the results do not match those shown in the paper. Could there be a setting I have gotten wrong?

Video colorization

Hi, great work!

Any chance of using your approach on videos? I've tried it a little and the output flickers a lot.

Training is very slow

The third training stage is very slow on a V100. Is there something wrong?


Training on my own dataset (stage = instance & fusion)

When I try to train on my own dataset, I find that only stage=full runs well; stage=instance or stage=fusion raises an exception about getting an empty tensor.

So I can get latest_net_G.pth, but I can't get latest_net_GF or latest_net_Gcomp.

Thanks.

Google Colab demo problem

Using the unmodified Google Colab notebook, I got this error:

model = create_model(opt)
model.setup_to_test('coco_finetuned_mask_256_ffs')

initialize network with normal
initialize network with normal
initialize network with normal
model [FusionModel] was created
load Fusion model from checkpoints/coco_finetuned_mask_256_ffs/latest_net_GF.pth

FileNotFoundError                         Traceback (most recent call last)
<ipython-input> in <module>()
      1 model = create_model(opt)
----> 2 model.setup_to_test('coco_finetuned_mask_256_ffs')

3 frames
/usr/local/lib/python3.6/dist-packages/torch/serialization.py in __init__(self, name, mode)
    213 class _open_file(_opener):
    214     def __init__(self, name, mode):
--> 215         super(_open_file, self).__init__(open(name, mode))
    216
    217     def __exit__(self, *args):

FileNotFoundError: [Errno 2] No such file or directory: 'checkpoints/coco_finetuned_mask_256_ffs/latest_net_GF.pth'

Resolve Package Not Found

Hi,

Thanks for the code! I am getting this issue while trying to install the dependencies with conda env create --file env.yml. Is there any solution to this? I have tried adding the packages to the pip section.

Thanks in advance

Solving environment: failed

ResolvePackageNotFound:

  • libpng==1.6.37=hbc83047_0
  • tk==8.6.8=hbc83047_0
  • xz==5.2.4=h14c3975_4
  • dbus==1.13.12=h746ee38_0
  • cython==0.29.15=py37he6710b0_0
  • pyqt==5.9.2=py37h05f1152_2
  • sqlite==3.31.1=h7b6447c_0
  • libtiff==4.1.0=h2733197_0
  • openssl==1.1.1d=h516909a_0
  • libedit==3.1.20181209=hc058e9b_0
  • ncurses==6.1=he6710b0_1
  • kiwisolver==1.1.0=py37he6710b0_0
  • cudatoolkit==10.1.243=h6bb024c_0
  • zlib==1.2.11=h7b6447c_3
  • libxml2==2.9.9=hea5a465_1
  • freetype==2.9.1=h8a8886c_1
  • libstdcxx-ng==9.1.0=hdf63c60_0
  • qt==5.9.7=h5867ecd_1
  • numpy==1.18.1=py37h4f9e942_0
  • python==3.7.6=h0371630_2
  • ld_impl_linux-64==2.33.1=h53a641e_7
  • glib==2.63.1=h5a9c865_0
  • libgcc-ng==9.1.0=hdf63c60_0
  • sip==4.19.8=py37hf484d3e_0
  • libuuid==1.0.3=h1bed415_2
  • fontconfig==2.13.0=h9420a91_0
  • libgfortran-ng==7.3.0=hdf63c60_0
  • scipy==1.4.1=py37h0b6359f_0
  • jpeg==9b=h024ee3a_2
  • expat==2.2.6=he6710b0_0
  • gst-plugins-base==1.14.0=hbbd80ab_1
  • ninja==1.9.0=py37hfd86e86_0
  • ptyprocess==0.6.0=py37_0
  • matplotlib-base==3.1.3=py37hef1b27d_0
  • gstreamer==1.14.0=hb453b48_1
  • libffi==3.2.1=hd88cf55_4
  • pcre==8.43=he6710b0_0
  • zstd==1.3.7=h0b5b093_0
  • readline==7.0=h7b6447c_5
  • mkl_fft==1.0.15=py37ha843d7b_0
  • mkl-service==2.3.0=py37he904b0f_0
  • tornado==6.0.3=py37h7b6447c_3
  • cytoolz==0.10.1=py37h7b6447c_0
  • icu==58.2=h9c2bf20_1
  • pywavelets==1.1.1=py37h7b6447c_0
  • numpy-base==1.18.1=py37hde5b4d6_1
  • mkl_random==1.1.0=py37hd6b4f25_0
  • pillow==7.0.0=py37hb39fc2d_0
  • scikit-learn==0.22.1=py37hd81dba3_0
  • libxcb==1.13=h1bed415_1
  • scikit-image==0.16.2=py37h0573a6f_0

How to achieve your test result on ImageNet?

Hi, thanks for sharing your code.

I followed your test steps and tried to test on the ImageNet test set, and I found that the result is different from what you released on the website https://ericsujw.github.io/InstColorization/

Could you provide detailed steps on how to use your pretrained model to obtain the test result on ImageNet? Do you use the same setting to test on ImageNet as the setting on COCO-Stuff?

FineTuning Model

First of all, great job on the project! I was wondering whether there is any implemented functionality to fine-tune this model, i.e. introducing another dataset to train on top of the already trained model.

Thanks in advance.

Detectron2 on Windows

How do I import it on Windows? When I use this project on Windows, what should I change?

How to train the model?

Great work! And thanks for sharing the code.
I tried to combine another colorization method with this segmentation-colorization-fusion framework on an Ubuntu computer with a GTX 1080 Ti GPU, but I'm confused because I did not find the file for training these models in your repo.
Could you share how you trained the models?

IndexError: too many indices for tensor of dimension 3

Firstly, thanks for sharing your code. I use part of your code to convert images from RGB to Lab color space. I didn't change your code, but I encountered an error.

File "/content/drive/My Drive/BICYCLEGAN/dataset.py", line 36, in __getitem__
    img_lab = rgb2lab(img)
  File "/content/drive/My Drive/BICYCLEGAN/colorspace.py", line 126, in rgb2lab
    lab = xyz2lab(rgb2xyz(rgb))
  File "/content/drive/My Drive/BICYCLEGAN/colorspace.py", line 45, in rgb2xyz
    x = .412453 * rgb[:, 0, :, :] + .357580 * rgb[:, 1, :, :] + .180423 * rgb[:, 2, :, :]
**IndexError: too many indices for tensor of dimension 3**

Could you give me some advice to solve this problem? Thanks.
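For what it's worth, this error usually means the conversion helpers expect a batched NCHW tensor (the traceback shows them indexing rgb[:, 0, :, :]) while a single CHW image was passed. A hedged one-line fix, assuming img is a single 3×H×W tensor:

img_lab = rgb2lab(img.unsqueeze(0))[0]   # add a batch dimension before conversion, then drop it again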

fusion_dataset.py

Hi, I am confused about the meaning of box_info_2x / box_info_4x / box_info_8x in fusion_dataset.py. Any help would be greatly appreciated.
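A hedged reading, not the authors' documentation: the fusion module blends features at several encoder resolutions, so the same bounding box has to be expressed in the coordinates of feature maps downsampled by 2, 4, and 8. The helper below only illustrates that rescaling; the actual fields in fusion_dataset.py may carry additional padding/resize information.

def scale_box(box, factor):
    # box = (x1, y1, x2, y2) in the coordinates of the 256x256 network input
    x1, y1, x2, y2 = box
    return (x1 // factor, y1 // factor,
            max(x1 // factor + 1, x2 // factor),   # keep at least one pixel after scaling
            max(y1 // factor + 1, y2 // factor))

box = (34, 50, 180, 220)                  # illustrative box in input coordinates
box_info_2x = scale_box(box, 2)           # box on the feature map at 1/2 resolution
box_info_4x = scale_box(box, 4)           # 1/4 resolution
box_info_8x = scale_box(box, 8)           # 1/8 resolution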

Running on CPU

Is there any way to run the script on a machine that doesn't have GPUs?
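The repository targets CUDA, so treat this as a hedged sketch rather than a supported path: PyTorch itself can run on CPU if every checkpoint is loaded with map_location and nothing is moved to CUDA.

import torch

state_dict = torch.load('checkpoints/coco_finetuned_mask_256_ffs/latest_net_GF.pth',
                        map_location=torch.device('cpu'))
# Any .cuda() / .to('cuda') calls in the test scripts would also need to be removed,
# and the Detectron2 predictor can be forced to CPU with cfg.MODEL.DEVICE = 'cpu'.
# Expect inference to be much slower than on a GPU.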

Prediction on full-size images without transformation?

Hi!
I launched a training session with your model and I have a few questions, if I may:

  • Training seems to only output latest_net_G.pth (if I don't copy your model weights, I get FileNotFoundError: [Errno 2] No such file or directory: 'checkpoints/coco_mask/latest_net_GF.pth').
  • When I finally use your model weights, I end up having another issue: the output only seems to be 256 (or 512, 768, etc.) px wide, square images, while I'd need to run it on "real world" data. Is there something I need to do for this?

[Detectron2] RuntimeError: CUDA error: no kernel image is available for execution on the device

Hi,

Thanks for the great work! I am reproducing your work following README_TRAIN.md step by step, but encountered an error while running the Instance Prediction step:

sh scripts/prepare_train_box.sh

The error I got was:

Traceback (most recent call last):
  File "inference_bbox.py", line 51, in <module>
    outputs = predictor(l_stack)
  File "/home/ztu/miniconda3/envs/instacolorization/lib/python3.7/site-packages/detectron2/engine/defaults.py", line 218, in __call__
    predictions = self.model([inputs])[0]
  File "/home/ztu/miniconda3/envs/instacolorization/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ztu/miniconda3/envs/instacolorization/lib/python3.7/site-packages/detectron2/modeling/meta_arch/rcnn.py", line 108, in forward
    return self.inference(batched_inputs)
  File "/home/ztu/miniconda3/envs/instacolorization/lib/python3.7/site-packages/detectron2/modeling/meta_arch/rcnn.py", line 161, in inference
    features = self.backbone(images.tensor)
  File "/home/ztu/miniconda3/envs/instacolorization/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ztu/miniconda3/envs/instacolorization/lib/python3.7/site-packages/detectron2/modeling/backbone/fpn.py", line 123, in forward
    bottom_up_features = self.bottom_up(x)
  File "/home/ztu/miniconda3/envs/instacolorization/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ztu/miniconda3/envs/instacolorization/lib/python3.7/site-packages/detectron2/modeling/backbone/resnet.py", line 454, in forward
    x = self.stem(x)
  File "/home/ztu/miniconda3/envs/instacolorization/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ztu/miniconda3/envs/instacolorization/lib/python3.7/site-packages/detectron2/modeling/backbone/resnet.py", line 391, in forward
    x = F.relu_(x)
RuntimeError: CUDA error: no kernel image is available for execution on the device

Do you know how to fix this problem? Thanks!
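This error usually means the prebuilt Detectron2 wheel was compiled for GPU architectures that do not include the installed card. A quick, hedged check of what the environment actually has (not taken from the repo):

import torch

print(torch.__version__, torch.version.cuda)   # the torch/CUDA pair the detectron2 wheel must match
print(torch.cuda.get_device_capability(0))     # e.g. (7, 0) for a V100, (7, 5) for a Turing GPU
# If this compute capability is not covered by the detectron2==0.1.2+cu101 wheel,
# rebuilding detectron2 from source against the installed torch/CUDA is the usual fix.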

FileNotFoundError in Colab Demo

Using the unmodified Google Colab Demo.
I think it has something to do with the zip file not being downloaded correctly. The size of the zip is only around 3KB. Maybe the Google Drive download is being rate limited?

Error:

initialize network with normal
initialize network with normal
initialize network with normal
model [FusionModel] was created
load Fusion model from checkpoints/coco_finetuned_mask_256_ffs/latest_net_GF.pth
---------------------------------------------------------------------------
FileNotFoundError                         Traceback (most recent call last)
<ipython-input-11-e948e7e9ae19> in <module>()
      1 model = create_model(opt)
----> 2 model.setup_to_test('coco_finetuned_mask_256_ffs')

3 frames
/content/InstColorization/models/fusion_model.py in setup_to_test(self, fusion_weight_path)
     95         GF_path = 'checkpoints/{0}/latest_net_GF.pth'.format(fusion_weight_path)
     96         print('load Fusion model from %s' % GF_path)
---> 97         GF_state_dict = torch.load(GF_path)
     98 
     99         # G_path = 'checkpoints/coco_finetuned_mask_256/latest_net_G.pth' # fine tuned on cocostuff

/usr/local/lib/python3.6/dist-packages/torch/serialization.py in load(f, map_location, pickle_module, **pickle_load_args)
    582         pickle_load_args['encoding'] = 'utf-8'
    583 
--> 584     with _open_file_like(f, 'rb') as opened_file:
    585         if _is_zipfile(opened_file):
    586             with _open_zipfile_reader(f) as opened_zipfile:

/usr/local/lib/python3.6/dist-packages/torch/serialization.py in _open_file_like(name_or_buffer, mode)
    232 def _open_file_like(name_or_buffer, mode):
    233     if _is_path(name_or_buffer):
--> 234         return _open_file(name_or_buffer, mode)
    235     else:
    236         if 'w' in mode:

/usr/local/lib/python3.6/dist-packages/torch/serialization.py in __init__(self, name, mode)
    213 class _open_file(_opener):
    214     def __init__(self, name, mode):
--> 215         super(_open_file, self).__init__(open(name, mode))
    216 
    217     def __exit__(self, *args):

FileNotFoundError: [Errno 2] No such file or directory: 'checkpoints/coco_finetuned_mask_256_ffs/latest_net_GF.pth'

Checkpoints and COCO-Stuff settings

Hello, after reading your paper I want to run the code, but I have some questions.
1. I don't know where exactly the checkpoints go. Is the structure checkpoints/checkpoints/coco_fine..., or checkpoints/coco_fine...? In other words, do the three pretrained model folders sit directly under checkpoints, or under checkpoints/checkpoints?
2. Is the COCO-Stuff dataset located at train_data/cocostuff/train2017.zip, i.e. under train_data/cocostuff rather than train_data/train?
Sorry, my coding skills are not great, so these are simple questions. Thanks for your guidance.

Many errors at test time when training on my own dataset

I follow this to train: https://github.com/ericsujw/InstColorization/blob/master/README_TRAIN.md

When I run
python test_fusion.py --name test_fusion --sample_p 1.0 --model fusion --fineSize 256 --test_img_dir example --results_img_dir results

I get many errors:

#Testing images = 376
initialize network with normal
initialize network with normal
initialize network with normal
model [FusionModel] was created
load Fusion model from checkpoints/coco_mask/latest_net_GF.pth
0%| | 0/376 [00:00<?, ?it/s]Traceback (most recent call last):
File "", line 1, in
File "/home/amax/anaconda3/envs/instacolorization/lib/python3.7/multiprocessing/spawn.py", line 105, in spawn_main
exitcode = _main(fd)
File "/home/amax/anaconda3/envs/instacolorization/lib/python3.7/multiprocessing/spawn.py", line 114, in _main
prepare(preparation_data)
File "/home/amax/anaconda3/envs/instacolorization/lib/python3.7/multiprocessing/spawn.py", line 225, in prepare
_fixup_main_from_path(data['init_main_from_path'])
File "/home/amax/anaconda3/envs/instacolorization/lib/python3.7/multiprocessing/spawn.py", line 277, in _fixup_main_from_path
run_name="mp_main")
File "/home/amax/anaconda3/envs/instacolorization/lib/python3.7/runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "/home/amax/anaconda3/envs/instacolorization/lib/python3.7/runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "/home/amax/anaconda3/envs/instacolorization/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/data/likengpeng/InstColorization/test_fusion.py", line 71, in
lab_image = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
cv2.error: OpenCV(4.2.0) /io/opencv/modules/imgproc/src/color.cpp:182: error: (-215:Assertion failed) !_src.empty() in function 'cvtColor'

Traceback (most recent call last):
File "", line 1, in
File "/home/amax/anaconda3/envs/instacolorization/lib/python3.7/multiprocessing/spawn.py", line 105, in spawn_main
exitcode = _main(fd)
File "/home/amax/anaconda3/envs/instacolorization/lib/python3.7/multiprocessing/spawn.py", line 114, in _main
prepare(preparation_data)
File "/home/amax/anaconda3/envs/instacolorization/lib/python3.7/multiprocessing/spawn.py", line 225, in prepare
_fixup_main_from_path(data['init_main_from_path'])
File "/home/amax/anaconda3/envs/instacolorization/lib/python3.7/multiprocessing/spawn.py", line 277, in _fixup_main_from_path
run_name="mp_main")
File "/home/amax/anaconda3/envs/instacolorization/lib/python3.7/runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "/home/amax/anaconda3/envs/instacolorization/lib/python3.7/runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "/home/amax/anaconda3/envs/instacolorization/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/data/likengpeng/InstColorization/test_fusion.py", line 71, in
lab_image = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
cv2.error: OpenCV(4.2.0) /io/opencv/modules/imgproc/src/color.cpp:182: error: (-215:Assertion failed) !_src.empty() in function 'cvtColor'

Traceback (most recent call last):
File "/home/amax/anaconda3/envs/instacolorization/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 761, in _try_get_data
data = self._data_queue.get(timeout=timeout)
File "/home/amax/anaconda3/envs/instacolorization/lib/python3.7/multiprocessing/queues.py", line 104, in get
if not self._poll(timeout):
File "/home/amax/anaconda3/envs/instacolorization/lib/python3.7/multiprocessing/connection.py", line 257, in poll
return self._poll(timeout)
File "/home/amax/anaconda3/envs/instacolorization/lib/python3.7/multiprocessing/connection.py", line 414, in _poll
r = wait([self], timeout)
File "/home/amax/anaconda3/envs/instacolorization/lib/python3.7/multiprocessing/connection.py", line 920, in wait
ready = selector.select(timeout)
File "/home/amax/anaconda3/envs/instacolorization/lib/python3.7/selectors.py", line 415, in select
fd_event_list = self._selector.poll(timeout)
File "/home/amax/anaconda3/envs/instacolorization/lib/python3.7/site-packages/torch/utils/data/_utils/signal_handling.py", line 66, in handler
_error_if_any_worker_fails()
RuntimeError: DataLoader worker (pid 55739) exited unexpectedly with exit code 1. Details are lost due to multiprocessing. Rerunning with num_workers=0 may give better error trace.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "test_fusion.py", line 41, in
for data_raw in tqdm(dataset_loader, dynamic_ncols=True):
File "/home/amax/anaconda3/envs/instacolorization/lib/python3.7/site-packages/tqdm/std.py", line 1108, in iter
for obj in iterable:
File "/home/amax/anaconda3/envs/instacolorization/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 345, in next
data = self._next_data()
File "/home/amax/anaconda3/envs/instacolorization/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 841, in _next_data
idx, data = self._get_data()
File "/home/amax/anaconda3/envs/instacolorization/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 808, in _get_data
success, data = self._try_get_data()
File "/home/amax/anaconda3/envs/instacolorization/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 774, in _try_get_data
raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str))
RuntimeError: DataLoader worker (pid(s) 55739) exited unexpectedly
0%| | 0/376 [00:02<?, ?it/s]

How can I solve them? Help me please!
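The cv2.error "(-215:Assertion failed) !_src.empty() in function 'cvtColor'" almost always means cv2.imread returned None, i.e. an image path in the dataset is wrong or the file is unreadable. A hedged guard around the failing call; img_path below is a placeholder for whatever path the dataset provides.

import cv2

img_path = 'example/some_image.jpg'   # placeholder path
img = cv2.imread(img_path)
if img is None:
    raise FileNotFoundError('Could not read image: %s' % img_path)
lab_image = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)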

How to train with my own dataset?

Hi!
I would like to know, how to use the code to train with my own dataset?
Is there a code to run the training step in the repository?

Best regards,

Matheus Santos.

Render higher resolutions

First of all – thanks very much for making your code available. Impressive work! Very appreciated.

I tried to render higher resolutions than the default 256 x 256px and changed the argparse variables like so:

opt.fineSize = 512
opt.loadSize = 512

However, the Colab Pro notebook yields this error:

RuntimeError: CUDA out of memory. Tried to allocate 1.12 GiB (GPU 0; 15.90 GiB total capacity; 14.61 GiB already allocated; 502.94 MiB free; 14.70 GiB reserved in total by PyTorch)

How can I fix this or render higher resolutions in another way?

Thanks in advance!

Fusion Dataset error

Hi, I have a question about my error.

I just ran train.py for the fusion stage and got an error like this:
"stack expects each tensor to be equal size, but got [2, 3, 256, 256] at entry 0 and [8, 3, 256, 256] at entry 1"
How can I fix it?
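A hedged guess at the cause: the fusion dataset returns one cropped tensor per detected instance, so different samples have different first dimensions ([2, 3, 256, 256] vs. [8, 3, 256, 256]) and the default collate function cannot stack them. Training with batch_size=1, or a list-returning collate_fn like the sketch below (dataset stands in for the fusion dataset object), avoids the stacking step.

from torch.utils.data import DataLoader

def no_stack_collate(batch):
    # Return the samples as a list instead of stacking them into one tensor.
    return batch

loader = DataLoader(dataset, batch_size=1, shuffle=True, collate_fn=no_stack_collate)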

Colab notebook doesn't work

The Colab environment no longer provides the required package versions, so the installation cell fails:

Looking in links: https://download.pytorch.org/whl/cu101/torch_stable.html
ERROR: Could not find a version that satisfies the requirement torch==1.5 (from versions: 1.11.0, 1.12.0, 1.12.1, 1.13.0, 1.13.1, 2.0.0, 2.0.1, 2.1.0, 2.1.1, 2.1.2)
ERROR: No matching distribution found for torch==1.5
Requirement already satisfied: cython in /usr/local/lib/python3.10/dist-packages (3.0.8)
Collecting pyyaml==5.1
  Using cached PyYAML-5.1.tar.gz (274 kB)
  error: subprocess-exited-with-error
  
  × python setup.py egg_info did not run successfully.
  │ exit code: 1
  ╰─> See above for output.
  
  note: This error originates from a subprocess, and is likely not a problem with pip.
  Preparing metadata (setup.py) ... error
error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─> See above for output.

note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
Collecting git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI
  Cloning https://github.com/cocodataset/cocoapi.git to /tmp/pip-req-build-688i0rmv
  Running command git clone --filter=blob:none --quiet https://github.com/cocodataset/cocoapi.git /tmp/pip-req-build-688i0rmv
  Resolved https://github.com/cocodataset/cocoapi.git to commit 8c9bcc3cf640524c4c20a9c40e89cb6a2f2fa0e9
  Preparing metadata (setup.py) ... done
Requirement already satisfied: setuptools>=18.0 in /usr/local/lib/python3.10/dist-packages (from pycocotools==2.0) (67.7.2)
Requirement already satisfied: cython>=0.27.3 in /usr/local/lib/python3.10/dist-packages (from pycocotools==2.0) (3.0.8)
Requirement already satisfied: matplotlib>=2.1.0 in /usr/local/lib/python3.10/dist-packages (from pycocotools==2.0) (3.7.1)
Requirement already satisfied: contourpy>=1.0.1 in /usr/local/lib/python3.10/dist-packages (from matplotlib>=2.1.0->pycocotools==2.0) (1.2.0)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.10/dist-packages (from matplotlib>=2.1.0->pycocotools==2.0) (0.12.1)
Requirement already satisfied: fonttools>=4.22.0 in /usr/local/lib/python3.10/dist-packages (from matplotlib>=2.1.0->pycocotools==2.0) (4.47.2)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.10/dist-packages (from matplotlib>=2.1.0->pycocotools==2.0) (1.4.5)
Requirement already satisfied: numpy>=1.20 in /usr/local/lib/python3.10/dist-packages (from matplotlib>=2.1.0->pycocotools==2.0) (1.23.5)
Requirement already satisfied: packaging>=20.0 in /usr/local/lib/python3.10/dist-packages (from matplotlib>=2.1.0->pycocotools==2.0) (23.2)
Requirement already satisfied: pillow>=6.2.0 in /usr/local/lib/python3.10/dist-packages (from matplotlib>=2.1.0->pycocotools==2.0) (9.4.0)
Requirement already satisfied: pyparsing>=2.3.1 in /usr/local/lib/python3.10/dist-packages (from matplotlib>=2.1.0->pycocotools==2.0) (3.1.1)
Requirement already satisfied: python-dateutil>=2.7 in /usr/local/lib/python3.10/dist-packages (from matplotlib>=2.1.0->pycocotools==2.0) (2.8.2)
Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.10/dist-packages (from python-dateutil>=2.7->matplotlib>=2.1.0->pycocotools==2.0) (1.16.0)
Requirement already satisfied: dominate==2.4.0 in /usr/local/lib/python3.10/dist-packages (2.4.0)
Looking in links: https://dl.fbaipublicfiles.com/detectron2/wheels/cu101/index.html
ERROR: Could not find a version that satisfies the requirement detectron2==0.1.2 (from versions: none)
ERROR: No matching distribution found for detectron2==0.1.2

How to colorize an image with no instances

Hello, I would like to ask: if there is no instance in the picture, that is, if the pre-processing detection network finds no objects, can the picture still be colorized?
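One hedged way to handle this case: if the detector returns no boxes, fall back to the full-image branch alone. The function names below (colorize_full_image, colorize_with_fusion) and the 'bbox' key are purely illustrative, not the repository's API.

import numpy as np

bbox_info = np.load('example_bbox/some_image.npz')            # output of the bbox step; name illustrative
if len(bbox_info['bbox']) == 0:                               # no instances detected
    result = colorize_full_image(gray_image)                  # full-image network only
else:
    result = colorize_with_fusion(gray_image, bbox_info)      # instance + fusion branches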

Training settings

Hello, I've read your paper recently. In the paper, you mention that the training epochs are set to 2, 5, and 2, but in the source code they are set to 150. I want to know which is correct? Thank you!

Build error on Ubuntu 18.04.3 LTS, ERROR: detectron2==0.1.2

Hi, I learned about this project from a technology blog, and it is the best colorization project I have seen.

I tried to run it on my personal Ubuntu laptop, but I got this problem. I also tried using the Docker environment (https://hub.docker.com/r/continuumio/anaconda3) to execute conda env create --file env.yml --name instacolorization and got the same error. Can you give me some information about this?

['/opt/conda/envs/instacolorization/bin/python', '-m', 'pip', 'install', '-U', '-r', '/opt/InstColorization/condaenv.y2bkp0vm.requirements.txt']
Pip subprocess output:
Collecting absl-py==0.9.0
Downloading absl-py-0.9.0.tar.gz (104 kB)
Collecting cachetools==4.1.0
Downloading cachetools-4.1.0-py3-none-any.whl (10 kB)
Collecting chardet==3.0.4
Downloading chardet-3.0.4-py2.py3-none-any.whl (133 kB)

Pip subprocess error:
ERROR: Could not find a version that satisfies the requirement detectron2==0.1.2+cu101 (from -r /opt/InstColorization/condaenv.y2bkp0vm.requirements.txt (line 4)) (from versions: none)
ERROR: No matching distribution found for detectron2==0.1.2+cu101 (from -r /opt/InstColorization/condaenv.y2bkp0vm.requirements.txt (line 4))

CondaEnvException: Pip failed

Thanks.

Is there any way to make the result image size consistent with the input image size?

It's a great job, thank you! I have two questions to ask your advice on:
(1) My input images have different sizes, such as 419×320, 520×400, ..., but I found that the result images are all 512×512. Is there any way to make the result image size consistent with the input image size?
(2) I changed the parameter fineSize=512, for example:
python test_fusion.py --name test_fusion --sample_p 1.0 --model fusion --fineSize 512 --test_img_dir example --results_img_dir results
but the program errors with:
"RuntimeError: CUDA out of memory. Tried to allocate 64.00 MiB (GPU 0; 10.91 GiB total capacity; 9.73 GiB already allocated; 56.38 MiB free; 9.83 GiB reserved in total by PyTorch)"
My GPU is a 1080 Ti.

The current output image size seems to be too small. How do I fix this problem?
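A common post-processing workaround, offered here only as a hedged sketch (it is not something the released scripts do): keep the original-resolution L channel from the input and upsample only the predicted ab channels back to the input size before converting to RGB. The file names below are illustrative.

import cv2
import numpy as np
from skimage import color

orig = cv2.cvtColor(cv2.imread('example/input.jpg'), cv2.COLOR_BGR2RGB)   # original grayscale/color input
pred = cv2.cvtColor(cv2.imread('results/input.png'), cv2.COLOR_BGR2RGB)   # 512x512 network output

orig_lab = color.rgb2lab(orig)                                            # H x W x 3, original size
pred_lab = color.rgb2lab(pred)                                            # 512 x 512 x 3
ab_up = cv2.resize(pred_lab[:, :, 1:].astype(np.float32),
                   (orig.shape[1], orig.shape[0]),
                   interpolation=cv2.INTER_LINEAR)                        # upsample only the ab channels
out_lab = np.concatenate([orig_lab[:, :, :1], ab_up], axis=2)             # original L + upsampled ab
out_rgb = (color.lab2rgb(out_lab) * 255).astype(np.uint8)
cv2.imwrite('results/input_fullres.png', cv2.cvtColor(out_rgb, cv2.COLOR_RGB2BGR))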

Question- Color space

Hey @jbhuang0604,

Good job and thanks for sharing your code! I have a question about color space: why did you select the CIE Lab color space for predicting the missing color channels (what was the intention behind this decision, or what are the advantages of CIE Lab compared to other color spaces)?

Thanks!

Color normalization

Hello.

I have a question about the color conversion and normalization.

Can you tell me what the range of the Lab values is after the conversion, and why you normalize the 'L' value by subtracting 50 and dividing by 100?
Also, why do you normalize the 'ab' values by dividing them by 110?

Thank you in advance.
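For reference, the normalization described in the question (and used in the colorization-pytorch code this repository builds on) maps L, which ranges over [0, 100], to roughly [-0.5, 0.5], and ab, which roughly spans [-110, 110], to about [-1, 1]. A sketch with L and ab standing for the channel tensors:

L_norm  = (L - 50.0) / 100.0    # L in [0, 100]  ->  roughly [-0.5, 0.5]
ab_norm = ab / 110.0            # ab roughly in [-110, 110]  ->  roughly [-1, 1]

# And the inverse, applied before converting back to RGB:
L  = L_norm * 100.0 + 50.0
ab = ab_norm * 110.0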

PSNR and SSIM evalution question

Hello, could you release more details about how to calculate the PSNR and SSIM? I tested the RGB results from your website, which are resized to 256, with the PSNR and SSIM functions in skimage, but found that all the results of the existing works are lower than those listed in the paper.
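The paper's exact protocol is not given here, but a typical skimage-based evaluation looks like the hedged sketch below (file names are placeholders; resizing and color-space choices may explain part of the gap):

import cv2
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

gt = cv2.cvtColor(cv2.imread('gt.png'), cv2.COLOR_BGR2RGB)          # ground-truth color image
pred = cv2.cvtColor(cv2.imread('pred.png'), cv2.COLOR_BGR2RGB)      # colorized result
pred = cv2.resize(pred, (gt.shape[1], gt.shape[0]))                 # compare at the same resolution

psnr = peak_signal_noise_ratio(gt, pred, data_range=255)
# multichannel=True for scikit-image 0.16 (pinned in env.yml); newer versions use channel_axis=2
ssim = structural_similarity(gt, pred, multichannel=True, data_range=255)
print('PSNR: %.2f dB, SSIM: %.4f' % (psnr, ssim))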
