VinAIResearch / CPM

💄 Lipstick ain't enough: Beyond Color-Matching for In-the-Wild Makeup Transfer (CVPR 2021)

Home Page: https://thaoshibe.github.io/CPM

License: Apache License 2.0

Languages: Python 99.33%, Dockerfile 0.67%
Topics: makeup-transfer, gan, cyclegan, makeup, color, cvpr2021, segmentation-models, color-matching, face-manipulation, computer-vision

cpm's Issues

Unsatisfactory results when testing on custom images

Hello

I got some non-ideal results when testing the model on other images.

  1. As shown in the screenshot below, there are some distinct boundaries in the resulting image. Do you have any idea what causes this?

    [screenshot]

    When diving into the code, I found that you apply face detection before PRNet (see here). Because my input image has already been cropped, I commented out the face-detection code in the process() function (see below). However, the result turned out to be even worse, so it seems the face-detection step is necessary. Could you explain this in more detail? (A rough sketch of what I understand the detect-and-crop step to do is included after this list.)

    [screenshots]

  2. When testing on my device (a Tesla V100), it takes several seconds to process an image, which is quite slow. Is this normal? If so, why is it so time-consuming, given that the model does not seem very complex?
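(For context, here is a rough reconstruction of what I understand the detect-and-crop step to be doing — a sketch using dlib, not the repo's exact code; the margin value is my own assumption.)

import dlib
import numpy as np
from PIL import Image

detector = dlib.get_frontal_face_detector()

def crop_to_face(image, margin=0.1):
    # Detect faces (upsampling once) and crop around the largest one with a small margin.
    dets = detector(image, 1)
    if len(dets) == 0:
        return image  # no face found: fall back to the full image
    d = max(dets, key=lambda r: r.width() * r.height())
    h, w = image.shape[:2]
    mx, my = int(d.width() * margin), int(d.height() * margin)
    top, bottom = max(d.top() - my, 0), min(d.bottom() + my, h)
    left, right = max(d.left() - mx, 0), min(d.right() + mx, w)
    return image[top:bottom, left:right]

face = crop_to_face(np.array(Image.open("non-makeup.png").convert("RGB"), dtype=np.uint8))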

Looking forward to your reply.
Thanks.

Suggestion: loosen the dependency on albumentations

Hi, your project CPM pins "albumentations==0.5.2" in its dependencies. After analyzing the source code, we found that another version of albumentations, namely 0.5.1, is also suitable and does not affect your project. We therefore suggest loosening the dependency from "albumentations==0.5.2" to "albumentations>=0.5.1,<=0.5.2" to avoid possible conflicts when importing other packages, or for downstream projects that use CPM.
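Concretely, the change would only touch the albumentations line in the dependency specification (requirements/environment file):

# before
albumentations==0.5.2
# after (suggested)
albumentations>=0.5.1,<=0.5.2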

May I open a pull request to loosen the dependency on albumentations?

By the way, could you tell us whether this kind of dependency analysis could help make dependency maintenance easier during your development?



We also give our detailed analysis as follows for your reference:

Your project CPM directly uses 17 APIs from package albumentations.

albumentations.augmentations.transforms.MotionBlur.__init__, albumentations.augmentations.transforms.CLAHE.__init__, albumentations.imgaug.transforms.IAASharpen.__init__, albumentations.augmentations.transforms.RandomContrast.__init__, albumentations.imgaug.transforms.IAAPerspective.__init__, albumentations.augmentations.transforms.PadIfNeeded.__init__, albumentations.augmentations.transforms.Lambda.__init__, albumentations.core.composition.OneOf.__init__, albumentations.augmentations.transforms.RandomCrop.__init__, albumentations.core.composition.Compose.__init__, albumentations.augmentations.transforms.RandomBrightness.__init__, albumentations.imgaug.transforms.IAAAdditiveGaussianNoise.__init__, albumentations.augmentations.transforms.RandomGamma.__init__, albumentations.augmentations.transforms.HueSaturationValue.__init__, albumentations.augmentations.transforms.ShiftScaleRotate.__init__, albumentations.augmentations.transforms.Blur.__init__, albumentations.augmentations.transforms.HorizontalFlip.__init__

Starting from the 17 APIs above, 14 functions are then indirectly called, including 13 internal albumentations APIs and 1 API from another package. The call graph is listed below (omitting some repeated function occurrences).

[/VinAIResearch/CPM]
+--albumentations.augmentations.transforms.MotionBlur.__init__
|      +--albumentations.augmentations.transforms.Blur.__init__
|      |      +--albumentations.core.transforms_interface.BasicTransform.__init__
|      |      +--albumentations.core.transforms_interface.to_tuple
+--albumentations.augmentations.transforms.CLAHE.__init__
|      +--albumentations.core.transforms_interface.BasicTransform.__init__
|      +--albumentations.core.transforms_interface.to_tuple
+--albumentations.imgaug.transforms.IAASharpen.__init__
|      +--albumentations.core.transforms_interface.BasicTransform.__init__
|      +--albumentations.core.transforms_interface.to_tuple
+--albumentations.augmentations.transforms.RandomContrast.__init__
|      +--albumentations.augmentations.transforms.RandomBrightnessContrast.__init__
|      |      +--albumentations.core.transforms_interface.BasicTransform.__init__
|      |      +--albumentations.core.transforms_interface.to_tuple
|      +--warnings.warn
+--albumentations.imgaug.transforms.IAAPerspective.__init__
|      +--albumentations.core.transforms_interface.BasicTransform.__init__
|      +--albumentations.core.transforms_interface.to_tuple
+--albumentations.augmentations.transforms.PadIfNeeded.__init__
|      +--albumentations.core.transforms_interface.BasicTransform.__init__
+--albumentations.augmentations.transforms.Lambda.__init__
|      +--albumentations.core.transforms_interface.BasicTransform.__init__
|      +--warnings.warn
+--albumentations.core.composition.OneOf.__init__
|      +--albumentations.core.composition.BaseCompose.__init__
|      |      +--albumentations.core.composition.Transforms.__init__
|      |      |      +--albumentations.core.composition.Transforms._find_dual_start_end
|      |      |      |      +--albumentations.core.composition.Transforms._find_dual_start_end
+--albumentations.augmentations.transforms.RandomCrop.__init__
|      +--albumentations.core.transforms_interface.BasicTransform.__init__
+--albumentations.core.composition.Compose.__init__
|      +--albumentations.core.composition.BaseCompose.__init__
|      +--albumentations.augmentations.bbox_utils.BboxProcessor.__init__
|      |      +--albumentations.core.utils.DataProcessor.__init__
|      +--albumentations.core.composition.BboxParams.__init__
|      |      +--albumentations.core.utils.Params.__init__
|      +--albumentations.augmentations.keypoints_utils.KeypointsProcessor.__init__
|      |      +--albumentations.core.utils.DataProcessor.__init__
|      +--albumentations.core.composition.KeypointParams.__init__
|      |      +--albumentations.core.utils.Params.__init__
|      +--albumentations.core.composition.BaseCompose.add_targets
+--albumentations.augmentations.transforms.RandomBrightness.__init__
|      +--albumentations.augmentations.transforms.RandomBrightnessContrast.__init__
|      +--warnings.warn
+--albumentations.imgaug.transforms.IAAAdditiveGaussianNoise.__init__
|      +--albumentations.core.transforms_interface.BasicTransform.__init__
|      +--albumentations.core.transforms_interface.to_tuple
+--albumentations.augmentations.transforms.RandomGamma.__init__
|      +--albumentations.core.transforms_interface.BasicTransform.__init__
|      +--albumentations.core.transforms_interface.to_tuple
+--albumentations.augmentations.transforms.HueSaturationValue.__init__
|      +--albumentations.core.transforms_interface.BasicTransform.__init__
|      +--albumentations.core.transforms_interface.to_tuple
+--albumentations.augmentations.transforms.ShiftScaleRotate.__init__
|      +--albumentations.core.transforms_interface.BasicTransform.__init__
|      +--albumentations.core.transforms_interface.to_tuple
+--albumentations.augmentations.transforms.Blur.__init__
+--albumentations.augmentations.transforms.HorizontalFlip.__init__
|      +--albumentations.core.transforms_interface.BasicTransform.__init__

We scanned albumentations' releases and observed that, between versions 0.5.1 and 0.5.2, the changed functions (diff listed below) have no intersection with any function or API mentioned above (whether directly or indirectly called by this project).

diff: 0.5.2(original) 0.5.1
['albumentations.augmentations.transforms.MedianBlur', 'albumentations.augmentations.transforms.CropNonEmptyMaskIfExists.targets_as_params', 'albumentations.augmentations.transforms.GaussianBlur', 'albumentations.augmentations.transforms.CropNonEmptyMaskIfExists.update_params', 'albumentations.pytorch.transforms.ToTensorV2', 'albumentations.pytorch.transforms.ToTensorV2.apply', 'albumentations.augmentations.transforms.CropNonEmptyMaskIfExists.get_params_dependent_on_targets', 'albumentations.augmentations.transforms.CropNonEmptyMaskIfExists', 'albumentations.augmentations.transforms.CropNonEmptyMaskIfExists._preprocess_mask']

As for other packages: the warnings APIs that albumentations calls in the call graph, and the dependencies on those packages, also stay the same across our suggested versions, so no external conflict is introduced.

Therefore, we believe it is quite safe to loosen the dependency on albumentations from "albumentations==0.5.2" to "albumentations>=0.5.1,<=0.5.2". This will improve the applicability of CPM and reduce the possibility of dependency conflicts with other projects.

a problem in colab

When I run it in Colab, I face the problem shown in the screenshot below.
[screenshot]
Can you tell me how to solve it?

AttributeError: module 'segmentation_models_pytorch' has no attribute 'utils'

After following the install instructions and running the main.py script as instructed, both locally and in Colab:

os.chdir(path)
!python -W ignore main.py --style ./imgs/style-1.png --input ./imgs/non-makeup.png

I get the following error:

Traceback (most recent call last):
  File "main.py", line 28, in <module>
    model = Makeup(args)
  File "/content/gdrive/.shortcut-targets-by-id/1rZyAvaAtqZ9a0okVcv4OFaq9aiVvKg5Q/CPM/makeup.py", line 18, in __init__
    self.pattern = Segmentor(args)
  File "/content/gdrive/.shortcut-targets-by-id/1rZyAvaAtqZ9a0okVcv4OFaq9aiVvKg5Q/CPM/utils/models.py", line 15, in __init__
    self.loss = smp.utils.losses.DiceLoss()
AttributeError: module 'segmentation_models_pytorch' has no attribute 'utils'
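A possible workaround, under the assumption that a newer segmentation_models_pytorch release (which dropped the utils submodule) got installed: either pin an older release, e.g. pip install "segmentation-models-pytorch==0.1.3", or switch to the loss class that newer releases do ship (the mode value below is an assumption about how the Segmentor is trained):

from segmentation_models_pytorch.losses import DiceLoss

# Rough replacement for smp.utils.losses.DiceLoss() on newer smp releases
loss = DiceLoss(mode="binary")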

About the MT dataset

The dataset link shared before seems to be broken.
Would you please share it again?
Thanks a lot for your time!

Requirements

Could you create a requirements.txt file and add it to the repository?
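Until then, here is a partial sketch assembled only from packages that appear elsewhere in these issues (every version except albumentations is left unpinned, because the exact versions used are unknown to me):

albumentations==0.5.2
segmentation-models-pytorch
torch
torchvision
dlib
numpy
Pillow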

higher resolution

Thanks for sharing.

Is it possible to run this method on higher-resolution inputs, or is 256 the only supported resolution?

Read-only CPM folder preventing me from running Colab example - Read-only file system: './result.png'

Hello,
As described in your Colab instructions, I added a shortcut to CPM-Shared-Folder in my Drive. However, I don't have permission to write to your Drive, so I see the following error when running your commands:

Traceback (most recent call last):
  File "main.py", line 54, in <module>
    Image.fromarray((output).astype("uint8")).save(save_path)
  File "/usr/local/lib/python3.7/dist-packages/PIL/Image.py", line 2131, in save
    fp = builtins.open(filename, "w+b")
OSError: [Errno 30] Read-only file system: './result.png'

Suggested solution: point savedir to another directory:

# Pattern + Color: Image will be saved in 'result.png'
os.chdir(path)
!python -W ignore main.py --style ./imgs/style-1.png --input ./imgs/non-makeup.png --savedir=/content/

You should also update the other commands in the same way; see the examples below.
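For example, the color-only and pattern-only cells would become (the --color_only / --pattern_only flags are taken from the argument list that main.py prints, so please double-check them against your checkout):

# Color only
!python -W ignore main.py --style ./imgs/style-1.png --input ./imgs/non-makeup.png --color_only --savedir=/content/

# Pattern only
!python -W ignore main.py --style ./imgs/style-1.png --input ./imgs/non-makeup.png --pattern_only --savedir=/content/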

image-invariant region mask

Thank you for sharing your work!
In Sec. 3.2 of your paper, I found the following description:
"This region mask is image-invariant and equals to a universal mask"
Does this mean that all images share the same region mask? If so, could you share that mask? I am trying to reproduce the makeup transfer; could you show how to use the region mask to achieve it? Thanks a lot.

RuntimeError: cuDNN error: CUDNN_STATUS_EXECUTION_FAILED

Hello,
I am trying to reimplement the pattern branch and ran into this issue.
There is also a warning saying: "GeForce RTX 3070 with CUDA capability sm_86 is not compatible with the current PyTorch installation. The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_61 sm_70 sm_75 compute_37."
I am using Ubuntu 21.04 with CUDA 11.2.
I think the error is due to an incompatibility between my RTX 3070 and PyTorch 1.6, and that I need a combination of PyTorch 1.8 and torchvision 0.9 to get it to work. Do you have an alternative environment for this? Thank you!
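For reference, the environment I have in mind (my own guess, not an official recommendation; the cu111 wheels target CUDA 11.1 but run fine on an 11.2 driver):

pip install torch==1.8.0+cu111 torchvision==0.9.0+cu111 -f https://download.pytorch.org/whl/torch_stable.html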

Setup throwing error

I was trying to set it up on my local system, but conda throws an error:

Solving environment: failed

ResolvePackageNotFound:
  - libstdcxx-ng=9.3.0
  - gmp=6.2.1
  - openh264=2.1.1
  - _openmp_mutex=4.5
  - jasper=1.900.1
  - readline=8.1
  - graphite2=1.3.14
  - libuuid=1.0.3
  - libxcb=1.14
  - ncurses=6.2
  - libxkbcommon=1.0.3
  - libedit=3.1.20210216
  - ld_impl_linux-64=2.33.1
  - libnghttp2=1.43.0
  - nspr=4.30
  - lame=3.100
  - cupti=10.1.168
  - nss=3.63
  - libev=4.33
  - libgcc-ng=9.3.0
  - gnutls=3.6.13
  - dbus=1.13.18
  - nettle=3.6
  - libgfortran-ng=7.3.0
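A guess at the cause: these look like Linux-only, build-pinned packages from an environment.yml exported on Linux, so conda on another OS cannot resolve them. One workaround I would try, with an assumed Python version, is to create a fresh environment and install the Python-level dependencies by hand:

conda create -n cpm python=3.7
conda activate cpm
# then pip-install the Python packages listed in environment.yml by hand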

Also, what would be the ideal architecture for deploying this model in the cloud?

RuntimeError: Unsupported image type, must be 8bit gray or RGB image.

Traceback (most recent call last):
  File "main.py", line 34, in <module>
    model.prn_process(imgA)
  File "/data1/CPM/makeup.py", line 57, in prn_process
    self.pos = self.prn.process(self.face)
  File "/data1/CPM/utils/api.py", line 107, in process
    detected_faces = self.dlib_detect(image)
  File "/data1/CPM/utils/api.py", line 56, in dlib_detect
    return self.face_detector(image, 1)
RuntimeError: Unsupported image type, must be 8bit gray or RGB image.

[screenshots]
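My guess is that dlib's detector only accepts 8-bit grayscale or RGB arrays, so an RGBA or non-uint8 input triggers this error. Converting the image before handing it to the model avoids it (the path below is a placeholder):

import numpy as np
from PIL import Image

img = Image.open("input.png").convert("RGB")  # drop any alpha channel
imgA = np.array(img, dtype=np.uint8)          # dlib expects an 8-bit HxWx3 array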

No module named 'segmentation_models_pytorch.unet'

I ran the test and got this issue:

⊱ ──────ஓ๑♡๑ஓ ────── ⊰
🎵 hhey, arguments are here if you need to check 🎵
checkpoint_pattern: ./checkpoints/pattern.pth
checkpoint_color: ./checkpoints/color.pth
device: cuda
prn: True
color_only: True
pattern_only: False
input: ./imgs/non-makeup.png
style: ./imgs/style-1.png
alpha: 0.5
savedir:

Traceback (most recent call last):
  File "main.py", line 27, in <module>
    model = Makeup(args)
  File "D:\AI\MakeUp\makeup.py", line 19, in __init__
    self.pattern.test_model(args.checkpoint_pattern)
  File "D:\AI\MakeUp\utils\models.py", line 46, in test_model
    torch.load(path),
  File "C:\Users\This PC\miniconda3\envs\makeup\lib\site-packages\torch\serialization.py", line 795, in load
    return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
  File "C:\Users\This PC\miniconda3\envs\makeup\lib\site-packages\torch\serialization.py", line 1012, in _legacy_load
    result = unpickler.load()
  File "C:\Users\This PC\miniconda3\envs\makeup\lib\site-packages\torch\serialization.py", line 828, in find_class
    return super().find_class(mod_name, name)
ModuleNotFoundError: No module named 'segmentation_models_pytorch.unet'
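A hedged guess at a workaround: the checkpoint appears to have been saved with torch.save on the whole model, so unpickling it needs a segmentation_models_pytorch version whose package still has a top-level unet submodule. Under that assumption, pinning an older release (the exact version below is a guess) before re-running should let the pickled module name resolve again:

pip install "segmentation-models-pytorch==0.1.3"
python -W ignore main.py --style ./imgs/style-1.png --input ./imgs/non-makeup.png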

Color Training

When I try to train the color branch, an error occurs. Could you give me a suggestion when you have time?
The error message is as follows:
Traceback (most recent call last):
  File "create_beautygan_uv.py", line 51, in <module>
    uv_texture, uv_seg = generator.get_texture(image, seg)
  File "/data/run01/scv0004/CPM-main/Color/texture_generator.py", line 23, in get_texture
    pos[:, :, :2].astype(np.float32),
TypeError: 'NoneType' object is not subscriptable
[screenshot]
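In case it helps, my guess is that pos comes back as None when no face is detected in that training image. A guard like the sketch below (the prn.process call is hypothetical and only mirrors the traceback) would let create_beautygan_uv.py skip such images instead of crashing:

pos = self.prn.process(image)  # hypothetical: whatever produces `pos` inside get_texture
if pos is None:
    print("skip: no face detected in this image")
else:
    uv_coords = pos[:, :, :2].astype(np.float32)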

training code

Thanks for your great work!
Will you release the training code?

The performance of the code cannot match the pictures in the README

Hi, Thanks for sharing the excellent work!

However, after successfully running the code, the results are visually worse than the demo picture in the README. Could the poor results be related to my code environment, or are they expected?

Demo picture in README:

[image]

My results (color + pattern):

[image]

My results (color only):
[image]

My results (pattern only):
[image]

Training on new dataset

Hi, I really appreciate the work you have done, and I want to train your color module on a new dataset.
In CPM/Color/Readme.md, it says that to re-train the model on a new dataset, one should follow the instructions for BeautyGAN. I am wondering whether this means I should train the BeautyGAN network on the new dataset and use its checkpoints as the new color-module checkpoints, or whether I can simply train the color module within the CPM network?
