VinAIResearch / CPM
💄 Lipstick ain't enough: Beyond Color-Matching for In-the-Wild Makeup Transfer (CVPR 2021)
Home Page: https://thaoshibe.github.io/CPM
License: Apache License 2.0
Hi, I'm curious about the partial makeup transfer. Could you please tell me how to realize it?
Hi, there are no UV-map texture results after I ran this code. It only produces an empty folder named save_folder.
What could be the problem?
Hello
I got some non-ideal results when testing the model on other images.
As shown in the following image, there are some distinct boundaries on the resulting image. Any idea on this?
When diving into the code, I found you apply Face Detection before PRNet (see here). Because my input image is already cropped, I commented out the code related to Face Detection in the process() function (see below). However, the result turned out to be worse, so it seems this Face Detection step is necessary. Could you provide more explanation on this?
When testing on my device (a Tesla V100), it takes several seconds to process an image, which is really time-consuming. Is this normal? If so, why is it so slow (the model does not seem very complex)?
Looking forward to your reply.
Thanks.
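Regarding the cropping question above: face detectors typically return a tight box, and pipelines like this one usually enlarge it by a margin before cropping so the whole face (forehead, chin) is visible to PRNet; feeding an already tightly cropped image skips that alignment and can degrade results. A minimal sketch of margin-based cropping (the margin factor and box format are illustrative assumptions, not the repo's exact values):

```python
import numpy as np

def crop_with_margin(image, box, margin=0.1):
    """Crop a face region enlarged by `margin` on each side.

    `box` is (left, top, right, bottom) in pixels. The margin factor here
    is a hypothetical value, not necessarily what CPM/PRNet uses.
    """
    h, w = image.shape[:2]
    left, top, right, bottom = box
    dx = int((right - left) * margin)
    dy = int((bottom - top) * margin)
    l, t = max(0, left - dx), max(0, top - dy)
    r, b = min(w, right + dx), min(h, bottom + dy)
    return image[t:b, l:r]
```

If the input is pre-cropped too tightly, no margin can be recovered, which may explain why removing the detection step made things worse.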
Hi, your project CPM requires "albumentations==0.5.2" in its dependencies. After analyzing the source code, we found that another version of albumentations is also suitable without affecting your project, namely albumentations 0.5.1. We therefore suggest loosening the dependency on albumentations from "albumentations==0.5.2" to "albumentations>=0.5.1,<=0.5.2" to avoid possible conflicts when importing more packages or in downstream projects that use CPM.
May I open a pull request to loosen the dependency on albumentations?
By the way, could you please tell us whether such dependency analysis could be helpful for making dependency maintenance easier during your development?
We also give our detailed analysis as follows for your reference:
Your project CPM directly uses 17 APIs from package albumentations.
albumentations.augmentations.transforms.MotionBlur.__init__, albumentations.augmentations.transforms.CLAHE.__init__, albumentations.imgaug.transforms.IAASharpen.__init__, albumentations.augmentations.transforms.RandomContrast.__init__, albumentations.imgaug.transforms.IAAPerspective.__init__, albumentations.augmentations.transforms.PadIfNeeded.__init__, albumentations.augmentations.transforms.Lambda.__init__, albumentations.core.composition.OneOf.__init__, albumentations.augmentations.transforms.RandomCrop.__init__, albumentations.core.composition.Compose.__init__, albumentations.augmentations.transforms.RandomBrightness.__init__, albumentations.imgaug.transforms.IAAAdditiveGaussianNoise.__init__, albumentations.augmentations.transforms.RandomGamma.__init__, albumentations.augmentations.transforms.HueSaturationValue.__init__, albumentations.augmentations.transforms.ShiftScaleRotate.__init__, albumentations.augmentations.transforms.Blur.__init__, albumentations.augmentations.transforms.HorizontalFlip.__init__
Beginning from the 17 APIs above, 14 functions are then indirectly called, including 13 of albumentations's internal APIs and 1 outside API. The specific call graph is listed as follows (omitting some repeated function occurrences).
[/VinAIResearch/CPM]
+--albumentations.augmentations.transforms.MotionBlur.__init__
| +--albumentations.augmentations.transforms.Blur.__init__
| | +--albumentations.core.transforms_interface.BasicTransform.__init__
| | +--albumentations.core.transforms_interface.to_tuple
+--albumentations.augmentations.transforms.CLAHE.__init__
| +--albumentations.core.transforms_interface.BasicTransform.__init__
| +--albumentations.core.transforms_interface.to_tuple
+--albumentations.imgaug.transforms.IAASharpen.__init__
| +--albumentations.core.transforms_interface.BasicTransform.__init__
| +--albumentations.core.transforms_interface.to_tuple
+--albumentations.augmentations.transforms.RandomContrast.__init__
| +--albumentations.augmentations.transforms.RandomBrightnessContrast.__init__
| | +--albumentations.core.transforms_interface.BasicTransform.__init__
| | +--albumentations.core.transforms_interface.to_tuple
| +--warnings.warn
+--albumentations.imgaug.transforms.IAAPerspective.__init__
| +--albumentations.core.transforms_interface.BasicTransform.__init__
| +--albumentations.core.transforms_interface.to_tuple
+--albumentations.augmentations.transforms.PadIfNeeded.__init__
| +--albumentations.core.transforms_interface.BasicTransform.__init__
+--albumentations.augmentations.transforms.Lambda.__init__
| +--albumentations.core.transforms_interface.BasicTransform.__init__
| +--warnings.warn
+--albumentations.core.composition.OneOf.__init__
| +--albumentations.core.composition.BaseCompose.__init__
| | +--albumentations.core.composition.Transforms.__init__
| | | +--albumentations.core.composition.Transforms._find_dual_start_end
| | | | +--albumentations.core.composition.Transforms._find_dual_start_end
+--albumentations.augmentations.transforms.RandomCrop.__init__
| +--albumentations.core.transforms_interface.BasicTransform.__init__
+--albumentations.core.composition.Compose.__init__
| +--albumentations.core.composition.BaseCompose.__init__
| +--albumentations.augmentations.bbox_utils.BboxProcessor.__init__
| | +--albumentations.core.utils.DataProcessor.__init__
| +--albumentations.core.composition.BboxParams.__init__
| | +--albumentations.core.utils.Params.__init__
| +--albumentations.augmentations.keypoints_utils.KeypointsProcessor.__init__
| | +--albumentations.core.utils.DataProcessor.__init__
| +--albumentations.core.composition.KeypointParams.__init__
| | +--albumentations.core.utils.Params.__init__
| +--albumentations.core.composition.BaseCompose.add_targets
+--albumentations.augmentations.transforms.RandomBrightness.__init__
| +--albumentations.augmentations.transforms.RandomBrightnessContrast.__init__
| +--warnings.warn
+--albumentations.imgaug.transforms.IAAAdditiveGaussianNoise.__init__
| +--albumentations.core.transforms_interface.BasicTransform.__init__
| +--albumentations.core.transforms_interface.to_tuple
+--albumentations.augmentations.transforms.RandomGamma.__init__
| +--albumentations.core.transforms_interface.BasicTransform.__init__
| +--albumentations.core.transforms_interface.to_tuple
+--albumentations.augmentations.transforms.HueSaturationValue.__init__
| +--albumentations.core.transforms_interface.BasicTransform.__init__
| +--albumentations.core.transforms_interface.to_tuple
+--albumentations.augmentations.transforms.ShiftScaleRotate.__init__
| +--albumentations.core.transforms_interface.BasicTransform.__init__
| +--albumentations.core.transforms_interface.to_tuple
+--albumentations.augmentations.transforms.Blur.__init__
+--albumentations.augmentations.transforms.HorizontalFlip.__init__
| +--albumentations.core.transforms_interface.BasicTransform.__init__
We scanned albumentations's versions and observed that, between version 0.5.1 and 0.5.2, the changed functions (diffs listed below) have no intersection with any function or API mentioned above (whether directly or indirectly called by this project).
diff: 0.5.2(original) 0.5.1
['albumentations.augmentations.transforms.MedianBlur', 'albumentations.augmentations.transforms.CropNonEmptyMaskIfExists.targets_as_params', 'albumentations.augmentations.transforms.GaussianBlur', 'albumentations.augmentations.transforms.CropNonEmptyMaskIfExists.update_params', 'albumentations.pytorch.transforms.ToTensorV2', 'albumentations.pytorch.transforms.ToTensorV2.apply', 'albumentations.augmentations.transforms.CropNonEmptyMaskIfExists.get_params_dependent_on_targets', 'albumentations.augmentations.transforms.CropNonEmptyMaskIfExists', 'albumentations.augmentations.transforms.CropNonEmptyMaskIfExists._preprocess_mask']
As for other packages, the warnings APIs called by albumentations in the call graph, and the dependencies on those packages, also stay the same in our suggested versions, thus avoiding any outside conflict.
Therefore, we believe it is quite safe to loosen your dependency on albumentations from "albumentations==0.5.2" to "albumentations>=0.5.1,<=0.5.2". This will improve the applicability of CPM and reduce the possibility of further dependency conflicts with other projects.
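As a quick sanity check, the suggested range can be expressed as a simple version comparison (plain dotted-integer tuples here; real resolvers follow PEP 440 semantics, so treat this as a sketch):

```python
def in_suggested_range(version, low="0.5.1", high="0.5.2"):
    """Return True if `version` falls inside the suggested albumentations
    range. Simple dotted-integer comparison, not a full PEP 440 parser."""
    key = lambda v: tuple(int(part) for part in v.split("."))
    return key(low) <= key(version) <= key(high)
```

Under this check, 0.5.1 and 0.5.2 are accepted while 0.5.0 and anything newer are rejected, which matches the proposed `>=0.5.1,<=0.5.2` specifier.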
No module named 'blend_modes'
After following the install instructions and running the main.py script as instructed both locally and in colab...
os.chdir(path)
!python -W ignore main.py --style ./imgs/style-1.png --input ./imgs/non-makeup.png
I get the following error:
Traceback (most recent call last):
  File "main.py", line 28, in <module>
    model = Makeup(args)
  File "/content/gdrive/.shortcut-targets-by-id/1rZyAvaAtqZ9a0okVcv4OFaq9aiVvKg5Q/CPM/makeup.py", line 18, in __init__
    self.pattern = Segmentor(args)
  File "/content/gdrive/.shortcut-targets-by-id/1rZyAvaAtqZ9a0okVcv4OFaq9aiVvKg5Q/CPM/utils/models.py", line 15, in __init__
    self.loss = smp.utils.losses.DiceLoss()
AttributeError: module 'segmentation_models_pytorch' has no attribute 'utils'
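An AttributeError like the one above is typical when a package only exposes a subpackage as an attribute after that subpackage has been imported; in newer segmentation-models-pytorch releases `smp.utils` may also have moved or been removed entirely, in which case pinning the package version the repo was developed against is the safer fix. The standard-library `xml` package shows the same attribute-after-import behavior:

```python
import importlib
import xml

# A subpackage is not necessarily an attribute of its parent package until
# it has been imported somewhere; an explicit import registers it.
importlib.import_module("xml.etree.ElementTree")
print(hasattr(xml, "etree"))  # True after the explicit import
```

So, as a first step, trying `import segmentation_models_pytorch.utils` explicitly (if that submodule still exists in the installed version) is worth a shot before changing versions; this is general Python import behavior, not something documented by the repo.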
The dataset link posted before seems to be broken.
Would you please share it again?
Thanks a lot for your time!
Can you create a requirements.txt file and add it to the repository?
Thanks for sharing.
Is it possible to run this method on higher-resolution input, or on 256×256 only?
First of all, thanks for your good work.
I think all the links to the MT (Makeup Transfer) dataset on the web are broken (http://liusi-group.com/projects/BeautyGAN).
If that is no problem for you, could you share the MT dataset?
Thank you.
Hello,
As in your Colab instruction, I added a shortcut of CPM-Shared-Folder to my Drive. However, I don't have the permission to write to your Drive, so I see the following error when running your instruction:
Traceback (most recent call last):
File "main.py", line 54, in <module>
Image.fromarray((output).astype("uint8")).save(save_path)
File "/usr/local/lib/python3.7/dist-packages/PIL/Image.py", line 2131, in save
fp = builtins.open(filename, "w+b")
OSError: [Errno 30] Read-only file system: './result.png'
Suggested solution: point savedir to another directory:
# Pattern + Color: Image will be saved in 'result.png'
os.chdir(path)
!python -W ignore main.py --style ./imgs/style-1.png --input ./imgs/non-makeup.png --savedir=/content/
You should also update other commands.
Thank you for sharing your work!
In Sec. 3.2 of your paper, I found the following description:
"This region mask is image-invariant and equals to a universal mask"
Does it mean all images share the same region mask? If so, can you share the region mask? I am trying to reproduce the makeup transfer; can you show how to use the region mask to realize it? Thanks a lot.
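For what it's worth, an image-invariant region mask would typically be applied as one fixed binary (or soft) mask shared by every image, blending the transferred texture only inside the masked region. A hedged numpy sketch of such a blend (array names and the formula are illustrative assumptions, not the paper's exact pipeline):

```python
import numpy as np

def blend_with_region_mask(source, transferred, mask):
    """Blend `transferred` into `source` only where `mask` is set.

    `mask` is an HxW array in [0, 1], assumed to be the same "universal"
    mask for every image; it broadcasts across the color channels.
    """
    m = mask[..., None].astype(np.float32)  # HxW -> HxWx1
    return (m * transferred + (1.0 - m) * source).astype(source.dtype)
```

With a binary mask this simply copies transferred pixels inside the region and keeps the source pixels elsewhere; a soft (feathered) mask would smooth the boundary.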
Hello,
I am trying to reimplement the pattern branch and have run into this issue.
There is also a warning saying that "GeForce RTX 3070 with CUDA capability sm_86 is not compatible with the current PyTorch installation. The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_61 sm_70 sm_75 compute_37."
I am using Ubuntu 21.04 and CUDA of version 11.2.
I think the error is due to an incompatibility between my RTX 3070 and PyTorch 1.6. I think I need a combination of pytorch=1.8 and torchvision=0.9 to get it to work. Do you have an alternative environment for this? Thank you!
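For anyone else hitting this: Ampere cards (sm_86) need PyTorch wheels built against CUDA 11.x, and the versions mentioned above line up with the CUDA 11.1 builds. A config sketch matching that combination (verify the exact wheel tags against the PyTorch previous-versions install matrix for your platform):

```shell
# Hypothetical alternative environment for RTX 30-series GPUs:
# PyTorch 1.8 / torchvision 0.9 built against CUDA 11.1.
pip install torch==1.8.0+cu111 torchvision==0.9.0+cu111 \
    -f https://download.pytorch.org/whl/torch_stable.html
```

Whether the repo's checkpoints load cleanly under 1.8 is untested here; this only addresses the sm_86 warning.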
I was trying to set it up on a local system, but conda throws an error:
Solving environment: failed
ResolvePackageNotFound:
- libstdcxx-ng=9.3.0
- gmp=6.2.1
- openh264=2.1.1
- _openmp_mutex=4.5
- jasper=1.900.1
- readline=8.1
- graphite2=1.3.14
- libuuid=1.0.3
- libxcb=1.14
- ncurses=6.2
- libxkbcommon=1.0.3
- libedit=3.1.20210216
- ld_impl_linux-64=2.33.1
- libnghttp2=1.43.0
- nspr=4.30
- lame=3.100
- cupti=10.1.168
- nss=3.63
- libev=4.33
- libgcc-ng=9.3.0
- gnutls=3.6.13
- dbus=1.13.18
- nettle=3.6
- libgfortran-ng=7.3.0
Also, what would be the ideal architecture for deploying this model on the cloud?
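Those ResolvePackageNotFound entries (libstdcxx-ng, libgcc-ng, _openmp_mutex, and so on) are Linux-only builds, so an environment file exported on Linux will not solve on Windows or macOS. A common workaround (generic conda practice, not something this repo documents) is to re-export without build strings and drop the platform-specific packages before recreating the environment:

```shell
# On the machine where the environment works:
conda env export --no-builds > environment.yml
# Manually remove Linux-only entries (libstdcxx-ng, libgcc-ng,
# _openmp_mutex, ...) from environment.yml, then elsewhere:
conda env create -f environment.yml
```

Remaining resolution failures after that usually point to packages that simply have no build for the target OS.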
Thank you very much for your great work!
Can you publish the data preprocessing code?
Is there no code to extract the mask, or did I miss it?
Traceback (most recent call last):
  File "main.py", line 34, in <module>
    model.prn_process(imgA)
  File "/data1/CPM/makeup.py", line 57, in prn_process
    self.pos = self.prn.process(self.face)
  File "/data1/CPM/utils/api.py", line 107, in process
    detected_faces = self.dlib_detect(image)
  File "/data1/CPM/utils/api.py", line 56, in dlib_detect
    return self.face_detector(image, 1)
RuntimeError: Unsupported image type, must be 8bit gray or RGB image.
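dlib's detector only accepts 8-bit grayscale or RGB arrays, so this error usually means the input image was float-typed or carried an alpha channel. A minimal coercion sketch (the assumptions that float inputs are scaled to [0, 1] and that a fourth channel is alpha are mine, not the repo's):

```python
import numpy as np

def to_dlib_rgb(img):
    """Coerce an image array into the 8-bit RGB layout dlib accepts.

    Assumes float inputs are in [0, 1]; drops an alpha channel if present.
    A sketch, not the repo's own preprocessing.
    """
    arr = np.asarray(img)
    if arr.dtype != np.uint8:
        arr = (np.clip(arr, 0.0, 1.0) * 255).astype(np.uint8)
    if arr.ndim == 3 and arr.shape[2] == 4:
        arr = arr[:, :, :3]  # drop alpha
    return np.ascontiguousarray(arr)
```

Passing the image through a conversion like this before `self.face_detector(image, 1)` should satisfy the "8bit gray or RGB" requirement.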
Dear Author:
I followed your running steps, but it raises an error: "ValueError: The passed save_path is not a valid checkpoint: ./PRNet/net-data/256_256_resfcn256_weight"
color.pth, pattern.pth, and 256_256_resfcn256_weight.data-00000-of-00001 are in the correct folders.
Can you give me some advice?
Thank you!
I ran the test and got this issue:
⊱ ──────ஓ๑♡๑ஓ ────── ⊰
🎵 hhey, arguments are here if you need to check 🎵
checkpoint_pattern: ./checkpoints/pattern.pth
checkpoint_color: ./checkpoints/color.pth
device: cuda
prn: True
color_only: True
pattern_only: False
input: ./imgs/non-makeup.png
style: ./imgs/style-1.png
alpha: 0.5
savedir:
Traceback (most recent call last):
  File "main.py", line 27, in <module>
    model = Makeup(args)
  File "D:\AI\MakeUp\makeup.py", line 19, in __init__
    self.pattern.test_model(args.checkpoint_pattern)
  File "D:\AI\MakeUp\utils\models.py", line 46, in test_model
    torch.load(path),
  File "C:\Users\This PC\miniconda3\envs\makeup\lib\site-packages\torch\serialization.py", line 795, in load
    return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
  File "C:\Users\This PC\miniconda3\envs\makeup\lib\site-packages\torch\serialization.py", line 1012, in _legacy_load
    result = unpickler.load()
  File "C:\Users\This PC\miniconda3\envs\makeup\lib\site-packages\torch\serialization.py", line 828, in find_class
    return super().find_class(mod_name, name)
ModuleNotFoundError: No module named 'segmentation_models_pytorch.unet'
solved it~~
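For anyone hitting the same ModuleNotFoundError: the checkpoint was pickled against an older segmentation-models-pytorch layout, so `torch.load` looks up a module path that no longer exists in the installed version. The usual fixes are pinning the older package version, or aliasing the old module path in `sys.modules` before loading. A self-contained sketch of the aliasing technique (module names here are stand-ins, not the real smp layout):

```python
import pickle
import sys
import types

# Simulate a class whose module was later renamed: the pickle stores the
# old dotted path, so unpickling fails once that path disappears.
old_pkg = types.ModuleType("old_pkg")
Net = type("Net", (), {"__module__": "old_pkg"})  # pretend Net lives in old_pkg
old_pkg.Net = Net
sys.modules["old_pkg"] = old_pkg
blob = pickle.dumps(Net())

del sys.modules["old_pkg"]  # the package layout changed in a newer release

# Workaround: alias the old module path to wherever the class lives now;
# the pickled reference then resolves again.
sys.modules["old_pkg"] = old_pkg
obj = pickle.loads(blob)
```

In this case the alias would map `segmentation_models_pytorch.unet` to the module that currently defines the Unet class in the installed release; pinning the repo's original smp version avoids the hack entirely.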
Thank you for releasing your nice work.
It seems the link to the MT-Dataset is not available. Could you please share your downloaded one or any accessible link?
When I try to train the Color Branch, an error occurs. Could you give me a suggestion when you have time?
The error message is as follows:
Traceback (most recent call last):
  File "create_beautygan_uv.py", line 51, in <module>
    uv_texture, uv_seg = generator.get_texture(image, seg)
  File "/data/run01/scv0004/CPM-main/Color/texture_generator.py", line 23, in get_texture
    pos[:, :, :2].astype(np.float32),
TypeError: 'NoneType' object is not subscriptable
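A TypeError like this usually means PRNet's `process()` returned `None` because no face was detected in that training image, so the downstream indexing fails. A defensive guard along these lines may help skip such samples (function and variable names here are illustrative, not the repo's exact code):

```python
import numpy as np

def get_uv_coords(pos):
    """Return the UV position map's xy channels, or None when the face
    detector found nothing (in which case `pos` is None)."""
    if pos is None:
        return None  # caller should skip this sample instead of crashing
    return pos[:, :, :2].astype(np.float32)
```

Logging the filenames for which `None` is returned would identify the images the detector fails on; those can then be excluded or re-cropped.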
Thanks for your great work!
Will you release the training code ??
Hi, Thanks for sharing the excellent work!
However, after successfully running the code, the results are visually worse than the demo picture in the README. Are the bad results related to the code environment, or are they expected?
Demo picture in README:
Results (color + pattern) after I ran it:
Hi, I really appreciate the work you have done and I want to train your color module on new dataset.
In CPM/Color/Readme.md, it says that to re-train the model on a new dataset one should follow the instructions for BeautyGAN. I am wondering whether this means I should use the BeautyGAN network to train on the new dataset and use its checkpoints as the new color-module checkpoints, or whether I can simply train the color module within this CPM network?