kex0 / batch-face-swap
Automatically detects faces and replaces them
Here's an example of broken output, illustrating what's going wrong with the algorithm:
This happens because of the alpha compositing strategy: the script cuts holes in the original image where all the faces are, then puts the generated tiles underneath so you can "see through" the holes to them. If the rectangular tile for one face is too large, it can extend far enough to be visible through the hole for another face.
In this particular case it happens because the image is small, so the padding is large relative to the faces. It can also happen when square tiles are used with faces that are taller than they are wide, which adds extra horizontal padding.
Although it is possible to tweak the settings on individual images to avoid this, this is a batch processor, so it would be better if it just got it right automatically. That's straightforward to do: simply alpha composite each generated face with its own mask, rather than cutting all the masks out of a single image.
I implemented a basic version of this myself, but it didn't interact properly with batching, so it's probably better if you write it yourself.
Here's the basic compositing step that I used to replace 'apply_overlay', where paste_loc now contains the per-face mask as well as the old values:
def apply_masked_face(face, paste_loc, final_image):
    # paste_loc now carries the per-face mask alongside the old values
    x, y, w, h, mask = paste_loc
    # Paste the resized face onto a transparent canvas the size of the final image
    base_image = Image.new('RGBA', (final_image.width, final_image.height))
    face = images.resize_image(1, face, w, h)
    base_image.paste(face, (x, y))
    face = base_image
    # Limit the face's alpha to its own mask so it can't show through another face's hole
    new_mask = ImageChops.multiply(face.getchannel("A"), mask)
    face.putalpha(new_mask)
    final_image = Image.alpha_composite(final_image, face)
    return final_image
The multiply probably isn't necessary; you could just do face.putalpha(mask) instead. I also didn't know what case paste_loc=None handles, so I didn't handle it.
To do this properly, you'll have to dilate each face mask independently, rather than dilating after merging. You'll still want to build the combined-mask "overlay" image for SD inpainting to work on, so that if another face is visible in the padding region, the inpainting doesn't try to converge with it.
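A per-face dilation step could be sketched like this (a hypothetical helper, not the extension's actual code; pure-PIL `MaxFilter` dilation and the padding amount are my assumptions):

```python
from PIL import Image, ImageFilter

def dilate_face_mask(mask: Image.Image, pad_px: int = 16) -> Image.Image:
    """Dilate one face's mask by roughly pad_px pixels, before any merging.

    MaxFilter needs an odd kernel size; a real implementation might prefer
    cv2.dilate with an elliptical kernel for a rounder boundary.
    """
    size = 2 * pad_px + 1
    return mask.convert("L").filter(ImageFilter.MaxFilter(size))
```

Each dilated mask would then be the one passed along in paste_loc, while the merged overlay is still built from all of them.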
Another advantage is that you can increase the padding size, which may let the inpainting be more coherent; for example, it may be able to infer gender and age from the hair and body if the padding extends far enough.
Hello there!
I wonder if you could help with this.
Since your recent update / new UI I've been having odd issues (as far as I know the timing coincides, anyway).
The eye quality / face quality looks very odd; it's almost as if the 'restore faces' option is no longer working.
I've been testing with inpaint upload on img2img (as this worked before for a long time).
I'm using DPM++ 2M Karras, 40 sampling steps, and the GFPGAN face restoration model.
Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Automatic1111 Version: v1.3.0
Thanks in advance!
I was hoping to run a couple thousand images through this, as the extension implies it handles batch processing, but it seems to only allow one image at a time. Could you add a batch processing feature, please?
SOLVED!
pip install mediapipe
Win 11: how do I install mediapipe?
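The usual catch on Windows is installing into the system Python instead of the webui's own venv; a sketch, assuming the default venv location inside the webui folder:

```shell
# Run from inside the stable-diffusion-webui folder
# (the default venv path is an assumption; adjust if yours differs)
venv\Scripts\python.exe -m pip install mediapipe
```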
Error loading script: face_swap.py
Traceback (most recent call last):
  File "F:\stable-diffusion-portable-main\modules\scripts.py", line 205, in load_scripts
    module = script_loading.load_module(scriptfile.path)
  File "F:\stable-diffusion-portable-main\modules\script_loading.py", line 13, in load_module
    exec(compiled, module.__dict__)
  File "F:\stable-diffusion-portable-main\extensions\batch-face-swap\scripts\face_swap.py", line 10, in <module>
    import mediapipe as mp
ModuleNotFoundError: No module named 'mediapipe'
Installed via the extensions tab; on restart the UI console displays an error and the script does not appear in the dropdown list. Using DirectML auto1111 on AMD.
Error loading script: batch_face_swap.py
Traceback (most recent call last):
  File "T:\stablediffusion\auto111\stable-diffusion-webui-directml\modules\scripts.py", line 229, in load_scripts
    script_module = script_loading.load_module(scriptfile.path)
  File "T:\stablediffusion\auto111\stable-diffusion-webui-directml\modules\script_loading.py", line 11, in load_module
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "T:\stablediffusion\auto111\stable-diffusion-webui-directml\extensions\batch-face-swap\scripts\batch_face_swap.py", line 8, in <module>
    from bfs_utils import *
  File "T:\stablediffusion\auto111\stable-diffusion-webui-directml\extensions\batch-face-swap\scripts\bfs_utils.py", line 6, in <module>
    import mediapipe as mp
ModuleNotFoundError: No module named 'mediapipe'
Error loading script: bfs_utils.py
Traceback (most recent call last):
  File "T:\stablediffusion\auto111\stable-diffusion-webui-directml\modules\scripts.py", line 229, in load_scripts
    script_module = script_loading.load_module(scriptfile.path)
  File "T:\stablediffusion\auto111\stable-diffusion-webui-directml\modules\script_loading.py", line 11, in load_module
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "T:\stablediffusion\auto111\stable-diffusion-webui-directml\extensions\batch-face-swap\scripts\bfs_utils.py", line 6, in <module>
    import mediapipe as mp
ModuleNotFoundError: No module named 'mediapipe'
Error loading script: face_detect.py
Traceback (most recent call last):
  File "T:\stablediffusion\auto111\stable-diffusion-webui-directml\modules\scripts.py", line 229, in load_scripts
    script_module = script_loading.load_module(scriptfile.path)
  File "T:\stablediffusion\auto111\stable-diffusion-webui-directml\modules\script_loading.py", line 11, in load_module
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "T:\stablediffusion\auto111\stable-diffusion-webui-directml\extensions\batch-face-swap\scripts\face_detect.py", line 8, in <module>
    import mediapipe as mp
ModuleNotFoundError: No module named 'mediapipe'
Could there be an option to prefix files with the original file name? Like the "Use original name for output filename during batch process in extras tab" option for the extras tab?
The extension works really well and I'm trying to use it to face swap all the faces in an mp4. But since the image files are processed in a non-sequential order, they end up out of order and I can't put them back together as an mp4 again.
Thanks!
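Until an original-filename option exists, one workaround, if the original frame numbers survive anywhere in the output filenames, is to sort the outputs with a natural sort before reassembling the mp4. A sketch (`natural_key` is a hypothetical helper, not part of the extension):

```python
import re

def natural_key(name: str):
    """Sort key that orders embedded numbers numerically, so frame2 < frame10."""
    return [int(tok) if tok.isdigit() else tok.lower()
            for tok in re.split(r'(\d+)', name)]

# frames = sorted(os.listdir(output_dir), key=natural_key)  # output_dir is an assumption
```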
python: 3.10.6 • torch: 2.0.0+cu118 • xformers: 0.0.19 • gradio: 3.28.1
Microsoft Edge
[--opt-sdp-attention is OK.]
Total progress: 22it [02:29, 3.52s/it]
Will process 1 images, generating 1 new images for each.
Found 1 face(s) in <PIL.Image.Image image mode=RGB size=944x944 at 0x1A77AB4C9A0>
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 11/11 [00:01<00:00, 10.01it/s]
Found 1 faces in 1 images in 0.953125 seconds.
Error completing request
Arguments: ('task(f4zd02y7lcvqlui)', 0, '', '', [], <PIL.Image.Image image mode=RGBA size=944x944 at 0x1A77AB4F070>, None, None, None, None, None, None, 20, 0, 4, 0, 1, False, False, 1, 1, 7, 1.5, 0.75, -1.0, -1.0, 0, 0, 0, False, 0, 512, 512, 1, 0, 0, 32, 0, '', '', '', [], 0, True, 'img2img', False, '', '', False, 'Euler a', False, '2339HalfTheFishWas_v10.safetensors [7bf8da1368]', True, 0.5, True, 4, True, 32, '', False, 1, 'Both ▦', False, '', False, True, True, False, False, False, False, 1, False, '', '', '', 'generateMasksTab', 1, 4, 2.5, 30, 1.03, 1, 1, 5, 0.5, 5, False, True, False, 20, False, 'MultiDiffusion', False, 10, 1, 1, 64, False, True, 1024, 1024, 96, 96, 48, 1, 'None', 2, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, True, True, True, False, 1536, 96, <controlnet.py.UiControlNetUnit object at 0x000001A77AD5A0E0>, '
CFG Scale
should be 2 or lower.Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8
', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', 'Will upscale the image by the selected scale factor; use width and height sliders to set tile size
', 64, 0, 2, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, 1.6, 0.97, 0.4, 0.15, 20, 0, 0, '', False, False, False, None, False, 50, 'dynamic_thresholding;dynamic_prompting', True, True, True, 2, 'Original', 32, 5, 512, 512, 0.1, True, True, 'Inner', 'Original', 'sam_vit_b_01ec64.pth', 'groundingdino_swint_ogc.pth', True, 32, 4, '', '', '', '', '', '', 0.4, 0.4, 0, 0, 0, 0, True, True, 16, 16, 'Text', 'Center', None, 100, 100, '', None, 'Trebuchet MS', 50, 10, 0.4, '') {}TORCH_USE_CUDA_DSA
to enable device-side assertions.

Since, AFAIK, there is no way to do a batch inpainting job with a set of paired image and mask files, and since this extension first makes the mask and then runs img2img, would it be possible to bypass the mask generation and instead provide a masks folder and an images folder, with files named the same, and have it batch process them?
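For what it's worth, pairing the two folders by filename stem is simple; a sketch (the function name and folder layout are assumptions, not the extension's code):

```python
import os

def pair_images_and_masks(img_dir: str, mask_dir: str):
    """Yield (image_path, mask_path) pairs whose filenames (sans extension) match."""
    masks = {os.path.splitext(f)[0]: os.path.join(mask_dir, f)
             for f in os.listdir(mask_dir)}
    for f in sorted(os.listdir(img_dir)):
        stem = os.path.splitext(f)[0]
        if stem in masks:
            yield os.path.join(img_dir, f), masks[stem]
```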
Is it possible to make a version of the script that works during txt2img, so it fixes the face right after the txt2img generation?
I updated batch-face-swap to 8ca92d8 and also restarted my entire automatic1111 stable diffusion process, but I'm still getting the same error as earlier today:
Traceback (most recent call last):
File "C:\+++Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 337, in run_predict
output = await app.get_blocks().process_api(
File "C:\+++Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1015, in process_api
result = await self.call_function(
File "C:\+++Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 833, in call_function
prediction = await anyio.to_thread.run_sync(
File "C:\+++Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "C:\+++Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "C:\+++Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
result = context.run(func, *args)
File "C:\+++Stable Diffusion\stable-diffusion-webui\extensions\batch-face-swap\scripts\face_swap.py", line 207, in generateMasks
allFiles = [os.path.join(path, f) for f in os.listdir(path) if os.path.isfile(os.path.join(path, f))]
FileNotFoundError: [WinError 3] The system cannot find the path specified: ''
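The empty-string path suggests the directory field was left blank. For reference, a guard before the os.listdir call would turn this into a readable error; a sketch (`list_files` is a hypothetical helper, not the extension's actual code):

```python
import os

def list_files(path: str):
    """Return files in path, failing with a clear message when the field is blank."""
    if not path or not os.path.isdir(path):
        raise ValueError(f"Images directory is empty or does not exist: {path!r}")
    return [os.path.join(path, f) for f in os.listdir(path)
            if os.path.isfile(os.path.join(path, f))]
```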
from bfs_utils import *
ModuleNotFoundError: No module named 'bfs_utils'
The results appear the same with either 'Whole Picture' or 'Only Masked' since updating from about a week ago. Adjusting the padding and disabling the override for padding did not change the results.
It wrote out the image at 512x512, which was also the tile size and which I assume is not a coincidence, but I don't know the cause. If I weren't in the middle of face detection stuff, I'd track it down myself.
Note this causes weird stretching of the image. It only happened on some images and not others, and it does happen when running this image by itself.
This wasn't happening with the old version from before the compositing change, and I verified this is happening on the current code without my modifications.
(I know this is a terrible image, I had bad hires fix settings when I made the original. It's part of my face detection test set because of the blurry faces in the background. And the face swapped version was just a quick test that ran for 5 steps.)
One idea in #26 is to rotate the faces so SD processes a vertical face (since it makes worse faces at other angles), then rotate it back when compositing with the main image.
This seems like a good idea if it can be done easily. Since we use FaceMesh to detect faces, it might be possible to use the info we already get from it to figure out the 3D orientation of the face, figure out the best 2D orientation, and then do the above, but only if we can detect the rotated face in the first place.
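As a starting point, the in-plane (roll) angle can be estimated from just the two eye positions; a sketch, assuming the eye centers have already been extracted from the FaceMesh landmarks (the helper name is hypothetical):

```python
import math

def roll_angle_degrees(left_eye, right_eye):
    """In-plane rotation of a face from its eye centers, given as (x, y) pixel tuples."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))

# The face crop could then be counter-rotated with PIL before inpainting, e.g.:
# upright = face_crop.rotate(roll_angle_degrees(le, re), expand=True)
```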
Thanks kex0, this is a huge time saver! It works great until I try to use a style I've saved in 'Style 1'. If I do that each image takes progressively more and more time to process. Image 1 will take 2 seconds but the 10th image will take 60 seconds, the 20th will take 5 minutes and the 70th will take 5 hours. Do you have any idea what would cause this?
Trying to use predefined masks from "Masks directory"
Will process 1 images, generating 1 new images for each.
Error running process: /Users/user/Projects/ai/stable-diffusion-webui/extensions/batch-face-swap/scripts/batch_face_swap.py
Traceback (most recent call last):
File "/Users/user/Projects/ai/stable-diffusion-webui/modules/scripts.py", line 451, in process
script.process(p, *script_args)
File "/Users/user/Projects/ai/stable-diffusion-webui/extensions/batch-face-swap/scripts/batch_face_swap.py", line 965, in process
finishedImages = generateImages(p, facecfg, input_image, input_path, searchSubdir, viewResults, int(divider), howSplit, saveMask, output_path, saveToOriginalFolder, onlyMask, saveNoFace, overridePrompt, bfs_prompt, bfs_nprompt, overrideSampler, sd_sampler, overrideModel, sd_model, overrideDenoising, denoising_strength, overrideMaskBlur, mask_blur, overridePadding, inpaint_full_res_padding, overrideSeed, overrideSteps, steps, overrideCfgScale, cfg_scale, overrideSize, bfs_width, bfs_height, invertMask, singleMaskPerImage, countFaces, maskWidth, maskHeight, keepOriginalName, pathExisting, pathMasksExisting, output_pathExisting, selectedTab, mainTab, loadGenParams, rotation_threshold)
File "/Users/user/Projects/ai/stable-diffusion-webui/extensions/batch-face-swap/scripts/batch_face_swap.py", line 546, in generateImages
finishedImages = faceSwap(p, masks, image, finishedImages, invertMask, forced_filename, output_pathExisting, info, selectedTab, mainTab, faces_info, rotation_threshold, overridePrompt, bfs_prompt, bfs_nprompt, overrideSampler, sd_sampler, overrideModel, sd_model, overrideDenoising, denoising_strength, overrideMaskBlur, mask_blur, overridePadding, inpaint_full_res_padding, overrideSeed, overrideSteps, steps, overrideCfgScale, cfg_scale, overrideSize, bfs_width, bfs_height,)
UnboundLocalError: local variable 'faces_info' referenced before assignment
Error completing request
Arguments: ('task(iura7z4o8dn4247)', 4, 'A a portrait person', 'cartoon, 3d, ((disfigured)), ((bad art)), ((deformed)), ((poorly drawn)), ((extra limbs)), ((b&w)), weird colors, blurry', [], None, None, None, None, None, None, None, 100, 0, 4, 0, 1, False, False, 1, 1, 2, 1.5, 0.75, -1.0, -1.0, 0, 0, 0, False, 0, 512, 512, 1, 0, 0, 32, 0, '', '', '', [], 0, True, 'img2img', False, '', '', False, 'Euler a', False, 'realistic-vision.safetensors [f1d0443cbb]', True, 0.5, True, 4, True, 32, False, False, 30, False, 6, False, 512, 512, '/Users/user/Projects/video/faces/test/images', False, 1, 'Both ▦', False, '', False, True, True, False, False, False, False, 110, 120, False, '/Users/user/Projects/video/faces/test/images', '/Users/user/Projects/video/faces/test/masks', '', 'existingMasksTab', 4, 4, 2.5, 30, 1.03, 1, 1, 5, 0.5, 5, False, True, True, 20, '<ul>\n<li><code>CFG Scale</code> should be 2 or lower.</li>\n</ul>\n', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', '<p style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0) {}
Traceback (most recent call last):
File "/Users/user/Projects/ai/stable-diffusion-webui/modules/call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
File "/Users/user/Projects/ai/stable-diffusion-webui/modules/call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "/Users/user/Projects/ai/stable-diffusion-webui/modules/img2img.py", line 178, in img2img
processed = process_images(p)
File "/Users/user/Projects/ai/stable-diffusion-webui/modules/processing.py", line 610, in process_images
res = process_images_inner(p)
File "/Users/user/Projects/ai/stable-diffusion-webui/modules/processing.py", line 670, in process_images_inner
p.init(p.all_prompts, p.all_seeds, p.all_subseeds)
File "/Users/user/Projects/ai/stable-diffusion-webui/modules/processing.py", line 1184, in init
image = images.flatten(img, opts.img2img_background_color)
File "/Users/user/Projects/ai/stable-diffusion-webui/modules/images.py", line 710, in flatten
if img.mode == "RGBA":
AttributeError: 'NoneType' object has no attribute 'mode'
@kex0 any suggestions?
I tried to install it, but without success: it does not show up in img2img with the latest updates from automatic1111.
Any ideas why?
Error completing request
Arguments: ('task(paqpxqawcfrahpt)', 5, '', '', 'None', 'None', <PIL.Image.Image image mode=RGBA size=1334x1334 at 0x7F2E7936CF40>, None, None, None, None, <PIL.Image.Image image mode=RGB size=1334x1334 at 0x7F2E7936D2D0>, None, 20, 0, 4, 0, 1, False, False, 1, 1, 7, 0.75, -1.0, -1.0, 0, 0, 0, False, 512, 512, 0, 0, 32, 0, '/home/mb/stable-diffusion-webui/imgs/', '', 9, '<ul>\n<li><code>CFG Scale</code> should be 2 or lower.</li>\n</ul>\n', True, True, '', '', True, 50, True, 1, 0, False, 4, 1, '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, False, False, '', '<p style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 1, '', 0, '', True, False, False, True, True, '/home/mb/stable-diffusion-webui/imgs/', False, 1, 'Both ▦', 'Generate masks', False, '', False) {}
Traceback (most recent call last):
File "/home/mb/stable-diffusion-webui/modules/call_queue.py", line 56, in f
res = list(func(*args, **kwargs))
File "/home/mb/stable-diffusion-webui/modules/call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "/home/mb/stable-diffusion-webui/modules/img2img.py", line 142, in img2img
process_batch(p, img2img_batch_input_dir, img2img_batch_output_dir, args)
File "/home/mb/stable-diffusion-webui/modules/img2img.py", line 46, in process_batch
proc = modules.scripts.scripts_img2img.run(p, *args)
File "/home/mb/stable-diffusion-webui/modules/scripts.py", line 337, in run
processed = script.run(p, *script_args)
File "/home/mb/stable-diffusion-webui/extensions/batch-face-swap/scripts/face_swap.py", line 408, in run
image = Image.open(imgPath)
File "/home/mb/stable-diffusion-webui/venv/lib/python3.10/site-packages/PIL/Image.py", line 3227, in open
fp = builtins.open(filename, "rb")
FileNotFoundError: [Errno 2] No such file or directory: 'img.jpg'
Hello, is it possible to swap in the face from a specified picture? That is, input two pictures, an original and one with the chosen face, like a normal face swap. I want the result to be more realistic; the other projects I've been looking at recently are either too fake or take a long time.
Hello there!
I wonder if you could help with this.
Since your recent update / new UI I've been having some odd issues (as far as I know the timing coincides, anyway).
In this case, the first image generated is a grid within the face (the same number of images within the face as the batch count).
In the example above the batch count was 4, so the first image has a grid of 4 faces placed within it.
I do have 'Show previews of all images generated in a batch as a grid' enabled in settings, but this was never an issue before.
I even tried turning that off, but I still get the same issue.
I've been testing with inpaint upload on img2img (as this worked before for a long time)
Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Automatic1111 Version: v1.3.0
Thanks in advance!
I am getting this error:
ZeroDivisionError: division by zero
In the GUI in automatic1111, below the image it shows this message:
ZeroDivisionError: division by zero
Time taken: 1.31s
Torch active/reserved: 4114/4236 MiB, Sys VRAM: 6642/24564 MiB (27.04%)
It does install, but after 'Apply and restart UI' there is no Batch Face Swap script?
Not an issue, but I don't know how else to say THANKS for taking the time to code (and share) this!
I'm glad to see 'Keep original file name' added as an option. I like to make batches of 2-4 of each image and choose the best one, but with 'Keep original file name' enabled it produces one image with 2-4 pictures on it (which is cool and probably useful for something). Would it be possible to output 'original file name-001', 'original file name-002', 'original file name-003'?
EDIT: Actually, since the 'Keep original file name' change, BFS now produces a single image for any batch size (all results in one image), regardless of whether 'Keep original file name' is enabled. Hope that makes sense...
It works very well, and I used to do this a lot manually to improve weaker likenesses of me, but I wish we could pause it and do other things, so:
1. It would be nice to be able to pause, go back to the previous file, and run the face swap again on the previous image if it didn't come out well.
2. The ability to extend the mask with blur, so the jawline also gets replaced, and a bit of the hairline too.
3. Aligning faces, especially rotated ones, could help a lot. It doesn't work well when the face is rotated too much and works best when the face is upright without any rotation, so aligning it should help a lot.
4. When it detects multiple faces, swap each face in a separate pass instead of two faces at once, because that degrades likeness.
That's pretty much it. I would also like to pause just to change settings (adjusted denoise, CFG, etc.) and then continue with the next file in the folder. That would be useful if I see that the current settings don't work well and I want to keep running without starting over from the first file.
Many faces are munged on the first run of an image. If you can detect the face in a new generation, automatically re-run the face swap with the same parameters to clean up the face, except do it in txt2img instead of batch in img2img.
Does it take the mask size into consideration when we set it to +10 or -10 in the face swap panel? I don't think so.
In the code it's p.mask_blur = int(math.ceil(0.01*height))
And when I set it to 10, I can see a straight border where the image ends abruptly.
I think the blur should make the seam invisible, but I don't think there's code for that currently?
So the issue happens when you use default settings but set mask size to 10, and you get this:
You can see the border of the mask ending abruptly; maybe extra code to fade that out would solve the issue?
The reason I make the mask bigger is the haircut and hairline; at the default of 0 the mask border is still visible. I think updated mask blur code would fix all of this.
There is masking.py in the modules folder; taking a look at how the native code blurs the borders would help as well.
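A feathered border can be had by blurring the mask itself before compositing; a sketch with PIL (`feather_mask` is a hypothetical helper, and tying the radius to the mask-size slider is my assumption):

```python
from PIL import Image, ImageFilter

def feather_mask(mask: Image.Image, radius: int) -> Image.Image:
    """Soften the mask border so the inpainted region fades out instead of
    ending in a hard visible edge."""
    return mask.convert("L").filter(ImageFilter.GaussianBlur(radius))
```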
SD generates the best face and likeness when it's close to native resolution or higher, and degrades faces when the resolution is below 480 (easy to test), so:
Do you upscale the face to 512x512, or to whatever resolution is set in img2img? Even if it's a low-res face of about 200x200 pixels, upscaling it to 512 or 768 would improve the quality. It would be a good idea to tinker with.
Another thing that helps with low-res images is sharpening. What I do when I have a low-res image is upscale it until the face is about 400-500 pixels, then sharpen so the detail is a bit crisper, and then inpaint the face; this really makes a difference. A sharpening tickbox, maybe with a scale for how much (like 0.2 or 0.3), would be nice, to automate all this so I don't have to upscale on my own before inpainting the face.
Even better if you could detect the face, detect whether its resolution is lower than 512, and if it is, resize it so it's 512 pixels wide.
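The upscale-if-small step described above could be sketched like this (the 512 threshold and LANCZOS resampling are assumptions; the helper is hypothetical):

```python
from PIL import Image

def upscale_small_face(face: Image.Image, min_side: int = 512) -> Image.Image:
    """Upscale a face crop so its shorter side is at least min_side before inpainting."""
    short = min(face.size)
    if short >= min_side:
        return face  # already large enough, leave untouched
    scale = min_side / short
    new_size = (round(face.width * scale), round(face.height * scale))
    return face.resize(new_size, Image.LANCZOS)
```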
Instead of just using a prompt, give us the option to use a face/picture as the base to be swapped onto the output pictures.
I saw a website some time ago that does this, and it was excellent.
That way we can do face swaps not only for famous people.
Will process 0 images, generating 1 new images for each.
Found 0 faces in 0 images in 4.34189999999994e-05 seconds.
Batch face changing does not work with ControlNet. For example, when there are open- and closed-eye avatars in the batch folder, and ControlNet is enabled with Mediapipe selected, all generated avatars have open eyes.
Traceback (most recent call last):
File "D:\Stable Diffusion\stable-diffusion-webui\modules\call_queue.py", line 56, in f
res = list(func(*args, **kwargs))
File "D:\Stable Diffusion\stable-diffusion-webui\modules\call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "D:\Stable Diffusion\stable-diffusion-webui\modules\img2img.py", line 169, in img2img
processed = modules.scripts.scripts_img2img.run(p, *args)
File "D:\Stable Diffusion\stable-diffusion-webui\modules\scripts.py", line 399, in run
processed = script.run(p, *script_args)
File "D:\Stable Diffusion\stable-diffusion-webui\extensions\batch-face-swap\scripts\batch_face_swap.py", line 771, in run
finishedImages = generateImages(p, facecfg, path, searchSubdir, viewResults, int(divider), howSplit, saveMask, pathToSave, saveToOriginalFolder, onlyMask, saveNoFace, overrideDenoising, overrideMaskBlur, invertMask, singleMaskPerImage, countFaces, maskSize, keepOriginalName, pathExisting, pathMasksExisting, pathToSaveExisting, selectedTab, loadGenParams, rotation_threshold)
File "D:\Stable Diffusion\stable-diffusion-webui\extensions\batch-face-swap\scripts\batch_face_swap.py", line 386, in generateImages
masks, totalNumberOfFaces, faces_info, skip = findFaces(facecfg, image, width, height, divider, onlyHorizontal, onlyVertical, file, totalNumberOfFaces, singleMaskPerImage, countFaces, maskSize, skip)
File "D:\Stable Diffusion\stable-diffusion-webui\extensions\batch-face-swap\scripts\batch_face_swap.py", line 72, in findFaces
landmarkHull, face_info = getFacialLandmarkConvexHull(image, rect, onlyHorizontal, divider, small_width, small_height, small_image_index, facecfg)
File "D:\Stable Diffusion\stable-diffusion-webui\extensions\batch-face-swap\scripts\face_detect.py", line 137, in getFacialLandmarkConvexHull
height, width, channels = image.shape
ValueError: not enough values to unpack (expected 3, got 2)
Please add support of MacOS M1 detection and "mediapipe-silicon". Thank you!
Sometimes the range of face detection is not accurate enough; can I provide a face mask by drawing on the UI?
My suggestion is the reverse of what it does ;) apply a different face to a whole bunch of identical ones, possibly via a prompt matrix where a bunch of facial features could be dropped in as different prompts?
Thanks in all cases, this is a very cool and very useful extension.
I wonder if I can do this by expanding the mask?
Tried installing on the hot new Automatic1111 spinoff, https://github.com/vladmandic/automatic, but got an error:
ERROR Calling script: H:\automatic\extensions\batch-face-swap\scripts\batch_face_swap.py/ui: AttributeError
╭───────────────────────────────────────────────────────────────────────────────── Traceback (most recent call last) ──────────────────────────────────────────────────────────────────────────────────╮
│ H:\automatic\modules\scripts.py:245 in wrap_call │
│ │
│ 244 │ try: │
│ ❱ 245 │ │ res = func(*args, **kwargs) │
│ 246 │ │ return res │
│ │
│ H:\automatic\extensions\batch-face-swap\scripts\batch_face_swap.py:749 in ui │
│ │
│ 748 │ │ │ │ │ │ │ │ │ │ │ available_models = modules.sd_models.checkpo │
│ ❱ 749 │ │ │ │ │ │ │ │ │ │ │ sd_model = gr.Dropdown(label="SD Model", cho │
│ 750 │ │ │ │ │ │ │ │ │ │ │ modules.ui.create_refresh_button(sd_model, m │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
AttributeError: 'NoneType' object has no attribute 'sd_checkpoint_info'
It would be nice to generate a video with the generated frames in the output folder after batch.
I do it with the following code, the frames are arranged in chronological order:
each image is 0001.jpg 0002.jpg etc
import cv2
import glob

img_array = []
size = None
# Read the frames in sorted (chronological) order
for filename in sorted(glob.glob('/out_frames/2023-05-18/*')):
    img = cv2.imread(filename)
    height, width, layers = img.shape
    size = (width, height)
    img_array.append(img)

# Write them out as a 15 fps video
out = cv2.VideoWriter('project2.avi', cv2.VideoWriter_fourcc(*'DIVX'), 15, size)
for img in img_array:
    out.write(img)
out.release()
However, I must close the webui, run the code, and reopen the webui.
It would be better to generate the video automatically at the end of all the frames.
Sometimes you might want to do something like swap a particular face into a bunch of images, so the prompt gets specific about who it is. In this case, you probably don't want to face swap all the faces in the image, only the most prominent one (unless you're trying to make quintuplets or whatever).
But maybe the background faces are all clones, so you still want to swap all the background faces as well, but with a different prompt. While you could investigate an implementation with two prompts that it chooses between for different subsets of faces, for now a simpler approach is to just limit which faces get swapped; you can then run through the batch twice, the first time replacing the most prominent face and the second time the background faces.
To do this, I propose adding a dropdown to the UI that has these choices:
"Swap all faces",
"Swap only largest face",
"Swap all faces except largest",
"Swap only faces roughly in the foreground",
"Swap only faces not in the foreground",
"Largest face" is really a stand-in for "closest face"; since you don't know actual distances, it's the best you can do. There might also be cases where the subject of the image is in the center but a "background person" at the edge of the image has a slightly larger face even though they're clearly in the background, so instead of "largest face" you may need "largest face toward the middle of the image" or something like that. Maybe call that "most prominent face". That can be added later, though.
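A minimal sketch of how the selection logic behind such a dropdown might look. The `Face` type, `select_faces` helper, and `prominence` score are all hypothetical names for illustration; the prominence score simply down-weights a face's area by how far its center is from the image center, as suggested above:

```python
from dataclasses import dataclass

@dataclass
class Face:
    x: int  # left edge of the bounding box
    y: int  # top edge
    w: int  # width
    h: int  # height

def select_faces(faces, mode):
    """Return the subset of detected faces to swap for a given dropdown choice.

    Only the first three modes are sketched here; the "foreground" modes
    would need a prominence threshold instead of a simple largest-area rule.
    """
    if not faces or mode == "Swap all faces":
        return list(faces)
    largest = max(faces, key=lambda f: f.w * f.h)
    if mode == "Swap only largest face":
        return [largest]
    if mode == "Swap all faces except largest":
        return [f for f in faces if f is not largest]
    raise ValueError(f"unhandled mode: {mode}")

def prominence(face, img_w, img_h):
    """Area weighted down by the face center's normalized distance
    from the image center (0 at center, ~0.71 at a corner)."""
    cx, cy = face.x + face.w / 2, face.y + face.h / 2
    dx, dy = (cx - img_w / 2) / img_w, (cy - img_h / 2) / img_h
    dist = (dx * dx + dy * dy) ** 0.5
    return face.w * face.h * (1.0 - dist)
```

With this, "most prominent face" would just be `max(faces, key=lambda f: prominence(f, img_w, img_h))`, so a small off-center face can lose to a slightly smaller centered one.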
Can you make this work from the command line, without the GUI?
Is it possible to add a new parameter for mask padding?
The idea is to enlarge the mask, so a new head with hair can be generated. Big thanks!
Using batch swap with dynamic prompts: all images contain the same prompt in their metadata, even when the generation was made from the correct one (the CSV output still works properly).
What should I do if I want to use it to replace a different, specific face?
Hello, this is a feature request. I like to take photos of my friends and inpaint their faces into funny or creative scenes like WH40k or the North Korean Navy. Can we invert the face search results so that it inpaints everything except the faces?
This should include cases where there are multiple faces: I would expect it to fuse the individual face detection masks into a single inverted mask covering the entire source image. Thank you for your time considering this.
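The fuse-then-invert step described above can be sketched with Pillow; the function name is hypothetical. Taking the per-pixel maximum (`ImageChops.lighter`) unions the individual face masks, and a single invert then selects everything that is not a face:

```python
from PIL import Image, ImageChops

def inverted_combined_mask(size, face_masks):
    """Fuse individual face masks into one, then invert it, so everything
    EXCEPT the faces is selected for inpainting.

    `size` is (width, height); each mask is a mode-'L' image where
    white (255) marks a detected face.
    """
    combined = Image.new("L", size, 0)
    for m in face_masks:
        combined = ImageChops.lighter(combined, m)  # union of face regions
    return ImageChops.invert(combined)              # white = not-a-face
```

With no detected faces the result is an all-white mask, i.e. the whole image gets inpainted, which seems like the sensible fallback for this use case.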
When the mask selects anything but the body, the generation is not applied to the result image. Generating the mask via this script and then using it manually with inpaint upload works as intended.
This is the last working commit:
d729c01
Hi there @kex0,
I know I already made something like a feature request on the roop PR, but since you have this beautiful extension which I use all the time: is there any way of implementing the inswapper model for a guided face-swap mode, with some upscaling functionality via GFPGAN/CodeFormer? (Maybe a restore-faces option, or another kind of automatic piping, like using the Extras tab with sliders so GFPGAN and CodeFormer can be used together.) Or maybe the face could be redrawn at higher resolution with SD models, something like inpainting models for enhancement, eliminating the pixelation caused by the low 128x128 resolution. Inpainting models can use inpainting conditioning mask strength, and they handle CFG-scale thresholding well even at very high CFG values, so few steps can be used with DPM++ Karras samplers.
I am hoping this is possible. It could serve as batch face swapping on the Automatic platform, which is very robust, but with a different type of approach.
By the way, I am pretty sure you are aware that Midjourney uses something like this, but I don't know how they implemented it, since it is not open source.
The developer of the Unprompted extension may contribute if he has the time and interest (and of course you as well). I can't say he will, but he might. https://github.com/ThereforeGames/unprompted @ThereforeGames
Update: https://github.com/Ynn/sd-webui-faceswap just came out. I think this closes the request.
"Generate masks only" does not seem to detect all faces (as if "first face only" were selected, but it is not).
Also, skipping empty masks does not seem to work (toggling it does not change the output).