
batch-face-swap's Introduction

!!! This project is no longer being maintained !!!

Automatically detects faces and replaces them.

preview

Installation

Automatic:

  1. In the WebUI, go to Extensions.
  2. Open the Available tab and click the Load from: button.
  3. Find Batch Face Swap and click Install.
  4. Apply and restart the UI.

Manual:

  1. Run git clone https://github.com/kex0/batch-face-swap.git from your SD web UI extensions folder.
  2. Open requirements_versions.txt in the main SD web UI folder and add mediapipe.
  3. Start or reload the SD web UI.

txt2img Guide

  1. Expand the Batch Face Swap tab in the lower left corner.
  2. Click the checkbox to enable it.
  3. Click Generate.

img2img Guide

  1. Expand the Batch Face Swap tab in the lower left corner.
  2. Click the checkbox to enable it.
  3. You can either process one image at a time by uploading it at the top of the page, or give it the path to a folder containing your images.
  4. Click Generate.

Override options only affect face generation, so in txt2img, for example, you can generate the initial image with one prompt and face swap with another, or generate the initial image with one model and face swap with another.

Example

Left: 'young woman in red dress' using chilloutMix. Right: 'Emma Watson in red dress' using realisticVision.

chrome_XSjamNtABV

Example

example

Prompt:

detailed closeup photo of Emma Watson, 35mm, dslr
Negative prompt: (painting:1.3), (concept art:1.2), artstation, sketch, illustration, drawing, blender, octane, 3d, render, blur, smooth, low-res, grain, cartoon, watermark, text, out of focus


batch-face-swap's Issues

No such file or directory

Error completing request
Arguments: ('task(paqpxqawcfrahpt)', 5, '', '', 'None', 'None', <PIL.Image.Image image mode=RGBA size=1334x1334 at 0x7F2E7936CF40>, None, None, None, None, <PIL.Image.Image image mode=RGB size=1334x1334 at 0x7F2E7936D2D0>, None, 20, 0, 4, 0, 1, False, False, 1, 1, 7, 0.75, -1.0, -1.0, 0, 0, 0, False, 512, 512, 0, 0, 32, 0, '/home/mb/stable-diffusion-webui/imgs/', '', 9, '<ul>\n<li><code>CFG Scale</code> should be 2 or lower.</li>\n</ul>\n', True, True, '', '', True, 50, True, 1, 0, False, 4, 1, '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, False, False, '', '<p style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 1, '', 0, '', True, False, False, True, True, '/home/mb/stable-diffusion-webui/imgs/', False, 1, 'Both ▦', 'Generate masks', False, '', False) {}
Traceback (most recent call last):
  File "/home/mb/stable-diffusion-webui/modules/call_queue.py", line 56, in f
    res = list(func(*args, **kwargs))
  File "/home/mb/stable-diffusion-webui/modules/call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "/home/mb/stable-diffusion-webui/modules/img2img.py", line 142, in img2img
    process_batch(p, img2img_batch_input_dir, img2img_batch_output_dir, args)
  File "/home/mb/stable-diffusion-webui/modules/img2img.py", line 46, in process_batch
    proc = modules.scripts.scripts_img2img.run(p, *args)
  File "/home/mb/stable-diffusion-webui/modules/scripts.py", line 337, in run
    processed = script.run(p, *script_args)
  File "/home/mb/stable-diffusion-webui/extensions/batch-face-swap/scripts/face_swap.py", line 408, in run
    image = Image.open(imgPath)
  File "/home/mb/stable-diffusion-webui/venv/lib/python3.10/site-packages/PIL/Image.py", line 3227, in open
    fp = builtins.open(filename, "rb")
FileNotFoundError: [Errno 2] No such file or directory: 'img.jpg'

Error: ZeroDivisionError: division by zero

I am getting this error:

ZeroDivisionError: division by zero

In the AUTOMATIC1111 GUI, below the image, it shows this message:

ZeroDivisionError: division by zero
Time taken: 1.31s | Torch active/reserved: 4114/4236 MiB, Sys VRAM: 6642/24564 MiB (27.04%)

alpha compositing with overlay strategy produces artifacts if faces are close together

Here's an example of broken output:

image

Here's an illustration of what's going wrong with the algorithm.

This happens because of the alpha compositing strategy: holes are cut in the original image where all the faces are, and the generated tiles are placed underneath so you can "see through" the holes to them. If the rectangular tile for one face is too large, it can extend far enough to be visible through the hole for another face.

In this particular case it happens because the image is small, so the padding is large relative to the faces. It can also happen when square tiles are used with faces that are taller than they are wide, which adds extra padding horizontally.

Although it is possible to tweak the settings on individual images to avoid this, it's a batch processor, so it would be better if it got this right automatically, and that is straightforward: simply alpha composite each generated face with its own mask, rather than cutting all the masks out of a single image.

I implemented a basic version of this myself, but it didn't interact properly with batching so it's probably better if you write it yourself.

Here's the basic compositing step that I used to replace 'apply_overlay', where paste_loc now contains the per-face mask as well as the old values:

from PIL import Image, ImageChops
from modules import images  # AUTOMATIC1111 web UI helper module

def apply_masked_face(face, paste_loc, final_image):
    x, y, w, h, mask = paste_loc
    base_image = Image.new('RGBA', (final_image.width, final_image.height))
    face = images.resize_image(1, face, w, h)
    base_image.paste(face, (x, y))
    face = base_image
    new_mask = ImageChops.multiply(face.getchannel("A"), mask)
    face.putalpha(new_mask)
    final_image = Image.alpha_composite(final_image, face)
    return final_image

The multiply probably isn't necessary; you could just do face.putalpha(mask) instead. I also didn't know what case paste_loc=None handles, so I didn't handle it.

To do this properly, you'll have to dilate each face mask independently, rather than dilating after merging. You'll still want to build the combined-mask "overlay" image for SD inpainting to work on so that if another face is visible in the padding region, the inpainting doesn't try to converge with it.

Another advantage of this is that you can increase the padding size, which means the inpainting can be more coherent; for example, it may be able to infer gender and age from the hair and body if the padding extends far enough.
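The per-face approach described above can be sketched with Pillow. This is a rough illustration under stated assumptions, not the extension's actual code; `dilate_mask` and `composite_face` are hypothetical names:

```python
from PIL import Image, ImageFilter

def dilate_mask(mask, radius=4):
    # Grow an "L"-mode face mask outward before compositing;
    # MaxFilter needs an odd kernel size.
    return mask.filter(ImageFilter.MaxFilter(2 * radius + 1))

def composite_face(final_image, face, box, mask):
    # Paste one generated face through its own mask instead of
    # cutting every mask out of a single overlay image.
    x, y = box
    layer = Image.new("RGBA", final_image.size)
    layer.paste(face.convert("RGBA"), (x, y))
    full_mask = Image.new("L", final_image.size, 0)
    full_mask.paste(mask, (x, y))
    layer.putalpha(full_mask)
    return Image.alpha_composite(final_image.convert("RGBA"), layer)
```

Because each face carries its own alpha channel, an oversized tile can no longer show through another face's hole.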

Some ideas to improve quality

SD generates the best faces and likeness when working close to its native resolution or higher, and degrades faces when the resolution is below 480 (easy to test), so...

  • Do you upscale the face to 512x512, or whatever resolution is set in img2img? Even if it's a low-res face of around 200x200 pixels, upscaling it to 512 or 768 would improve the quality; it would be a good idea to tinker with this.

  • Another thing that helps with low-res images is sharpening. When I have a low-res image, I upscale it until the face is about 400-500 pixels, then sharpen it for slightly crisper detail, and then inpaint the face; this really makes a difference. A sharpening checkbox, maybe with a strength scale like 0.2 or 0.3, would be nice to automate all of this so I don't have to upscale on my own before inpainting the face.

  • Even better if you could detect the face, check whether its resolution is lower than 512, and if it is, resize it so it's 512 pixels wide.
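The upscale-then-sharpen idea above can be sketched with Pillow. `prepare_face` and the 512 threshold are assumptions for illustration, not existing extension behaviour:

```python
from PIL import Image, ImageEnhance

MIN_FACE_SIZE = 512  # assumed target: SD 1.x native resolution

def prepare_face(face, sharpen_amount=0.3):
    # Upscale small face crops so the short side reaches MIN_FACE_SIZE,
    # then apply a mild sharpen for crisper detail before inpainting.
    w, h = face.size
    short = min(w, h)
    if short < MIN_FACE_SIZE:
        scale = MIN_FACE_SIZE / short
        face = face.resize((round(w * scale), round(h * scale)), Image.LANCZOS)
    return ImageEnhance.Sharpness(face).enhance(1.0 + sharpen_amount)
```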

Inverted mask won't affect the result image

A mask selecting anything but the body does not apply the generation to the result image. Generating the mask via this script and using it normally with inpaint upload works as intended.

This is the last working commit:
d729c01

I would give it a go BUT...

I tried to install it, but without success; it does not show up in img2img with the latest updates from automatic1111.

Any ideas why ?

Hello, is it possible to replace the face using a specified picture? That is, input two pictures, one original picture and one with the selected face, just like a normal face swap. I want the result to be more realistic; the other projects I have been looking at recently are either too fake or take a long time.

Allow us to keep the faces and batch swap the rest of the image

Hello, this is a feature request. I like to take photos of my friends and inpaint their faces into funny or creative scenes like WH40k or the North Korean Navy. Can we invert the face search results so that it inpaints everything except the faces?

Including cases where there are multiple faces: I would expect it to fuse the individual face detection masks into a single inverted mask that covers the entire source image. Thank you for your time considering this.
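The requested behaviour could look something like this Pillow sketch (the function name is hypothetical): fuse all per-face masks, then invert the result so everything except the faces gets inpainted.

```python
from PIL import Image, ImageChops, ImageOps

def inverted_combined_mask(face_masks, size):
    # Union all per-face "L"-mode masks, then invert: the white
    # areas (everything but the faces) are what img2img repaints.
    combined = Image.new("L", size, 0)
    for m in face_masks:
        combined = ImageChops.lighter(combined, m)
    return ImageOps.invert(combined)
```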

Quality reduced with recent update (restore faces not working?)

hello there!

I wonder if you could help with this.
Since your recent update / new UI I've been having odd issues. (as far as I know the timing coincides anyway)

The eye quality / face quality looks very odd - it's almost as if the 'restore faces' option is no longer working

I've been testing with inpaint upload on img2img (as this worked before for a long time)

Attached example
eyes

is using DPM++ 2M Karras / 40 sampling steps and face restoration model is GFPGAN

Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Automatic1111 Version: v1.3.0

  • Checkpoint used (I've tried many) - but all based on SD 1.5
  • Image Size = 512 x 512

Thanks in advance!

Keep original file name

I'm glad to see 'Keep original file name' added as an option. I like to make batches of 2-4 of each image and choose the best one, and with 'Keep original file name' enabled it produces one image with 2-4 pictures on it. (Which is cool and probably useful for something) But would it be possible to output 'original file name-001' 'original file name-002' 'original file name-003'?

EDIT: Actually, since the 'Keep original file name' change, BFS now produces a single image with any batch size (all results in one image), regardless of whether 'Keep original file name' is enabled. Hope that makes sense...
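The suffixing asked for here could be as simple as the following sketch; `numbered_name` is a hypothetical helper, not current BFS behaviour:

```python
from pathlib import Path

def numbered_name(original, index, ext=".png"):
    # "photo.jpg" with batch index 2 -> "photo-002.png", so each
    # image in a batch of 2-4 keeps a traceable, unique name.
    return f"{Path(original).stem}-{index:03d}{ext}"
```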

ModuleNotFoundError: No module named 'mediapipe'

Installed via the Extensions tab; on restarting the UI, the console displays an error and the script does not appear in the dropdown list. Using DirectML auto1111 on AMD.

Error loading script: batch_face_swap.py
Traceback (most recent call last):
File "T:\stablediffusion\auto111\stable-diffusion-webui-directml\modules\scripts.py", line 229, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "T:\stablediffusion\auto111\stable-diffusion-webui-directml\modules\script_loading.py", line 11, in load_module
module_spec.loader.exec_module(module)
File "", line 883, in exec_module
File "", line 241, in _call_with_frames_removed
File "T:\stablediffusion\auto111\stable-diffusion-webui-directml\extensions\batch-face-swap\scripts\batch_face_swap.py", line 8, in
from bfs_utils import *
File "T:\stablediffusion\auto111\stable-diffusion-webui-directml\extensions\batch-face-swap\scripts\bfs_utils.py", line 6, in
import mediapipe as mp
ModuleNotFoundError: No module named 'mediapipe'

Error loading script: bfs_utils.py
Traceback (most recent call last):
File "T:\stablediffusion\auto111\stable-diffusion-webui-directml\modules\scripts.py", line 229, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "T:\stablediffusion\auto111\stable-diffusion-webui-directml\modules\script_loading.py", line 11, in load_module
module_spec.loader.exec_module(module)
File "", line 883, in exec_module
File "", line 241, in _call_with_frames_removed
File "T:\stablediffusion\auto111\stable-diffusion-webui-directml\extensions\batch-face-swap\scripts\bfs_utils.py", line 6, in
import mediapipe as mp
ModuleNotFoundError: No module named 'mediapipe'

Error loading script: face_detect.py
Traceback (most recent call last):
File "T:\stablediffusion\auto111\stable-diffusion-webui-directml\modules\scripts.py", line 229, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "T:\stablediffusion\auto111\stable-diffusion-webui-directml\modules\script_loading.py", line 11, in load_module
module_spec.loader.exec_module(module)
File "", line 883, in exec_module
File "", line 241, in _call_with_frames_removed
File "T:\stablediffusion\auto111\stable-diffusion-webui-directml\extensions\batch-face-swap\scripts\face_detect.py", line 8, in
import mediapipe as mp
ModuleNotFoundError: No module named 'mediapipe'

rotate faces to vertical when running SD

One idea in #26 is to rotate the faces so SD processes a vertical face (since it makes worse faces at other angles), then rotate it back when compositing with the main image.

This seems like a good idea if it can be done easily. Since we use FaceMesh to detect faces, it might be possible to use the information we already get from it to figure out the 3D orientation of the face, pick the best 2D orientation, and then do the above, but only if we can detect the rotated face in the first place.
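As a sketch of the in-plane part: the roll angle can be estimated from two eye landmarks (FaceMesh provides many more; `roll_angle_degrees` is a hypothetical helper), and the crop rotated by the negative of that angle before SD runs, then rotated back afterwards.

```python
import math

def roll_angle_degrees(left_eye, right_eye):
    # (x, y) pixel coordinates of the two eye landmarks; 0 means
    # the face is already upright, positive means it leans in the
    # clockwise direction in image coordinates (y grows downward).
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))
```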

M1 mediapipe

Please add support of MacOS M1 detection and "mediapipe-silicon". Thank you!

'Only Masked' not working with update?

The results appear the same with either 'Whole picture' or 'Only masked' since updating about a week ago. Adjusting the padding and disabling the padding override did not change the results.

[Feature Request] Roop Functionality

Hi there @kex0,
I know I have made something like a feature request for the roop PR, but since you have this beautiful extension, which I use all the time, is there any way of implementing the inswapper model for a guided face swap mode, with some upscaling functionality via GFPGAN/CodeFormer? (Maybe a restore faces option, or another type of automatic piping, like using the Extras tab with sliders so GFPGAN and CodeFormer can be used together.) Or maybe we could redraw the face at higher resolution with SD models, something like inpainting models for enhancement, eliminating the pixelation caused by the low 128x128 resolution. Inpainting models can use inpainting conditioning mask strength, and they are very good with CFG scale thresholding at really high CFG values, so low step counts can be used with DPM++ Karras samplers.

I am hoping this is possible. It could be used for batch face swapping on the Automatic platform, which is very robust, but with a different type of approach.

Ah, by the way, I am pretty sure you are aware that Midjourney is using something like this, but I don't know how they implemented it since it is not open source.

The developer of the Unprompted extension may contribute if he has time and interest (and of course you as well). I can't say he will, but he may. https://github.com/ThereforeGames/unprompted @ThereforeGames

Update: https://github.com/Ynn/sd-webui-faceswap just came out. I think this closes the request.

face mask

Sometimes the face detection area is not accurate enough; could I provide a face mask by drawing on the UI?

Not working with Controlnet

Batch face swapping does not work with ControlNet. For example, when the batch folder contains avatars with both open and closed eyes, and ControlNet is enabled with Mediapipe selected, all generated avatars come out with open eyes.

Issue with using Style 1

Thanks kex0, this is a huge time saver! It works great until I try to use a style I've saved in 'Style 1'. If I do, each image takes progressively more time to process: image 1 takes 2 seconds, but the 10th takes 60 seconds, the 20th takes 5 minutes, and the 70th takes 5 hours. Do you have any idea what would cause this?

[BUG] "Existing masks" feature not working

Trying to use predefined masks from "Masks directory"

image image
Will process 1 images, generating 1 new images for each.
Error running process: /Users/user/Projects/ai/stable-diffusion-webui/extensions/batch-face-swap/scripts/batch_face_swap.py
Traceback (most recent call last):
  File "/Users/user/Projects/ai/stable-diffusion-webui/modules/scripts.py", line 451, in process
    script.process(p, *script_args)
  File "/Users/user/Projects/ai/stable-diffusion-webui/extensions/batch-face-swap/scripts/batch_face_swap.py", line 965, in process
    finishedImages = generateImages(p, facecfg, input_image, input_path, searchSubdir, viewResults, int(divider), howSplit, saveMask, output_path, saveToOriginalFolder, onlyMask, saveNoFace, overridePrompt, bfs_prompt, bfs_nprompt, overrideSampler, sd_sampler, overrideModel, sd_model, overrideDenoising, denoising_strength, overrideMaskBlur, mask_blur, overridePadding, inpaint_full_res_padding, overrideSeed, overrideSteps, steps, overrideCfgScale, cfg_scale, overrideSize, bfs_width, bfs_height, invertMask, singleMaskPerImage, countFaces, maskWidth, maskHeight, keepOriginalName, pathExisting, pathMasksExisting, output_pathExisting, selectedTab, mainTab, loadGenParams, rotation_threshold)
  File "/Users/user/Projects/ai/stable-diffusion-webui/extensions/batch-face-swap/scripts/batch_face_swap.py", line 546, in generateImages
    finishedImages = faceSwap(p, masks, image, finishedImages, invertMask, forced_filename, output_pathExisting, info, selectedTab, mainTab, faces_info, rotation_threshold, overridePrompt, bfs_prompt, bfs_nprompt, overrideSampler, sd_sampler, overrideModel, sd_model, overrideDenoising, denoising_strength, overrideMaskBlur, mask_blur, overridePadding, inpaint_full_res_padding, overrideSeed, overrideSteps, steps, overrideCfgScale, cfg_scale, overrideSize, bfs_width, bfs_height,)
UnboundLocalError: local variable 'faces_info' referenced before assignment

Error completing request
Arguments: ('task(iura7z4o8dn4247)', 4, 'A a portrait person', 'cartoon, 3d, ((disfigured)), ((bad art)), ((deformed)), ((poorly drawn)), ((extra limbs)), ((b&w)), weird colors, blurry', [], None, None, None, None, None, None, None, 100, 0, 4, 0, 1, False, False, 1, 1, 2, 1.5, 0.75, -1.0, -1.0, 0, 0, 0, False, 0, 512, 512, 1, 0, 0, 32, 0, '', '', '', [], 0, True, 'img2img', False, '', '', False, 'Euler a', False, 'realistic-vision.safetensors [f1d0443cbb]', True, 0.5, True, 4, True, 32, False, False, 30, False, 6, False, 512, 512, '/Users/user/Projects/video/faces/test/images', False, 1, 'Both ▦', False, '', False, True, True, False, False, False, False, 110, 120, False, '/Users/user/Projects/video/faces/test/images', '/Users/user/Projects/video/faces/test/masks', '', 'existingMasksTab', 4, 4, 2.5, 30, 1.03, 1, 1, 5, 0.5, 5, False, True, True, 20, '<ul>\n<li><code>CFG Scale</code> should be 2 or lower.</li>\n</ul>\n', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', '<p style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0) {}
Traceback (most recent call last):
  File "/Users/user/Projects/ai/stable-diffusion-webui/modules/call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "/Users/user/Projects/ai/stable-diffusion-webui/modules/call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "/Users/user/Projects/ai/stable-diffusion-webui/modules/img2img.py", line 178, in img2img
    processed = process_images(p)
  File "/Users/user/Projects/ai/stable-diffusion-webui/modules/processing.py", line 610, in process_images
    res = process_images_inner(p)
  File "/Users/user/Projects/ai/stable-diffusion-webui/modules/processing.py", line 670, in process_images_inner
    p.init(p.all_prompts, p.all_seeds, p.all_subseeds)
  File "/Users/user/Projects/ai/stable-diffusion-webui/modules/processing.py", line 1184, in init
    image = images.flatten(img, opts.img2img_background_color)
  File "/Users/user/Projects/ai/stable-diffusion-webui/modules/images.py", line 710, in flatten
    if img.mode == "RGBA":
AttributeError: 'NoneType' object has no attribute 'mode'

@kex0 any suggestions?

Import existing masks for batch process

Since afaik there is no way to do a batch job inpainting with a set of paired images and mask images, and since this first makes the mask then runs img2img, would it be possible to bypass the generation and provide a masks folder and an images folder, with files named the same, and then have this batch process them?
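Pairing the two folders by file stem would be straightforward; here is a stdlib-only sketch (the helper name and extension list are assumptions):

```python
from pathlib import Path

IMAGE_EXTS = {".png", ".jpg", ".jpeg", ".webp"}

def pair_images_with_masks(images_dir, masks_dir):
    # Match each image with the mask sharing its filename stem;
    # images without a mask are skipped instead of failing the batch.
    masks = {p.stem: p for p in Path(masks_dir).iterdir()
             if p.suffix.lower() in IMAGE_EXTS}
    return [(img, masks[img.stem])
            for img in sorted(Path(images_dir).iterdir())
            if img.suffix.lower() in IMAGE_EXTS and img.stem in masks]
```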

Issues with batch count >1 - first pic has lots of images squashed into one

hello there!

I wonder if you could help with this.

Since your recent update / new UI I've been having some odd issues. (as far as I know the timing coincides anyway)

In this case, the first image generated is a grid within the face (the same number of images within the face as the batch count).
gridface

In the example above the batch count was 4, so the first image has a grid of 4 faces placed within it.

I do have 'Show previews of all images generated in a batch as a grid' enabled in settings, but this was never an issue before. I even tried turning it off, but I still get the same issue.

I've been testing with inpaint upload on img2img (as this worked before for a long time)

Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Automatic1111 Version: v1.3.0

  • Checkpoint used (I've tried many) - but all based on SD 1.5
  • Image Size = 512 x 512

Thanks in advance!

Broke my UI

I installed it and now my UI only generates images that look like vomit.
image
Worked fine literally right before with the exact same settings.

add a way to automatically select a limited subset of faces to swap

Sometimes you might want to do something like swap a particular face into a bunch of images, so the prompt gets specific about who it is. In this case, you probably don't want to face swap all the faces in the image, only the most prominent one (unless you're trying to make quintuplets or whatever).

But maybe the background faces are all clones, so you still want to face swap all the background faces as well, but with a different prompt. While you could investigate an implementation that chooses between two prompts for different subsets of faces, for now a simpler approach is to just limit which faces get swapped; you can then run through the batch twice, the first time replacing the most prominent face and the second time the background faces.

To do this, I propose adding a dropdown to the UI that has these choices:

"Swap all faces",
"Swap only largest face",
"Swap all faces except largest",
"Swap only faces roughly in the foreground",
"Swap only faces not in the foreground",
  • "All faces" is the current behavior.
  • "Only largest face" picks whichever face has the largest area (or really, its bounding box has largest area).
  • "All faces except largest" discards just the face from the previous entry
  • "...in the foreground" would just do faces found by FaceMesh, at least at first, we can try to get more clever down the line
  • "...not in the foreground" would do all the faces that "...in the foreground" doesn't do

"Largest face" is really a stand in for "closest face", since you don't know actual distances it's the best you can do. Also, there might be cases where the subject of the image is in the center, but there's a "background person" at the edge of the image whose face is slightly larger but they're clearly in the background, so maybe instead of "largest face" you need to pick "largest face that's towards the middle of the image" or something like that. Maybe call that "most prominent face". You can add that later though.

image size isn't always preserved properly

It wrote out the image at 512x512, which was also the tile size and which I assume is not a coincidence, but I don't know the cause. If I wasn't in the middle of face detection work, I'd track it down myself.

Note that this causes weird stretching of the image. It only happened on some images and not others, and it does happen when running this image by itself.

This wasn't happening with the old version from before the compositing change, and I verified this is happening on the current code without my modifications.

Input:
00484-916853272- the convention floor  at comiccon_0 1  __28yo women with innocent and gorgeous faces and perfect skin  (on the convention floor

Output:
00030--1 0-

(I know this is a terrible image, I had bad hires fix settings when I made the original. It's part of my face detection test set because of the blurry faces in the background. And the face swapped version was just a quick test that ran for 5 steps.)

Problem with masksize blur calculation when mask size is enlarged, border is visible

Guys, does it take the mask size into consideration when we set it to +10 or -10 in the face swap panel? I don't think so.

In the code it's p.mask_blur = int(math.ceil(0.01*height))

And when I set it to 10, I can see a straight border where the image ends abruptly.
I think the blur should make the seam invisible, but I don't think there's code for that currently.
So the issue happens when you use default settings but set mask size to 10, and you get this:
image

image

image

You can see the border of the mask ending abruptly; maybe extra code to fade that out would solve the issue?
The reason I make the mask bigger is the haircut and hairline; at the default of 0 the mask border is still visible. I think updated mask blur code should fix all of this.
There is masking.py in the modules folder; taking a look at how the native code blurs the borders would help as well.
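One possible fix, sketched under the assumption that the slider value is available to the blur calculation: keep the existing height-based blur but widen it when the mask is enlarged, so the grown edge fades out instead of ending abruptly. `mask_blur_for` is a hypothetical helper, not code from the extension.

```python
import math

def mask_blur_for(height, mask_size_offset):
    # Existing behaviour: blur scales only with image height.
    base = int(math.ceil(0.01 * height))
    # Proposed tweak: add the mask enlargement (the -10..+10
    # "mask size" slider) so a grown mask also gets a wider fade.
    return base + max(0, mask_size_offset)
```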

Add option to prefix files with original file name

Could there be an option to prefix files with the original file name? Like the "Use original name for output filename during batch process in extras tab" option for the extras tab?

The extension works really well and I'm trying to use it to face swap all the faces in an mp4. But since the image files are processed in a non-sequential order, they end up out of order and I can't put them back together into an mp4.

Thanks!

TODO List

  • Adjust "Only masked padding, pixels" and "Mask blur" based on face contour area #51
  • Add ControlNet functionality #54, #42
  • Add Wildcards functionality #47
  • Add option to enable overlapping image subdivision

thanks

not an issue but I don't know how else to say THANKS for taking the time to code (and share) this!

ValueError: not enough values to unpack (expected 3, got 2)

Traceback (most recent call last):
File "D:\Stable Diffusion\stable-diffusion-webui\modules\call_queue.py", line 56, in f
res = list(func(*args, **kwargs))
File "D:\Stable Diffusion\stable-diffusion-webui\modules\call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "D:\Stable Diffusion\stable-diffusion-webui\modules\img2img.py", line 169, in img2img
processed = modules.scripts.scripts_img2img.run(p, *args)
File "D:\Stable Diffusion\stable-diffusion-webui\modules\scripts.py", line 399, in run
processed = script.run(p, *script_args)
File "D:\Stable Diffusion\stable-diffusion-webui\extensions\batch-face-swap\scripts\batch_face_swap.py", line 771, in run
finishedImages = generateImages(p, facecfg, path, searchSubdir, viewResults, int(divider), howSplit, saveMask, pathToSave, saveToOriginalFolder, onlyMask, saveNoFace, overrideDenoising, overrideMaskBlur, invertMask, singleMaskPerImage, countFaces, maskSize, keepOriginalName, pathExisting, pathMasksExisting, pathToSaveExisting, selectedTab, loadGenParams, rotation_threshold)
File "D:\Stable Diffusion\stable-diffusion-webui\extensions\batch-face-swap\scripts\batch_face_swap.py", line 386, in generateImages
masks, totalNumberOfFaces, faces_info, skip = findFaces(facecfg, image, width, height, divider, onlyHorizontal, onlyVertical, file, totalNumberOfFaces, singleMaskPerImage, countFaces, maskSize, skip)
File "D:\Stable Diffusion\stable-diffusion-webui\extensions\batch-face-swap\scripts\batch_face_swap.py", line 72, in findFaces
landmarkHull, face_info = getFacialLandmarkConvexHull(image, rect, onlyHorizontal, divider, small_width, small_height, small_image_index, facecfg)
File "D:\Stable Diffusion\stable-diffusion-webui\extensions\batch-face-swap\scripts\face_detect.py", line 137, in getFacialLandmarkConvexHull
height, width, channels = image.shape
ValueError: not enough values to unpack (expected 3, got 2)
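The unpack fails because a grayscale image arrives as a 2-D array; a defensive sketch of the shape handling (hypothetical helper, not the extension's code):

```python
import numpy as np

def image_dims(image):
    # cv2/NumPy images are (h, w) for grayscale but (h, w, c) for
    # color; unpacking three values from a 2-D shape raises the
    # ValueError above, so branch on ndim instead.
    if image.ndim == 2:
        height, width = image.shape
        return height, width, 1
    height, width, channels = image.shape
    return height, width, channels
```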

batch feature request?

I was hoping to run a couple thousand images through this, as the extension implies it handles batch processing, but it seems to only allow one image at a time. Could you add a batch processing feature, please?

Incompatibility with vladmandic / automatic

Tried installing on the hot new Automatic1111 spinoff, https://github.com/vladmandic/automatic,
but got an error:

ERROR Calling script: H:\automatic\extensions\batch-face-swap\scripts\batch_face_swap.py/ui: AttributeError
╭───────────────────────────────────────────────────────────────────────────────── Traceback (most recent call last) ──────────────────────────────────────────────────────────────────────────────────╮
│ H:\automatic\modules\scripts.py:245 in wrap_call │
│ │
│ 244 │ try: │
│ ❱ 245 │ │ res = func(*args, **kwargs) │
│ 246 │ │ return res │
│ │
│ H:\automatic\extensions\batch-face-swap\scripts\batch_face_swap.py:749 in ui │
│ │
│ 748 │ │ │ │ │ │ │ │ │ │ │ available_models = modules.sd_models.checkpo │
│ ❱ 749 │ │ │ │ │ │ │ │ │ │ │ sd_model = gr.Dropdown(label="SD Model", cho │
│ 750 │ │ │ │ │ │ │ │ │ │ │ modules.ui.create_refresh_button(sd_model, m │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
AttributeError: 'NoneType' object has no attribute 'sd_checkpoint_info'

ModuleNotFoundError: No module named 'mediapipe'

SOLVED!
pip install mediapipe

Win 11: how do I install mediapipe?

Error loading script: face_swap.py
Traceback (most recent call last):
File "F:\stable-diffusion-portable-main\modules\scripts.py", line 205, in load_scripts
module = script_loading.load_module(scriptfile.path)
File "F:\stable-diffusion-portable-main\modules\script_loading.py", line 13, in load_module
exec(compiled, module.__dict__)
File "F:\stable-diffusion-portable-main\extensions\batch-face-swap\scripts\face_swap.py", line 10, in <module>
import mediapipe as mp
ModuleNotFoundError: No module named 'mediapipe'
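A common cause of this error is installing mediapipe into the system Python while the webui runs from its own venv. A quick diagnostic snippet, runnable from the webui's interpreter, that checks whether mediapipe is visible to *that* interpreter and prints the matching pip command (a sketch, not part of the extension):

```python
import importlib.util
import sys

# find_spec checks importability without actually importing the module.
def has_mediapipe() -> bool:
    return importlib.util.find_spec("mediapipe") is not None

if not has_mediapipe():
    # sys.executable is the Python currently running (the venv's, if any),
    # so the suggested command installs into the right environment.
    print(f"mediapipe missing for {sys.executable}; "
          f"run: {sys.executable} -m pip install mediapipe")
```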

Without GUI?

Can you make this work from the command line, without the GUI?

Extension corrupts metadata

Using batch swap with Dynamic Prompts, all images contain the same prompt in their metadata, even when the generation was made from the correct one (the CSV output still works properly).
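For checking which prompt an output actually carries: A1111-style tools store the generation parameters in the PNG `parameters` text chunk, which Pillow can write and read back. A minimal sketch (filename and prompt are examples):

```python
import os
import tempfile
from PIL import Image
from PIL.PngImagePlugin import PngInfo

path = os.path.join(tempfile.gettempdir(), "bfs_demo.png")

# Write a tiny PNG with an A1111-style "parameters" text chunk.
img = Image.new("RGB", (8, 8))
meta = PngInfo()
meta.add_text("parameters", "Emma Watson in red dress\nSteps: 20")
img.save(path, pnginfo=meta)

# Read the chunk back; the first line is the positive prompt.
prompt = Image.open(path).info.get("parameters")
print(prompt.splitlines()[0])
```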

Looks like the new version of the plugin is not compatible with "xformers"

python: 3.10.6  •  torch: 2.0.0+cu118  •  xformers: 0.0.19  •  gradio: 3.28.1
Microsoft Edge

fc79eef

[--opt-sdp-attention is OK.]

Total progress: 22it [02:29, 3.52s/it]
Will process 1 images, generating 1 new images for each.
Found 1 face(s) in <PIL.Image.Image image mode=RGB size=944x944 at 0x1A77AB4C9A0>
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 11/11 [00:01<00:00, 10.01it/s]
Found 1 faces in 1 images in 0.953125 seconds.
Error completing request
Arguments: ('task(f4zd02y7lcvqlui)', 0, '', '', [], <PIL.Image.Image image mode=RGBA size=944x944 at 0x1A77AB4F070>, None, None, None, None, None, None, 20, 0, 4, 0, 1, False, False, 1, 1, 7, 1.5, 0.75, -1.0, -1.0, 0, 0, 0, False, 0, 512, 512, 1, 0, 0, 32, 0, '', '', '', [], 0, True, 'img2img', False, '', '', False, 'Euler a', False, '2339HalfTheFishWas_v10.safetensors [7bf8da1368]', True, 0.5, True, 4, True, 32, '', False, 1, 'Both ▦', False, '', False, True, True, False, False, False, False, 1, False, '', '', '', 'generateMasksTab', 1, 4, 2.5, 30, 1.03, 1, 1, 5, 0.5, 5, False, True, False, 20, False, 'MultiDiffusion', False, 10, 1, 1, 64, False, True, 1024, 1024, 96, 96, 48, 1, 'None', 2, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, True, True, True, False, 1536, 96, <controlnet.py.UiControlNetUnit object at 0x000001A77AD5A0E0>, '

    \n
  • CFG Scale should be 2 or lower.
  • \n
\n', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '

Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8

', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', '

Will upscale the image by the selected scale factor; use width and height sliders to set tile size

', 64, 0, 2, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, 1.6, 0.97, 0.4, 0.15, 20, 0, 0, '', False, False, False, None, False, 50, 'dynamic_thresholding;dynamic_prompting', True, True, True, 2, 'Original', 32, 5, 512, 512, 0.1, True, True, 'Inner', 'Original', 'sam_vit_b_01ec64.pth', 'groundingdino_swint_ogc.pth', True, 32, 4, '', '', '', '', '', '', 0.4, 0.4, 0, 0, 0, 0, True, True, 16, 16, 'Text', 'Center', None, 100, 100, '', None, 'Trebuchet MS', 50, 10, 0.4, '') {}
Traceback (most recent call last):
File "D:\StabilityAI\stable-diffusion-webui\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
File "D:\StabilityAI\stable-diffusion-webui\modules\call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "D:\StabilityAI\stable-diffusion-webui\modules\img2img.py", line 181, in img2img
processed = process_images(p)
File "D:\StabilityAI\stable-diffusion-webui\modules\processing.py", line 515, in process_images
res = process_images_inner(p)
File "D:\StabilityAI\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
File "D:\StabilityAI\stable-diffusion-webui\modules\processing.py", line 604, in process_images_inner
p.init(p.all_prompts, p.all_seeds, p.all_subseeds)
File "D:\StabilityAI\stable-diffusion-webui\modules\processing.py", line 1084, in init
self.init_latent = self.sd_model.get_first_stage_encoding(self.sd_model.encode_first_stage(image))
File "D:\StabilityAI\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
File "D:\StabilityAI\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__
return self.__orig_func(*args, **kwargs)
File "D:\StabilityAI\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\StabilityAI\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 830, in encode_first_stage
return self.first_stage_model.encode(x)
File "D:\StabilityAI\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\autoencoder.py", line 83, in encode
h = self.encoder(x)
File "D:\StabilityAI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\StabilityAI\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\model.py", line 536, in forward
h = self.mid.attn_1(h)
File "D:\StabilityAI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\StabilityAI\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\model.py", line 258, in forward
out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None, op=self.attention_op)
File "D:\StabilityAI\stable-diffusion-webui\venv\lib\site-packages\xformers\ops\fmha\__init__.py", line 192, in memory_efficient_attention
return _memory_efficient_attention(
File "D:\StabilityAI\stable-diffusion-webui\venv\lib\site-packages\xformers\ops\fmha\__init__.py", line 290, in _memory_efficient_attention
return _memory_efficient_attention_forward(
File "D:\StabilityAI\stable-diffusion-webui\venv\lib\site-packages\xformers\ops\fmha\__init__.py", line 310, in _memory_efficient_attention_forward
out, *_ = op.apply(inp, needs_gradient=False)
File "D:\StabilityAI\stable-diffusion-webui\venv\lib\site-packages\xformers\ops\fmha\cutlass.py", line 175, in apply
out, lse, rng_seed, rng_offset = cls.OPERATOR(
File "D:\StabilityAI\stable-diffusion-webui\venv\lib\site-packages\torch\_ops.py", line 502, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: CUDA error: invalid configuration argument
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.

Suggestion

Works very well, and I've done this a lot manually to improve weaker likenesses of me, but I wish we could pause it and do other stuff, so...

1. It would be nice to be able to pause, go back to the previous file, and run the face swap again on the previous image if it didn't come out that well.
2. Ability to have a blur that extends the mask, so the jawline also gets replaced, and a bit of the hairline too.
3. Aligning faces, especially rotated ones, could help a lot: it doesn't work too well when the face is rotated too much, and works best when the face is upright without any rotation, so aligning it should help a lot.
4. When it detects multiple faces, swap each face in a separate pass instead of both at once, because that degrades likeness.

That's pretty much it. I would also like to pause just to change settings and then continue to the next file in the folder with adjusted denoise or CFG etc.; it would be useful when I see that the current settings don't work that well and I want to keep running without starting over from the first file.

FileNotFoundError: [WinError 3] The system cannot find the path specified: ''

I updated batch-face-swap to 8ca92d8 and also restarted my entire automatic1111 stable diffusion process, but I'm still getting the same error as earlier today:

Traceback (most recent call last):
  File "C:\+++Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 337, in run_predict
    output = await app.get_blocks().process_api(
  File "C:\+++Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1015, in process_api
    result = await self.call_function(
  File "C:\+++Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 833, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "C:\+++Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "C:\+++Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "C:\+++Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "C:\+++Stable Diffusion\stable-diffusion-webui\extensions\batch-face-swap\scripts\face_swap.py", line 207, in generateMasks
    allFiles = [os.path.join(path, f) for f in os.listdir(path) if os.path.isfile(os.path.join(path, f))]
FileNotFoundError: [WinError 3] The system cannot find the path specified: ''
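The error comes from calling `os.listdir` with an empty images-folder field. Until the extension guards this itself, a defensive sketch (a hypothetical helper, not the extension's actual code) that validates the path first and gives a readable message instead of WinError 3:

```python
import os

def list_image_files(path: str) -> list[str]:
    # Reject an empty or nonexistent folder up front, before os.listdir
    # can raise an opaque FileNotFoundError.
    if not path or not os.path.isdir(path):
        raise ValueError(f"Images folder not set or not found: {path!r}")
    return [os.path.join(path, f) for f in os.listdir(path)
            if os.path.isfile(os.path.join(path, f))]
```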

convert frames to video

It would be nice to generate a video from the frames in the output folder after a batch finishes.

I do it with the following code; the frames are arranged in chronological order (each image is 0001.jpg, 0002.jpg, etc.):

import cv2
import glob

# Collect frames in chronological order (0001.jpg, 0002.jpg, ...)
img_array = []
size = None
for filename in sorted(glob.glob('/out_frames/2023-05-18/*')):
    img = cv2.imread(filename)
    height, width, layers = img.shape
    size = (width, height)
    img_array.append(img)

# Write all frames to an AVI at 15 fps
out = cv2.VideoWriter('project2.avi', cv2.VideoWriter_fourcc(*'DIVX'), 15, size)
for img in img_array:
    out.write(img)
out.release()

However, I have to close the webui, run the code, and reopen the webui.
It would be better to generate the video automatically once all the frames are done.
