# Addons for ComfyUI
To install all the nodes, simply put the entire `bsz-nodes` folder inside the `custom_nodes` folder of your ComfyUI installation.

To install specific nodes, you may put individual `.py` files from `bsz-nodes` directly into the ComfyUI `custom_nodes` folder.
`__init__.py` simply forwards all nodes within its folder to ComfyUI; it is not necessary if you're putting nodes directly into `custom_nodes`.
## bsz-auto-hires.py

Contains three nodes, each a different means to the same end result. These nodes are designed to automatically calculate the appropriate latent sizes when performing a "Hi Res Fix" style workflow.
- Input
  - `base_model_res`: Resolution of the base model being used. SD 1.5 ≅ 512, SD 2.1 ≅ 768, SDXL ≅ 1024
- Output
  - `Lo Res Width`: Width intended to be used for the first/low res pass
  - `Lo Res Height`: Height intended to be used for the first/low res pass
  - `Hi Res Width`: Width intended to be used for the final/high res pass
  - `Hi Res Height`: Height intended to be used for the final/high res pass
### BSZAbsoluteHires

- Input
  - `desired_width`: Width in pixels for the final/high res pass
  - `desired_height`: Height in pixels for the final/high res pass
### BSZAspectHires

- Input
  - `desired_aspect_x`: Horizontal aspect
  - `desired_aspect_Y`: Vertical aspect
  - `scale`: Hi Res horizontal and vertical scale over the Lo Res sizes. Note that because this scales both axes, a scale of `2.0` will actually quadruple the number of pixels in the image, so use with care.
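For intuition, here is one plausible way such a node could derive its sizes. The exact rounding bsz-auto-hires uses is an assumption: this sketch keeps the low-res pixel count near `base_model_res`² at the requested aspect ratio, snaps dimensions to 8-pixel multiples (latents work in 8-pixel units), and multiplies by `scale` for the high-res pass.

```python
import math

def auto_hires_sizes(base_model_res, aspect_x, aspect_y, scale, step=8):
    """Plausible sketch of an aspect-based hi-res size calculation.

    Keeps the low-res pixel count near base_model_res**2 at the requested
    aspect ratio, snapping to multiples of `step`; the high-res sizes are
    the low-res sizes times `scale`. The rounding here is an assumption,
    not the node's documented behaviour.
    """
    ratio = aspect_x / aspect_y
    lo_h = math.sqrt(base_model_res ** 2 / ratio)
    lo_w = lo_h * ratio

    def snap(v):
        # Round to the nearest multiple of `step`, never below one step.
        return max(step, int(round(v / step)) * step)

    lo_w, lo_h = snap(lo_w), snap(lo_h)
    return lo_w, lo_h, snap(lo_w * scale), snap(lo_h * scale)
```

Note how a `scale` of `2.0` doubles both axes, quadrupling the total pixel count, which matches the warning above.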
### BSZAutoHiresCombined

A unique node that functions as both BSZAbsoluteHires and BSZAspectHires, with a convenient toggle:

- Input
  - `use_aspect_scale`: Use the aspect & scale inputs instead of the desired width/height inputs
## bsz-principled-sdxl.py

All-in-one solution for SDXL text2img, img2img and scaling/hi res fix: essentially the sdxl and sdxl-upscale workflows in one node. Note that while this node shouldn't be any slower than the regular workflow, ComfyUI caches latent results per-node, so changing even just a refiner setting on this node will restart sampling from the first base pass. There are at least some minimal internal optimizations to skip passes that aren't needed.

Scaling works by running initial scaling passes before running the final pass at the target size.
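The spacing of those intermediate passes isn't specified here; a simple assumption is geometric spacing from the starting resolution to the target, sketched below (illustrative only, not the node's actual code):

```python
def scale_pass_sizes(start, target, iterations):
    """Geometrically space `iterations` pass sizes from `start` to `target`.

    Each pass grows by the same factor and the last pass lands exactly on
    the target. This spacing is an assumption, not bsz-principled-sdxl's
    documented behaviour.
    """
    ratio = target / start
    return [round(start * ratio ** (i / iterations))
            for i in range(1, iterations + 1)]
```

This is also why `scale_iterations` above 1 gets expensive quickly: every intermediate size is sampled again.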
Input fields:

- `base_model`: Model from the base checkpoint
- `base_clip`: CLIP from the base checkpoint
- `latent_image`: Latent image to start from
- `refiner_model`: Model from the refiner checkpoint. Optional
- `refiner_clip`: CLIP from the refiner checkpoint. Optional
- `pixel_scale_vae`: VAE used for pixel scaling methods. Optional; only needed if they're being used
- `positive_prompt_G`: Positive prompt for base CLIP G and the refiner
- `positive_prompt_L`: Positive prompt for base CLIP L. Usually set to the same as CLIP G, but sometimes used for supporting terms
- `negative_prompt`: Negative prompt
- `steps`: Steps for the non-scaled pass
- `denoise`: Denoise amount for the latent input
- `cfg`: CFG scale
- `refiner_amount`: Refiner to base ratio. Requires a refiner model and refiner CLIP to function
- `refiner_ascore_positive`: Refiner aesthetic score for the positive prompt
- `refiner_ascore_negative`: Refiner aesthetic score for the negative prompt
- `sampler`: Sampler
- `scheduler`: Scheduler
- `scale_method`: If set, scales the image to match the target sizes using the provided algorithm
- `scale_target_width`: If `scale_method` is enabled, the image will be resized to this width
- `scale_target_height`: If `scale_method` is enabled, the image will be resized to this height
- `scale_denoise`: Denoise amount for scaled passes
- `scale_steps`: Steps for scaled passes
- `scale_iterations`: Number of scaling passes to run. Experimental and very expensive
- `vae_tile`: Whether to use tiled VAE during pixel scaling
- `seed`: Seed
Recommended settings for various workflows:

- Text2Image: default
- Text2Image w/latent upscale:
  - `scale_method`: `latent bicubic`
- Text2Image w/pixel upscale:
  - `scale_method`: `pixel bicubic`
  - `scale_denoise`: `0.15`
  - `vae_tile`: `encode` if scaling to a very large resolution
- Img2Img w/upscale:
  - Same as Text2Image upscaling
  - `steps`: `0`
## bsz-pixelbuster.py

Node that loads the Pixelbuster library. Requires either `pixelbuster.dll` (Windows) or `libpixelbuster.so` (Linux) to be placed in your `custom_nodes` folder alongside `bsz-pixelbuster.py`.

Input fields:

- `image`: Image[s] to work on
- `code`: Pixelbuster code. See the help for reference
Personal flair of the SDXL "partial diffusion" workflow. Minimalist node setup whose defaults take a balanced approach to speed/quality.

- `bsz-auto-hires.py`: While this workflow doesn't actually perform any upscaling, it still uses the `BSZAutoHiresCombined` node for quick aspect ratio changes and easy CLIP detail target adjustments
Personal flair of the SDXL "partial diffusion" workflow with an added "hi res fix". Slightly prioritizes speed as far as upscaling is concerned.

- `bsz-auto-hires.py`: The workflow is painful without it.
Demonstration of the bsz-principled-sdxl node.

- `bsz-auto-hires.py`: Principled can use the hi res sizes
- `bsz-principled-sdxl.py`: Yes.
| Question | Answer |
|---|---|
| Why is there a separate VAE loader instead of using the VAE directly from the main checkpoint? | I personally find it desirable to have the VAE decoupled from the checkpoint so you can change it without re-baking the models. If this isn't desirable to you, simply remove the Load VAE node and reconnect the traces into the main Load Checkpoint node instead. |
| Why are the KSampler nodes so long? | To show live previews of each stage. I strongly recommend you do the same by launching ComfyUI with `--preview-method latent2rgb` or similar. |
| Why is this setting the default instead of that setting? | It just happens to look better on my benchmark images. If you think it's objectively wrong, open an issue with a compelling case on why it should be changed. |
| You should add the refiner 1 step detail trick | No. That "trick" really just causes the refiner to interpret latent noise as "details" it should refine, which hurts overall image quality. If you render an image and really think it needs it, just load the most recent history item and adjust the refiner steps as needed. ComfyUI caches the previous latents, so you won't have to re-render the whole image, just the part that changed. |