PuLID-Flux for ComfyUI

PuLID-Flux ComfyUI implementation (Alpha version)

(Example output image: pulid_flux_einstein)

🆕 Version Updates

  • V0.1.0: Working node with `weight`, `start_at`, and `end_at` support (`attn_mask` not yet working)

Notes

This project was heavily inspired by cubiq/PuLID_ComfyUI. It is only a prototype that uses some convenient model hacks in the encoder section; I wanted to test the model's quality before reimplementing it more formally. For better results I recommend the 16-bit or 8-bit GGUF versions of Flux.1-dev (the FP8 e5m2 variant returns blurry backgrounds). You'll find some basic workflows in the examples directory.

Supported Flux models:

  • 32-bit/16-bit (~22 GB VRAM): model, encoder
  • 8-bit GGUF (~12 GB VRAM): model, encoder
  • 8-bit FP8 e5m2 (~12 GB VRAM): model, encoder
  • 8-bit FP8 e4m3fn (~12 GB VRAM): model, encoder
  • CLIP and VAE (for all models): clip, vae

For GGUF models you will also need to install ComfyUI-GGUF.

Installation

  • Install this repo into ComfyUI/custom_nodes:

    git clone https://github.com/balazik/ComfyUI-PuLID-Flux.git
  • Install all the packages listed in the requirements.txt file into the Python environment where you run ComfyUI. I prefer not to use automatic installation scripts, as I dislike when scripts install software without my knowledge. 😉

  • You need one of the Flux.1-dev models mentioned above. Download the model into ComfyUI/models/unet, the clip and encoder into ComfyUI/models/clip, and the VAE into ComfyUI/models/vae.

  • The PuLID Flux pre-trained model goes in ComfyUI/models/pulid/.

  • The EVA CLIP is EVA02-CLIP-L-14-336 and should be downloaded automatically (it will be located in the huggingface cache directory). If for some reason the auto-download fails (and you get a face_analysis.py `assert 'detection' in self.models` exception), download the EVA-CLIP model manually, put the file into your ComfyUI/models/clip directory, and restart ComfyUI.

  • The facexlib dependency needs to be installed; its models are downloaded on first use.

  • Finally, you need InsightFace with AntelopeV2; the unzipped models should be placed in ComfyUI/models/insightface/models/antelopev2.
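After following the steps above, a quick sanity check helps catch misplaced files. A minimal sketch that only verifies the directories from this README exist (it does not check individual model filenames, since those depend on which variants you downloaded):

```python
import os

# Directories this README expects, relative to the ComfyUI root.
EXPECTED_DIRS = [
    "models/unet",                           # Flux.1-dev model
    "models/clip",                           # clip/encoder (and manual EVA-CLIP)
    "models/vae",                            # VAE
    "models/pulid",                          # PuLID Flux pre-trained model
    "models/insightface/models/antelopev2",  # unzipped AntelopeV2 models
    "custom_nodes/ComfyUI-PuLID-Flux",       # this repo
]

def missing_model_dirs(comfy_root):
    """Return the expected directories that do not exist under comfy_root."""
    return [d for d in EXPECTED_DIRS
            if not os.path.isdir(os.path.join(comfy_root, d))]

if __name__ == "__main__":
    for d in missing_model_dirs("ComfyUI"):
        print(f"missing: {d}")
```

Run it from the directory that contains your ComfyUI folder; an empty output means every expected directory is in place.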

Known issues

  • ApplyPulidFlux doesn't work on hardware with CUDA compute capability < 8.0 (with Flux FP8 it needs bfloat16).
  • When the ApplyPulidFlux node is disconnected after the first run, the Flux model is still influenced by it.
  • ApplyPulidFlux `attn_mask` is not working yet (in progress).
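The first issue can be checked up front: Flux FP8 relies on bfloat16, which requires CUDA compute capability 8.0 (Ampere) or newer. A minimal sketch; on a live system the (major, minor) pair would come from `torch.cuda.get_device_capability()`:

```python
def supports_pulid_fp8(major, minor):
    """True if a GPU with this CUDA compute capability can run
    ApplyPulidFlux with a Flux FP8 model (bfloat16 needs >= 8.0)."""
    return (major, minor) >= (8, 0)

# Examples: Turing (7.5) lacks bfloat16 support, Ampere (8.6) has it.
print(supports_pulid_fp8(7, 5))  # False
print(supports_pulid_fp8(8, 6))  # True
```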

Credits

ComfyUI/ComfyUI - A powerful and modular stable diffusion GUI.

PuLID for Flux - a tuning-free ID customization solution for FLUX.1-dev.

cubiq PuLID_ComfyUI - PuLID ComfyUI native implementation (Thanks for the awesome work you do, Matteo 😉).