
ComfyUI nodes to use LivePortrait

Update 2

Added another alternative face detector: https://github.com/1adrianb/face-alignment

As this can use the blazeface_back_camera model (or SFD), it is far better for smaller faces than MediaPipe, which can only use the blazeface short-range model. The warmup on the first run when using this can take a long time, but subsequent runs are quick.
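
For reference, this is roughly how the face-alignment library is driven standalone. This is a minimal sketch, not this repo's node code; the device string, the dummy frame, and the LandmarksType spelling are assumptions that depend on your setup and library version:

```python
import numpy as np
import face_alignment

# Standalone sketch of the face-alignment library (1adrianb/face-alignment);
# not code from this repo. 'blazeface' with back_model=True selects the
# back-camera model, 'sfd' is the slower but more robust alternative.
fa = face_alignment.FaceAlignment(
    face_alignment.LandmarksType.TWO_D,   # older releases use LandmarksType._2D
    device="cuda",
    face_detector="blazeface",
    face_detector_kwargs={"back_model": True},
)

frame = np.zeros((512, 512, 3), dtype=np.uint8)   # stand-in for a real RGB frame
landmarks = fa.get_landmarks_from_image(frame)    # list of (68, 2) arrays, or None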

Example detection using the blazeface_back_camera:

AnimateDiff_00004.34.mp4

Update

The rework of almost the whole thing that has been in the develop branch is now merged into main. This means old workflows will not work, but everything should be faster and there are lots of new features. For legacy purposes, the old main branch has been moved to the legacy branch.

Changes

  • Added MediaPipe as an alternative to Insightface; when using it, everything should be covered under the MIT and Apache-2.0 licenses.
  • Proper vid2vid, including a smoothing algorithm (thanks @melMass); a rough illustration of the idea is sketched after this list.
  • Improved speed and efficiency, allowing a near-realtime view even in Comfy (~80-100 ms delay)
  • Restructured nodes for more options
  • Auto skipping frames with no face detected
  • Numerous other things I have forgotten about at this point, it's been a lot
  • Better Mac support on MPS (thanks @Grant-CP)
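
To illustrate what temporal smoothing of keypoints means here, a minimal sketch of a plain exponential moving average follows. This is not the actual algorithm contributed by @melMass, only the general idea of damping frame-to-frame jitter in detected keypoints:

```python
import numpy as np

def ema_smooth(keypoints_per_frame, alpha=0.8):
    """Exponential moving average over per-frame keypoints (illustration only,
    not the smoothing algorithm used by these nodes)."""
    smoothed, prev = [], None
    for kp in keypoints_per_frame:                 # each kp: (N, 2) array of points
        kp = np.asarray(kp, dtype=np.float32)
        prev = kp if prev is None else alpha * prev + (1.0 - alpha) * kp
        smoothed.append(prev.copy())
    return smoothed
```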

Update to this update:

  • Converted the landmark runner ONNX model to a torch model. This is not something I had done before, and I didn't manage to produce anything but a .pth file, so you'll just have to trust me on it. This allows running all of this without even having onnxruntime; it runs on GPU and is about as fast. It's available as an option on the MediaPipe cropper node; when selected, it's automatically downloaded from here: https://huggingface.co/Kijai/LivePortrait_safetensors/blob/main/landmark_model.pth

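The automatic download can be reproduced manually if needed; here is a minimal sketch using huggingface_hub. How the resulting .pth should be loaded depends on how it was exported, which isn't documented here, so both common calls are shown:

```python
import torch
from huggingface_hub import hf_hub_download

# Fetch the converted landmark model; the node normally does this automatically.
model_path = hf_hub_download(
    repo_id="Kijai/LivePortrait_safetensors",
    filename="landmark_model.pth",
)

# Loading depends on how the .pth was exported: a scripted/traced module loads
# with torch.jit.load, otherwise torch.load returns whatever object was saved.
try:
    landmark_model = torch.jit.load(model_path, map_location="cpu")
except RuntimeError:
    landmark_model = torch.load(model_path, map_location="cpu")
```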

Examples:

Realtime with webcam feed:

liveportrait_realtime.mp4

Image2vid:

liveportrait_img2vid.mp4

Vid2Vid:

liveportrait_vid2vid.mp4

I have converted all the pickle files to safetensors: https://huggingface.co/Kijai/LivePortrait_safetensors/tree/main

They go into ComfyUI/models/liveportrait (and are automatically downloaded if the folder is not present).
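
If you want to inspect the converted weights yourself, the safetensors library reads them directly. A minimal sketch; the file name below is a placeholder for whichever file from that folder you want to look at:

```python
from pathlib import Path
from safetensors.torch import load_file

model_dir = Path("ComfyUI/models/liveportrait")   # where the nodes expect the weights

# Load one converted checkpoint into a plain state dict and list its tensors;
# "some_model.safetensors" is a placeholder, not an actual file name.
state_dict = load_file(str(model_dir / "some_model.safetensors"))
for name, tensor in state_dict.items():
    print(name, tuple(tensor.shape))
```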

Face detectors

You can either use the original default Insightface, or Google's MediaPipe.

The biggest difference is the license: Insightface is strictly for NON-COMMERCIAL use. MediaPipe is a bit worse at detection and can't run on the GPU on Windows, though it's much faster on the CPU compared to Insightface.
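
For context, MediaPipe only offers the short-range and full-range BlazeFace variants, which is the detection limitation mentioned above. A minimal standalone sketch of the MediaPipe face-detection API follows; the image path is a placeholder, and this is not the node's own code:

```python
import cv2
import mediapipe as mp

# model_selection=0 is the short-range BlazeFace model (faces within ~2 m),
# model_selection=1 is the full-range model; neither matches SFD on small faces.
detector = mp.solutions.face_detection.FaceDetection(
    model_selection=1,
    min_detection_confidence=0.5,
)

frame = cv2.imread("portrait.png")                       # placeholder test image
results = detector.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
if results.detections:
    box = results.detections[0].location_data.relative_bounding_box
    print(box.xmin, box.ymin, box.width, box.height)     # normalized coordinates
```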

Insightface is not automatically installed; if you wish to use it, follow these instructions. If you have a working compile environment, installing it can be as easy as:

pip install insightface

or, for the portable version, in the ComfyUI_windows_portable folder:

python_embeded/python.exe -m pip install insightface

If this fails (and it likely will), you can check the Troubleshooting section of the ReActor node for alternatives:

https://github.com/Gourieff/comfyui-reactor-node

For the Insightface model, extract this to ComfyUI/models/insightface/buffalo_l:

https://github.com/deepinsight/insightface/releases/download/v0.7/buffalo_l.zip

Please note that the Insightface license is non-commercial in nature.
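
Once the files are in place, a quick standalone check that Insightface can load buffalo_l looks roughly like this. This is a sketch only: the root path and image path are assumptions, FaceAnalysis may resolve models under an extra models/ subfolder depending on version, and the node handles its own paths:

```python
import cv2
from insightface.app import FaceAnalysis

# Point FaceAnalysis at the ComfyUI insightface folder (assumed layout; some
# Insightface versions look for <root>/models/buffalo_l instead).
app = FaceAnalysis(
    name="buffalo_l",
    root="ComfyUI/models/insightface",
    providers=["CPUExecutionProvider"],
)
app.prepare(ctx_id=0, det_size=(640, 640))

faces = app.get(cv2.imread("portrait.png"))   # placeholder image path
print(f"detected {len(faces)} face(s)")
```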
