comfystream is a package for running img2img Comfy workflows on video streams.
This repo also includes a WebRTC server and UI that uses comfystream to support streaming from a webcam and processing the stream with a workflow JSON file (API format) created in ComfyUI. If you have an existing ComfyUI installation, the same custom nodes used to create the workflow in ComfyUI will be re-used when processing the video stream.
## Prerequisites
A separate conda environment can be used to avoid dependency issues with an existing ComfyUI installation.

Create the environment:

```
conda create -n comfystream python=3.11
```

Activate the environment:

```
conda activate comfystream
```
Make sure you have PyTorch installed.
Install `comfystream`:

```
pip install git+https://github.com/yondonfu/comfystream.git

# This can be used to install from a local repo
# pip install .

# This can be used to install from a local repo in edit mode
# pip install -e .
```
## tensor_utils

Copy the `tensor_utils` nodes into the `custom_nodes` folder of your ComfyUI workspace:

```
cp -r nodes/tensor_utils custom_nodes
```

For example, if your ComfyUI workspace is under `/home/user/ComfyUI`:

```
cp -r nodes/tensor_utils /home/user/ComfyUI/custom_nodes
```
See `example.py`.
Install dependencies:

```
pip install -r requirements.txt
```

If you have existing custom nodes in your ComfyUI workspace, you will need to install their requirements in your current environment:

```
python install.py --workspace <COMFY_WORKSPACE>
```

Run the server:

```
python server/app.py --workspace <COMFY_WORKSPACE>
```

Show additional options for configuring the server:

```
python server/app.py -h
```
## Remote Setup
A local server should connect with a local UI out-of-the-box. It is also possible to run a local UI and connect with a remote server, but there may be additional dependencies.
In order for the remote server to connect with another peer (i.e. a browser) without any additional dependencies, you will need to allow inbound/outbound UDP traffic on ports 1024-65535 (source).
If you only have a subset of those UDP ports available, you can use the `--media-ports` flag to specify a comma-delimited list of ports to use:

```
python server/app.py --workspace <COMFY_WORKSPACE> --media-ports 1024,1025,...
```
If you are running the server in a restrictive network environment where this is not possible, you will need to use a TURN server.
At the moment, the server supports using Twilio's TURN servers (although it would be straightforward to extend it to support arbitrary TURN servers):

- Sign up for a Twilio account.
- Copy the Account SID and Auth Token from https://console.twilio.com/.
- Set the `TWILIO_ACCOUNT_SID` and `TWILIO_AUTH_TOKEN` environment variables:

```
export TWILIO_ACCOUNT_SID=...
export TWILIO_AUTH_TOKEN=...
```
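The server reads these credentials from the environment at startup, so they must be set in the same shell that launches it. As a quick sanity check, a small helper like the one below can report which variables are missing before you start the server (this helper is a hypothetical illustration, not part of the repo):

```python
import os

def missing_twilio_env() -> list[str]:
    """Return the names of any required Twilio variables that are unset or empty."""
    required = ("TWILIO_ACCOUNT_SID", "TWILIO_AUTH_TOKEN")
    return [name for name in required if not os.environ.get(name)]

if __name__ == "__main__":
    missing = missing_twilio_env()
    if missing:
        print("Missing environment variables:", ", ".join(missing))
    else:
        print("Twilio credentials found.")
```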
## UI

### Prerequisites

Install dependencies:

```
cd ui
npm install --legacy-peer-deps
```

Run local dev server:

```
npm run dev
```
By default the app will be available at http://localhost:3000.
The Stream URL is the URL of the server, which defaults to http://127.0.0.1:8888.
At the moment, a workflow must fulfill the following requirements:

- Single input using a LoadImage node
  - At runtime, this node is replaced with a LoadTensor node
- Single output using a PreviewImage or SaveImage node
  - At runtime, this node is replaced with a SaveTensor node
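These constraints can be checked programmatically before streaming. The sketch below is a hypothetical helper (not part of comfystream) that validates a workflow in ComfyUI's API JSON format, where each top-level key is a node ID mapping to a `class_type` and its `inputs`, against the two rules above:

```python
# Node types that qualify as the single input/output, per the rules above.
INPUT_TYPES = {"LoadImage"}
OUTPUT_TYPES = {"PreviewImage", "SaveImage"}

def validate_workflow(workflow: dict) -> list[str]:
    """Return a list of problems; an empty list means the workflow qualifies."""
    inputs = [nid for nid, node in workflow.items()
              if node.get("class_type") in INPUT_TYPES]
    outputs = [nid for nid, node in workflow.items()
               if node.get("class_type") in OUTPUT_TYPES]
    problems = []
    if len(inputs) != 1:
        problems.append(f"expected exactly 1 LoadImage input, found {len(inputs)}")
    if len(outputs) != 1:
        problems.append(
            f"expected exactly 1 PreviewImage/SaveImage output, found {len(outputs)}")
    return problems

# Minimal API-format workflow for illustration (node IDs and inputs are made up).
example = {
    "1": {"class_type": "LoadImage", "inputs": {"image": "frame.png"}},
    "2": {"class_type": "PreviewImage", "inputs": {"images": ["1", 0]}},
}
```

Running `validate_workflow(example)` on the minimal workflow above returns an empty list, meaning it satisfies both rules.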
This project has been tested successfully with the following local setup:
- OS: Ubuntu
- GPU: Nvidia RTX 4090
- Driver: 550.127.05
- CUDA: 12.5
- torch: 2.5.1+cu121