---
title: "Develop with ComfyUI-Launcher"
description: "ComfyUI-Launcher Development Guide"
icon: "code"
---

## Step 1: Deploy ComfyUI-Launcher on RunPod

[ComfyUI-Launcher](https://github.com/ComfyWorkflows/ComfyUI-Launcher) allows you to easily launch multiple ComfyUI workspaces. Following [RUNPOD.md](https://github.com/ComfyWorkflows/ComfyUI-Launcher/blob/main/cloud/RUNPOD.md), we can deploy ComfyUI-Launcher to RunPod.

You can monitor the container logs through RunPod, which is very helpful for debugging issues as you install nodes. In my experience, hacking on ComfyUI workflows requires a decent amount of debugging around installing nodes.

I also recommend setting up SSH public keys so you can SSH directly into the GPU server. This will help with installing custom nodes and setting up SSH tunnels.
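If you don't have a key pair yet, you can generate one and paste the public key into RunPod's SSH settings (the key type and path below are just one common choice, not a RunPod requirement):

```shell
# Generate an ed25519 key pair (skip this if you already have one;
# no passphrase here for convenience - add one if you prefer)
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -N ""

# Print the public key, then paste it into RunPod's
# "SSH Public Keys" account setting
cat ~/.ssh/id_ed25519.pub
```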

ComfyUI-Launcher will create two folders:
- `/workspace/comfyui_launcher_models` - where all the models will be stored
- `/workspace/comfyui_launcher_projects` - where all the ComfyUI workspaces will be stored
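Once the launcher is up, a quick sanity check confirms both directories exist (paths are the ones above; an error here means the launcher has not created them yet):

```shell
# List both launcher directories to confirm they were created
ls -d /workspace/comfyui_launcher_models /workspace/comfyui_launcher_projects
```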

You can create multiple ComfyUI workspaces, and each will live in `/workspace/comfyui_launcher_projects/{workspace}/comfyui`.
<Note>
Ignore the `/workspace/comfyui_launcher_projects/{workspace}/comfyui/models` directory - all of your models will be in `/workspace/comfyui_launcher_models`
</Note>
When you create an “empty” workflow, you will see a new workspace with the same name. Note that you have to install a model through the ComfyUI Manager GUI so the “Load Checkpoint” node can find a model.

Enjoy generating your first purple bottle image.

## Step 2: Install ComfyStream

First, we install Conda:
```bash
cd /workspace
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh
export CONDA_PATH=$HOME/miniconda3
eval "$($CONDA_PATH/bin/conda shell.bash hook)"
```
Verify Conda has been installed using `which conda` - you should see `/root/miniconda3/bin/conda`.

If you open a new shell later, re-initialize Conda first:
```bash
export CONDA_PATH=$HOME/miniconda3
eval "$($CONDA_PATH/bin/conda shell.bash hook)"
```

Next, we create and activate a Conda environment:
```bash
conda create -n comfystream python=3.11
conda activate comfystream
conda install pytorch torchvision -c pytorch
pip install twilio aiortc torchaudio scikit-image
```

Next, we install ComfyStream:
```bash
cd /workspace
git clone https://github.com/yondonfu/comfystream.git
cd comfystream
pip install .
```

After that, we copy the `tensor_utils` nodes into the `custom_nodes` folder of the ComfyUI workspace:

```bash
cp -r nodes/tensor_utils /workspace/comfyui_launcher_projects/{workspace}/comfyui/custom_nodes
```

## Step 3: Install the Depth-Anything TensorRT Node

The [ComfyUI-Depth-Anything-Tensorrt](https://github.com/yuvraj108c/ComfyUI-Depth-Anything-Tensorrt) custom node generates depth maps from images. Installing the node is a fairly manual process.

**Installation**
```bash
cd /workspace/comfyui_launcher_projects/{workspace}/comfyui/custom_nodes
git clone https://github.com/yuvraj108c/ComfyUI-Depth-Anything-Tensorrt.git
cd ./ComfyUI-Depth-Anything-Tensorrt
pip install -r requirements.txt
```

**Download and Build the TensorRT Engine**
```bash
wget -O depth_anything_vitl14.onnx https://huggingface.co/yuvraj108c/Depth-Anything-2-Onnx/resolve/main/depth_anything_v2_vitb.onnx?download=true
python export_trt.py
mkdir -p /workspace/comfyui_launcher_models/tensorrt/depth-anything/
mv depth_anything_vitl14-fp16.engine /workspace/comfyui_launcher_models/tensorrt/depth-anything/
```

**Test Out Depth Map Generation in ComfyUI**

- Open up the custom node manager, and you should see **`ComfyUI Depth Anything TensorRT`** listed as an installed custom node.
- Click “Restart”, and reload the browser.
- If you double-click on an empty part of the canvas, you should be able to search for and find `Depth Anything Tensorrt`. Add it to the canvas.
  - If it doesn't show up, check whether the custom node was installed correctly. If not, you can try to “fix it” from the manager; the fixing process will take a while, since it installs some packages.
- Add a “Load Image” node as the input, and a “Preview Image” node as the output. You should be able to pick an image and see its depth map.

## Step 4: Bring Up the ComfyStream Server

Set up a tunnel to open up a UDP connection:

- On the remote server (remote_host), run: `socat TCP4-LISTEN:4321,fork UDP4:127.0.0.1:5678`
- On the local machine, open an SSH tunnel: run `ssh -L 4321:localhost:4321 {remote_user@remote_host and other params from Runpod ssh cmd}`
- On the local machine, run `socat UDP4-LISTEN:1234,fork TCP4:127.0.0.1:4321`

Traffic sent to `localhost:1234` on the local machine will be forwarded over the SSH tunnel and will reach `127.0.0.1:5678` on the remote server.
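The relay chain above can be summarized end to end (port numbers are the ones from the commands above):

```shell
# UDP client on the local machine
#   --UDP--> 127.0.0.1:1234          (local socat: UDP -> TCP)
#   --TCP--> 127.0.0.1:4321          (ssh -L: local 4321 -> remote 4321)
#   ==SSH tunnel==> remote 127.0.0.1:4321  (remote socat: TCP -> UDP)
#   --UDP--> remote 127.0.0.1:5678   (ComfyStream media port)
```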

Next, install the ComfyStream server's dependencies and point it at your workspace:

```bash
cd /workspace/comfystream
pip install -r requirements.txt
python install.py --workspace /workspace/comfyui_launcher_projects/{workspace}/comfyui
```

Open another terminal, and SSH into the RunPod instance:
 - Find the "SSH over exposed TCP" value in RunPod.
 - Copy that command, and add `-L 8888:localhost:8888`
 <Note>For example: `ssh -L 8888:localhost:8888 root@{runpod_ip} -p 16974 -i ~/.ssh/id_ed25519`</Note>

Run ComfyStream:
```bash
python server/app.py --workspace /workspace/comfyui_launcher_projects/{workspace}/comfyui --media-ports=5678
```

We should now have the ComfyStream server running on port `8888`, with media port `5678` listening on UDP. It also uses the same workspace, sharing the available nodes and models with the ComfyUI instance you were using.
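To sanity-check that the server is reachable, you can probe port `8888` from the local machine through the SSH tunnel. Any HTTP status code at all means the tunnel and server are up; the exact endpoint paths are ComfyStream-specific and not assumed here:

```shell
# Probe the tunneled ComfyStream port; prints the HTTP status code
# if anything answers, or a message if the port is unreachable
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8888 \
  || echo "port 8888 not reachable"
```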

## Step 5: Run the Front End

Follow the guide [Run ComfyStream UI](./local-testing-comfystream-ui) to start a live stream with your new development environment.