Commit e844346

Merge pull request #43 from livepeer/local-dev-guide

add local dev guides

2 parents 71ddf3c + 1e9b992
4 files changed (+340, -129 lines)
apps/docs/mint.json (+4, -1)

@@ -90,7 +90,10 @@
       "pages": [
         "technical/reference/available-nodes",
         "technical/reference/performance-recommendations",
-        "technical/reference/local-testing"
+        "technical/reference/local-testing",
+        "technical/reference/local-testing-comfyuilauncher",
+        "technical/reference/local-testing-comfystream-ui"
       ]
     }
   ],
@@ -0,0 +1,127 @@
---
title: "Run ComfyStream UI"
description: "Run the UI for ComfyStream to demonstrate live AI video"
icon: "box"
---
You can run the frontend on your local machine to test the ComfyStream API on a remote host.

System prerequisites:
- [ComfyStream](https://github.com/yondonfu/comfystream)
- Node.js

1. Clone ComfyStream and navigate to the UI directory:
```bash
git clone https://github.com/yondonfu/comfystream
cd comfystream/ui
```
2. Install packages and run:
```bash
npm install --legacy-peer-deps
npm run dev
```
Now you should be able to visit http://localhost:3000 to begin a livestream using the ComfyStream API.

## Customizing the default stream URL
If you are not running ComfyStream UI on the same host as the ComfyStream API, you can customize the default stream URL to ease testing.

Create a new `.env` file with your running ComfyStream API URL:

```bash
cd comfystream/ui
echo "NEXT_PUBLIC_DEFAULT_STREAM_URL=http://10.10.10.1:8888" > .env
```

## Supported workflow JSON

Save this as a JSON file and import it into ComfyStream UI.
<Tabs>
<Tab title="Segment Anything 2">
```json
{
  "1": {
    "inputs": {
      "images": [
        "2",
        0
      ]
    },
    "class_type": "LoadImage"
  },
  "2": {
    "inputs": {
      "filename_prefix": "ComfyUI",
      "images": [
        "8",
        0
      ]
    },
    "class_type": "SaveImage"
  },
  "7": {
    "inputs": {
      "model": "sam2_hiera_tiny.pt",
      "segmentor": "realtime",
      "device": "cuda",
      "precision": "fp16"
    },
    "class_type": "DownloadAndLoadSAM2RealtimeModel"
  },
  "8": {
    "inputs": {
      "coordinates_positive": "[[384, 384]]",
      "coordinates_negative": "",
      "reset_tracking": true,
      "images": [
        "1",
        0
      ],
      "sam2_model": [
        "7",
        0
      ]
    },
    "class_type": "Sam2RealtimeSegmentation"
  }
}
```
</Tab>
<Tab title="Depth Anything 2">
```json
{
  "1": {
    "inputs": {
      "images": [
        "2",
        0
      ]
    },
    "class_type": "SaveTensor",
    "_meta": {
      "title": "SaveTensor"
    }
  },
  "2": {
    "inputs": {
      "engine": "depth_anything_vitl14-fp16.engine",
      "images": [
        "3",
        0
      ]
    },
    "class_type": "DepthAnythingTensorrt",
    "_meta": {
      "title": "Depth Anything Tensorrt"
    }
  },
  "3": {
    "inputs": {},
    "class_type": "LoadTensor",
    "_meta": {
      "title": "LoadTensor"
    }
  }
}
```
</Tab>
</Tabs>
@@ -0,0 +1,125 @@
---
title: "Develop with ComfyUI-Launcher"
description: "ComfyUI-Launcher Development Guide"
icon: "code"
---
## Step 1: Deploy ComfyUI-Launcher on RunPod

[ComfyUI-Launcher](https://github.com/ComfyWorkflows/ComfyUI-Launcher) allows you to easily launch multiple ComfyUI workspaces. Following [RUNPOD.md](https://github.com/ComfyWorkflows/ComfyUI-Launcher/blob/main/cloud/RUNPOD.md), we can deploy ComfyUI-Launcher to RunPod.

You can monitor the container logs through RunPod, which is very helpful for debugging issues as you install nodes. In my experience, hacking on ComfyUI workflows requires a decent amount of debugging around node installation.

I also recommend setting up SSH public keys so you can SSH directly into the GPU server. This will help with installing custom nodes and setting up SSH tunnels.
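A minimal sketch of that setup, assuming an ed25519 key at `~/.ssh/id_ed25519`; the `{runpod_ip}` and `{runpod_port}` placeholders come from the SSH command RunPod shows for your pod:

```bash
# Generate a key pair locally if you don't already have one (key path is an example)
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519

# Add the contents of ~/.ssh/id_ed25519.pub to RunPod's SSH public key settings,
# then connect with the command RunPod shows for your pod, for example:
ssh root@{runpod_ip} -p {runpod_port} -i ~/.ssh/id_ed25519
```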

ComfyUI-Launcher will create two folders:
- `/workspace/comfyui_launcher_models` - where all the models will be stored
- `/workspace/comfyui_launcher_projects` - where all the ComfyUI workspaces will be stored

You can create multiple ComfyUI workspaces, and each will live in `/workspace/comfyui_launcher_projects/{workspace}/comfyui`.
<Note>
Ignore the `/workspace/comfyui_launcher_projects/{workspace}/comfyui/models` directory - all of your models will be in `/workspace/comfyui_launcher_models`.
</Note>
When you create an "empty" workflow, a new workspace with the same name will appear. Note that you have to install a model through the ComfyUI Manager GUI so the "Load Checkpoint" node can find a model.

Enjoy generating your first purple bottle image.

## Step 2: Install ComfyStream

First, we install Conda:
```bash
cd /workspace
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash ./Miniconda3-latest-Linux-x86_64.sh
export CONDA_PATH=$HOME/miniconda3
eval "$($CONDA_PATH/bin/conda shell.bash hook)"
```
Verify Conda has been installed using `which conda` - you should see `/root/miniconda3/bin/conda`. If you open a new shell, re-initialize Conda first:

```bash
export CONDA_PATH=$HOME/miniconda3
eval "$($CONDA_PATH/bin/conda shell.bash hook)"
```

Next, we create and activate a Conda environment:
```bash
conda create -n comfystream python=3.11
conda activate comfystream
conda install pytorch torchvision -c pytorch
pip install twilio aiortc torchaudio scikit-image
```

Next, we install ComfyStream:
```bash
cd /workspace
git clone https://github.com/yondonfu/comfystream.git
cd comfystream
pip install .
```

After that, we copy the `tensor_utils` nodes into the `custom_nodes` folder in the ComfyUI workspace:

```bash
cp -r nodes/tensor_utils /workspace/comfyui_launcher_projects/{workspace}/comfyui/custom_nodes
```

## Step 3: Install the Depth-Anything TensorRT Node

The [ComfyUI-Depth-Anything-Tensorrt](https://github.com/yuvraj108c/ComfyUI-Depth-Anything-Tensorrt) custom node generates depth maps from images. Installing the node is fairly manual.

**Installation**
```bash
cd /workspace/comfyui_launcher_projects/{workspace}/comfyui/custom_nodes
git clone https://github.com/yuvraj108c/ComfyUI-Depth-Anything-Tensorrt.git
cd ./ComfyUI-Depth-Anything-Tensorrt
pip install -r requirements.txt
```

**Download and Build the TensorRT Engine**
```bash
wget -O depth_anything_vitl14.onnx "https://huggingface.co/yuvraj108c/Depth-Anything-2-Onnx/resolve/main/depth_anything_v2_vitb.onnx?download=true"
python export_trt.py
mkdir -p /workspace/comfyui_launcher_models/tensorrt/depth-anything/
mv depth_anything_vitl14-fp16.engine /workspace/comfyui_launcher_models/tensorrt/depth-anything/
```
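If the build succeeded, the engine file should now sit in the shared models folder; a quick way to check:

```bash
# The engine produced by export_trt.py should appear alongside your other models
ls -lh /workspace/comfyui_launcher_models/tensorrt/depth-anything/
```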

**Test Out Depth Map Generation in ComfyUI**

- Open up the custom node manager, and you should see **`ComfyUI Depth Anything TensorRT`** listed as an installed custom node.
- Click "Restart" and reload the browser.
- If you double-click on an empty part of the canvas, you should be able to search for and find `Depth Anything Tensorrt`. Add it to the canvas.
- If it doesn't show up, check whether the custom node was installed correctly. If not, you can try to "fix it" from the manager; the fixing process will take a while, since it installs some packages.
- Add a "Load Image" node as the input and a "Preview Image" node as the output. You should be able to pick an image and see the depth map.

## Step 4: Bring Up the ComfyStream Server

Set up a tunnel to open the UDP connection (the three commands are collected in the sketch below):

- On the remote server (remote_host), run: `socat TCP4-LISTEN:4321,fork UDP4:127.0.0.1:5678`
- On the local machine, open an SSH tunnel: `ssh -L 4321:localhost:4321 {remote_user@remote_host and other params from the RunPod ssh cmd}`
- On the local machine, run `socat UDP4-LISTEN:1234,fork TCP4:127.0.0.1:4321`

Traffic sent to `localhost:1234` on the local machine will be forwarded over the SSH tunnel and will reach `127.0.0.1:5678` on the remote server.
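The same three commands collected into one sketch; the SSH host, port, and key path are placeholders taken from your own RunPod SSH command:

```bash
# On the remote RunPod server: wrap the UDP media port (5678) in a TCP listener
socat TCP4-LISTEN:4321,fork UDP4:127.0.0.1:5678

# On the local machine: forward local TCP port 4321 to the remote TCP listener over SSH
ssh -L 4321:localhost:4321 root@{runpod_ip} -p {runpod_port} -i ~/.ssh/id_ed25519

# On the local machine: expose a local UDP port (1234) that feeds the TCP tunnel
socat UDP4-LISTEN:1234,fork TCP4:127.0.0.1:4321
```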

Bring up the ComfyStream server:

```bash
cd /workspace/comfystream
pip install -r requirements.txt
python install.py --workspace /workspace/comfyui_launcher_projects/{workspace}/comfyui
```

Open another terminal and SSH into the RunPod instance:
- You should find the "SSH over exposed TCP" value in RunPod.
- Copy that command and add `-L 8888:localhost:8888`.
<Note>For example: `ssh -L 8888:localhost:8888 root@{runpod_ip} -p 16974 -i ~/.ssh/id_ed25519`</Note>

Run ComfyStream:
```bash
python server/app.py --workspace /workspace/comfyui_launcher_projects/{workspace}/comfyui --media-ports=5678
```

We should now have the ComfyStream server running on port `8888`, with media port `5678` listening on UDP. It also uses the same workspace, sharing the available nodes and models with the ComfyUI engine you were using.
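As a rough sanity check from the local machine (assuming the `-L 8888:localhost:8888` tunnel above is open), you can confirm the tunneled port is reachable:

```bash
# Succeeds if the SSH tunnel is up and the ComfyStream server is listening on 8888
nc -vz localhost 8888
```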

## Step 5: Run the Frontend

Follow the guide [Run ComfyStream UI](./local-testing-comfystream-ui) to start a live stream with the new development environment.
