Commit e0b2fda: docker container
mly-johndpope committed Jan 29, 2025 (parent: ae7c9e7)
Showing 2 changed files with 102 additions and 0 deletions.
55 changes: 55 additions & 0 deletions Dockerfile
@@ -0,0 +1,55 @@
# Start from NVIDIA TensorRT base image
FROM nvcr.io/nvidia/tensorrt:24.01-py3

# Set working directory
WORKDIR /app

# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
git-lfs \
ffmpeg \
libsndfile1 \
&& rm -rf /var/lib/apt/lists/*

# Install Python packages
RUN pip install --no-cache-dir \
torch \
tensorrt==8.6.1 \
librosa \
tqdm \
filetype \
imageio \
opencv-python-headless \
scikit-image \
cython \
cuda-python \
imageio-ffmpeg \
colored \
polygraphy \
numpy==2.0.1

# Clone the repository
RUN git clone https://github.com/antgroup/ditto-talkinghead .

# Download model checkpoints
RUN git lfs install && \
git clone https://huggingface.co/digital-avatar/ditto-talkinghead checkpoints

# Build Cython extensions
RUN cd core/utils/blend && \
python -m cython blend.pyx && \
gcc -shared -pthread -fPIC -fwrapv -O2 -Wall -fno-strict-aliasing \
-I/usr/include/python3.10 \
-o blend_impl.so blend_impl.c

# Set environment variables
ENV PYTHONPATH=/app:$PYTHONPATH

# Command to run inference (can be overridden)
CMD ["python", "inference.py", \
"--data_root", "./checkpoints/ditto_trt_Ampere_Plus", \
"--cfg_pkl", "./checkpoints/ditto_cfg/v0.4_hubert_cfg_trt.pkl", \
"--audio_path", "./example/audio.wav", \
"--source_path", "./example/image.png", \
"--output_path", "./output/result.mp4"]
47 changes: 47 additions & 0 deletions README.md
@@ -166,6 +166,53 @@ python script/cvt_onnx_to_trt.py --onnx_dir "./checkpoints/ditto_onnx" --trt_dir
Then run `inference.py` with `--data_root=./checkpoints/ditto_trt_custom`.




## Docker + NVIDIA container runtime

Build the container:
```shell
docker build -t ditto-talkinghead .
```
Run the container with GPU support:

```shell
docker run --gpus all -v $(pwd)/output:/app/output ditto-talkinghead
```
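Before a full inference run, it can help to sanity-check the built image: confirm the GPU is passed through and TensorRT imports cleanly. A minimal check, assuming the `ditto-talkinghead` tag from the build step above:

```shell
# Sanity-check the image: GPU passthrough plus a TensorRT import.
docker run --rm --gpus all ditto-talkinghead \
    python -c "import tensorrt; print(tensorrt.__version__)"
```

If this prints a version string, the image and GPU passthrough are working; if it errors, check the NVIDIA Container Toolkit setup described later in this section.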
Or to run with custom input files:

```shell
docker run --gpus all \
-v $(pwd)/input:/app/input \
-v $(pwd)/output:/app/output \
ditto-talkinghead \
python inference.py \
--data_root "./checkpoints/ditto_trt_Ampere_Plus" \
--cfg_pkl "./checkpoints/ditto_cfg/v0.4_hubert_cfg_trt.pkl" \
--audio_path "/app/input/your_audio.wav" \
--source_path "/app/input/your_image.png" \
--output_path "/app/output/result.mp4"
```
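The invocation above has several paths that must line up (mounts, container-side input paths, checkpoint locations). A small hypothetical helper, not part of the repository, can compose the command first so it can be inspected before executing:

```shell
# Compose the docker run command for custom inputs. AUDIO, SOURCE and OUT
# are placeholder filenames here; override them via environment variables.
AUDIO=${AUDIO:-your_audio.wav}
SOURCE=${SOURCE:-your_image.png}
OUT=${OUT:-result.mp4}

CMD="docker run --gpus all \
  -v \$(pwd)/input:/app/input \
  -v \$(pwd)/output:/app/output \
  ditto-talkinghead \
  python inference.py \
  --data_root ./checkpoints/ditto_trt_Ampere_Plus \
  --cfg_pkl ./checkpoints/ditto_cfg/v0.4_hubert_cfg_trt.pkl \
  --audio_path /app/input/$AUDIO \
  --source_path /app/input/$SOURCE \
  --output_path /app/output/$OUT"

# Inspect the command first; pipe it to sh once it looks right.
echo "$CMD"
```

This only prints the command, so it is safe to run anywhere; execute it with `echo "$CMD" | sh` once the mounted `input/` directory actually contains the named files.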

Running the container with `--gpus` requires the NVIDIA Container Toolkit on the host:

```shell
# Set up the package repository and GPG key
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg

curl -fsSL https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
  sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
  sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list

# Update the package listing and install
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit

# Configure the Docker daemon to recognize the NVIDIA runtime
sudo nvidia-ctk runtime configure --runtime=docker

# Restart the Docker daemon
sudo systemctl restart docker
```
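After restarting Docker, the usual sample workload from NVIDIA's install guide verifies the runtime is wired up; any small base image works because the toolkit injects the driver libraries at run time:

```shell
# Should print the host's GPU table from inside a container.
sudo docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi
```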

## 📧 Acknowledgement
Our implementation is based on [S2G-MDDiffusion](https://github.com/thuhcsi/S2G-MDDiffusion) and [LivePortrait](https://github.com/KwaiVGI/LivePortrait). Thanks for their remarkable contributions and released code! If we have missed any open-source projects or related articles, we will add them to the acknowledgements promptly.
