Commit 596b136: pull main

wangxingjun778 committed Aug 16, 2023
2 parents 483d4b3 + 042c16c

Showing 5 changed files with 245 additions and 47 deletions.
1 change: 0 additions & 1 deletion .gitignore
@@ -131,4 +131,3 @@ result.mp4

# ast template
ast_index_file.py

94 changes: 56 additions & 38 deletions README.md
@@ -19,6 +19,19 @@ FaceChain is powered by [ModelScope](https://github.com/modelscope/modelscope).

![image](resources/example3.jpg)

# News
- A Colab notebook is now available! You can experience FaceChain directly with our [Colab Notebook](https://colab.research.google.com/drive/1cUhnVXseqD2EJiotZk3k7GsfQK9_yJu_?usp=sharing).

# To-Do List
- Support existing style models (such as those on Civitai) in a plug-and-play fashion. --ongoing
- Support customizable prompts (trying on different outfits, etc.) --ongoing
- Support customizable poses, with ControlNet or Composer
- Support more beauty-retouch effects
- Support the latest foundation models such as SDXL
- Provide Colab compatibility --done
- Provide WebUI compatibility


# Installation

## Compatibility Verification
@@ -39,37 +52,7 @@ The following are the environment dependencies that have been verified:
The following installation methods are supported:


1. Docker【recommended】

If you are familiar with Docker, we recommend this approach:

```shell
# Step1
# Prepare an environment with a GPU, locally or on the cloud; we recommend using Alibaba Cloud ECS, refer to: https://www.aliyun.com/product/ecs

# Step2
# Download the docker image (for installing docker engine, refer to https://docs.docker.com/engine/install/)
docker pull registry.cn-hangzhou.aliyuncs.com/modelscope-repo/modelscope:ubuntu20.04-cuda11.7.1-py38-torch2.0.1-tf1.15.5-1.8.0

# Step3
docker images
docker run -it --name facechain -p 7860:7860 --gpus all your_xxx_image_id /bin/bash
# Note: you may need to install the nvidia-container-runtime, refer to https://github.com/NVIDIA/nvidia-container-runtime

# Step4
# Install gradio in the docker container:
pip3 install gradio

# Step5
GIT_LFS_SKIP_SMUDGE=1 git clone https://github.com/modelscope/facechain.git --depth 1
cd facechain
python3 app.py

# Step6
# Run the app server: click "public URL" --> in the form of: https://xxx.gradio.live
```

2. ModelScope notebook
### 1. ModelScope notebook【recommended】

The ModelScope notebook has a free tier that allows you to run the FaceChain application; refer to [ModelScope Notebook](https://modelscope.cn/my/mynotebook/preset)

@@ -85,19 +68,49 @@ GIT_LFS_SKIP_SMUDGE=1 git clone https://github.com/modelscope/facechain.git --de
import os
os.chdir('/mnt/workspace/facechain')
print(os.getcwd())

!pip3 install gradio
!python3 app.py

# Step4: click "public URL" or "local URL"

# Step4: click "public URL" or "local URL", upload your images to
# train your own model and then generate your digital twin.
```
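Before launching `app.py` in the notebook, it can be worth confirming that the instance actually has a GPU attached. The check below is an optional sketch, not part of the official steps; it assumes torch is already available in the notebook image and that `nvidia-smi` is installed:

```
# Optional sanity check: confirm a GPU is visible to the notebook instance
!nvidia-smi
import torch
print('CUDA available:', torch.cuda.is_available())
```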


3. conda environment
### 2. Docker

You can use a conda virtual environment; refer to the following link to install [Anaconda](https://docs.anaconda.com/anaconda/install/)
If you are familiar with Docker, we recommend this approach:

```shell
# Step1: Prepare an environment with a GPU, locally or on the cloud; we recommend using Alibaba Cloud ECS, refer to: https://www.aliyun.com/product/ecs

# Step2: Download the docker image (for installing the docker engine, refer to https://docs.docker.com/engine/install/)
docker pull registry.cn-hangzhou.aliyuncs.com/modelscope-repo/modelscope:ubuntu20.04-cuda11.7.1-py38-torch2.0.1-tf1.15.5-1.8.0

# Step3: Run the docker container
docker run -it --name facechain -p 7860:7860 --gpus all registry.cn-hangzhou.aliyuncs.com/modelscope-repo/modelscope:ubuntu20.04-cuda11.7.1-py38-torch2.0.1-tf1.15.5-1.8.0 /bin/bash
# Note: you may need to install nvidia-container-runtime, refer to https://github.com/NVIDIA/nvidia-container-runtime

# Step4: Install gradio in the docker container
pip3 install gradio

# Step5: Clone facechain from GitHub and launch the app
GIT_LFS_SKIP_SMUDGE=1 git clone https://github.com/modelscope/facechain.git --depth 1
cd facechain
python3 app.py

# Step6: Run the app server, then click the "public URL" (in the form https://xxx.gradio.live)
```
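If you expect to stop and restart the container, mounting a host directory over the model cache avoids re-downloading models each time. The sketch below is optional and not part of the official steps; the container path `/root/.cache/modelscope` is an assumption based on ModelScope's default cache location and may differ in your image:

```shell
# Optional: persist downloaded models across container restarts via a bind mount
# (the cache path /root/.cache/modelscope is an assumed default; adjust if your image differs)
docker run -it --name facechain -p 7860:7860 --gpus all \
    -v $HOME/facechain-cache:/root/.cache/modelscope \
    registry.cn-hangzhou.aliyuncs.com/modelscope-repo/modelscope:ubuntu20.04-cuda11.7.1-py38-torch2.0.1-tf1.15.5-1.8.0 /bin/bash

# Inside the container, confirm the GPU is visible before installing gradio:
nvidia-smi
```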

### 3. conda Virtual Environment

Use a conda virtual environment and refer to [Anaconda](https://docs.anaconda.com/anaconda/install/) to manage your dependencies. After installation, execute the following commands:
(Note: mmcv has strict environment requirements and might not be compatible in some cases. It's recommended to use Docker.)

```shell
conda create -n facechain python=3.8 # versions of 3.8, 3.9 and 3.10 are verified
conda create -n facechain python=3.8 # Verified environments: 3.8 and 3.10
conda activate facechain

GIT_LFS_SKIP_SMUDGE=1 git clone https://github.com/modelscope/facechain.git --depth 1
@@ -107,13 +120,18 @@ pip3 install -r requirements.txt
pip3 install -U openmim
mim install mmcv-full==1.7.0

cd facechain
# Navigate to the facechain directory and run:
python3 app.py

# Note: click "public URL" or "local URL"

# Finally, click on the URL generated in the log to access the web page.
```
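Since mmcv is the most environment-sensitive dependency (see the note above), a quick import check after installation can catch problems before launching the app. This is an optional sanity check, not part of the official instructions:

```shell
# Optional: verify that torch and mmcv-full import cleanly and that CUDA is visible
python3 -c "import torch, mmcv; print('torch', torch.__version__, '| mmcv', mmcv.__version__, '| CUDA available:', torch.cuda.is_available())"
```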

**Note**: After the app service has launched successfully, go to the URL shown in the log, open the "Image Customization" tab, click "Select Image to Upload", and choose at least one image containing a face. Then click "Start Training" to begin model training. When training is complete, the log will indicate so. Afterwards, switch to the "Image Experience" tab and click "Start Inference" to generate your own digital image.


### 4. Colab notebook
Please refer to [Colab Notebook](https://colab.research.google.com/drive/1cUhnVXseqD2EJiotZk3k7GsfQK9_yJu_?usp=sharing) for details.


# Script Execution

31 changes: 24 additions & 7 deletions README_ZH.md
@@ -21,10 +21,24 @@ FaceChain's models are developed by [ModelScope](https://github.com/modelscope/modelscope)

![image](resources/example3.jpg)

# News
- Colab notebook installation is now supported; you can open the link and try FaceChain directly: [Colab Notebook](https://colab.research.google.com/drive/1cUhnVXseqD2EJiotZk3k7GsfQK9_yJu_?usp=sharing)


# To-Do List
- Plug-and-play support for off-the-shelf style models (e.g., style models from Civitai) --ongoing
- Add a customizable prompt module (trying on different outfits, etc.) --ongoing
- Support specified poses via ControlNet or Composer
- Add a customizable beauty-retouch module
- Upgrade the base model from SD 1.5 to SDXL
- Colab compatibility --done
- WebUI compatibility


# Environment Setup

## Compatibility Verification
FaceChain is a combined model that uses machine learning frameworks including pytorch and tensorflow. The following are the main verified environment dependencies:
FaceChain is a combined model that uses machine learning frameworks including PyTorch and TensorFlow. The following are the main verified environment dependencies:
- python environment: py3.8, py3.10
- pytorch version: torch2.0.0, torch2.0.1
- tensorflow version: 2.8.0, tensorflow-cpu
@@ -42,7 +56,7 @@ FaceChain is a combined model that uses machine learning frameworks including pytorch and tensorflow
## Installation Guide
The following installation methods are supported; choose any one of them:

1. Use the notebook environment provided by ModelScope【recommended】
### 1. Use the notebook environment provided by ModelScope【recommended】

ModelScope provides new users with free initial computing resources, refer to [ModelScope Notebook](https://modelscope.cn/my/mynotebook/preset)

@@ -68,7 +82,7 @@ print(os.getcwd())
```


2. Docker image
### 2. Docker image

If you are familiar with Docker, you can use the docker image we provide, which contains all components the models depend on, so no complex environment installation is needed:
```shell
@@ -79,9 +93,8 @@ print(os.getcwd())
# Step2: Pull the image to your local machine (this assumes the docker engine is installed and its service started; see: https://docs.docker.com/engine/install/)
docker pull registry.cn-hangzhou.aliyuncs.com/modelscope-repo/modelscope:ubuntu20.04-cuda11.7.1-py38-torch2.0.1-tf1.15.5-1.8.0

# Step3: Get the image id and run the container
docker images
docker run -it --name facechain -p 7860:7860 --gpus all your_xxx_image_id /bin/bash # Note: replace your_xxx_image_id with your image id
# Step3: Run the container from the image
docker run -it --name facechain -p 7860:7860 --gpus all registry.cn-hangzhou.aliyuncs.com/modelscope-repo/modelscope:ubuntu20.04-cuda11.7.1-py38-torch2.0.1-tf1.15.5-1.8.0 /bin/bash
# Note: if you get an error that the host GPU cannot be used, you may need to install nvidia-container-runtime, see https://github.com/NVIDIA/nvidia-container-runtime

# Step4: Install gradio in the container
@@ -96,7 +109,7 @@ python3 app.py
```


3. conda virtual environment
### 3. conda virtual environment

Use a conda virtual environment and refer to [Anaconda](https://docs.anaconda.com/anaconda/install/) to manage your dependencies. After installation, run the following commands:
(Note: mmcv has strict environment requirements and may be incompatible in some setups; the Docker approach is recommended.)
@@ -118,6 +131,10 @@ python3 app.py
# Finally, click the URL generated in the log to access the page.
```

### 4. Run with Colab
Colab notebook installation is supported; refer to: [Colab Notebook](https://colab.research.google.com/drive/1cUhnVXseqD2EJiotZk3k7GsfQK9_yJu_?usp=sharing)



Note: after the app service has started successfully, open the page URL shown in the log, go to the "Image Customization" tab, click "Select Image to Upload", and select at least one image containing a face; then click "Start Training" to train the model. When training is complete, the log will indicate so; then switch to the "Image Experience" tab and click "Start Inference" to generate your own digital image.

164 changes: 164 additions & 0 deletions facechain_demo.ipynb
@@ -0,0 +1,164 @@
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"private_outputs": true,
"provenance": [],
"machine_shape": "hm",
"name": "facechain-demo.ipynb"
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
}
},
"cells": [
{
"cell_type": "markdown",
"source": [
"Requirements:\n",
"- GPU Mem Usage: 19G\n",
"- Disk Usage: About 50G"
],
"metadata": {
"id": "AzOycrSUkK-0"
}
},
{
"cell_type": "markdown",
"source": [
"## Get facechain source code from GitHub"
],
"metadata": {
"id": "8yQEWTkekIx9"
}
},
{
"cell_type": "code",
"source": [
"!GIT_LFS_SKIP_SMUDGE=1 git clone https://github.com/modelscope/facechain.git --depth 1"
],
"metadata": {
"id": "WK_uDw0NkHTP"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"## Installation requirements\n",
"- Note that you can ignore warning on dependencies conflicts at the end\n",
"- You may use conda virtual env to avoid warning info."
],
"metadata": {
"id": "jUWWSGDG1ZGK"
}
},
{
"cell_type": "code",
"source": [
"!pip3 install -r facechain/requirements.txt"
],
"metadata": {
"id": "QrwRDBipxKZw"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"## Check environment informaiton"
],
"metadata": {
"id": "0miaB1nC2VrA"
}
},
{
"cell_type": "code",
"source": [
"!nvidia-smi\n",
"!pip3 show torch\n",
"\n",
"# Note that this setup is verified on (cuda 12.0, torch2.0.1+cu118, Nvidia-A100 40G)"
],
"metadata": {
"id": "uvNE0kRxyI4q"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [],
"metadata": {
"id": "klba298h2bbu"
}
},
{
"cell_type": "markdown",
"source": [
"## Installing mmcv-full\n",
"- Several of the underlying models depend on mmcv-full, which could be tricky to install since it is environment-dependent, you may refer to [mmcv's official documentation](https://mmcv.readthedocs.io/zh_CN/latest/get_started/installation.html) for more details..\n",
"- A prebuilt package is provided here for convenience, and was verified on (cuda 12.0, torch2.0.1+cu118, Nvidia-A100 40G)\n",
"- If the prebuilt package does not work on your env setup, please use the alternative installation as suggested by [mmcv's official documentation](https://mmcv.readthedocs.io/zh_CN/latest/get_started/)"
],
"metadata": {
"id": "ciSxxSzI4l_u"
}
},
{
"cell_type": "code",
"source": [
"# use prebuilt mmcv-full package provided by ModelScope (verified on cuda 12.0, torch2.0.1+cu118, Nvidia-A100 40G)\n",
"!pip3 install https://modelscope.oss-cn-beijing.aliyuncs.com/packages/mmcv/mmcv_full-1.7.0-cp310-cp310-linux_x86_64.whl\n",
"\n",
"\n",
"# alternative manual installtation. note that it may take 30+ mins for building dependencies.\n",
"# !pip3 install -U openmim\n",
"# !mim install mmcv-full==1.7.0\n"
],
"metadata": {
"id": "t2lciBXyyuzA"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"## Start facechain WebUI service"
],
"metadata": {
"id": "OxDgRY8H4vta"
}
},
{
"cell_type": "code",
"source": [
"import os\n",
"os.chdir('/path/to/facechain') # Note: replace with your facechain root\n",
"print(os.getcwd())\n",
"\n",
"# Click on the gradio URL to start building your FaceChain digital-twin!\n",
"!python3 app.py"
],
"metadata": {
"id": "NW5o0iiezTCg"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [],
"metadata": {
"id": "Ghuyu_0Gx5XN"
},
"execution_count": null,
"outputs": []
}
]
}
2 changes: 1 addition & 1 deletion requirements.txt
@@ -2,7 +2,7 @@ accelerate
transformers
diffusers
onnxruntime
modelscope[framework]
modelscope
Pillow
opencv-python
torchvision
