DreamMat: High-quality PBR Material Generation with Geometry- and Light-aware Diffusion Models
- Install the packages in `requirements.txt`. We tested our model on 3090/4090/V100/A6000 GPUs with CUDA 11.8 and PyTorch 2.0.0.

```shell
git clone https://github.com/zzzyuqing/DreamMat.git
cd DreamMat
pip install -r requirements.txt
```
- Install Blender: download `blender-3.2.2-linux-x64.tar.xz` and run:

```shell
tar -xvf blender-3.2.2-linux-x64.tar.xz
export PATH=$PATH:path_to_blender/blender-3.2.2-linux-x64
```
- Download the pre-trained ControlNet checkpoints here or from Hugging Face, and put them in `threestudio_dreammat/model/controlnet`.
- A Docker environment is available at https://hub.docker.com/repository/docker/zzzyuqing/dreammat_image/general
```shell
cd threestudio_dreammat
sh cmd/run_examples.sh
```
On the first run, each model is pre-rendered with Blender, which takes about 15 minutes on a 4090 GPU. There is no console output during this stage, so please be patient. For subsequent runs, `blender_generate` can be set to `false` to skip this step.
You can also train your own geometry- and light-aware ControlNet. Dataset generation and the training code are described below.
Make sure the environment map folder structure is as follows:

```
dataset
|-- <env_dir>
    |-- map1
        |-- map1.exr
    |-- map2
        |-- map2.exr
    |-- map3
        |-- map3.exr
    |-- map4
        |-- map4.exr
    |-- map5
        |-- map5.exr
```
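Before rendering, it can save time to verify this layout programmatically. The helper below is a sketch (not part of the repo) that checks for each expected `mapN/mapN.exr` file:

```python
from pathlib import Path

def check_envmap_layout(env_dir, num_maps=5):
    """Return paths of missing HDR maps, expecting <env_dir>/mapN/mapN.exr."""
    missing = []
    for i in range(1, num_maps + 1):
        exr = Path(env_dir) / f"map{i}" / f"map{i}.exr"
        if not exr.is_file():
            missing.append(str(exr))
    return missing
```

An empty return value means all environment maps are in place.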
Run the following commands to generate pre-rendered data for training:

```shell
cd controlnet_train
blender -b -P blender_script_geometry.py -- \
    --object_path ./dataset/model/046e3307c74746a58ec4bea5b33b7b97.glb \
    --output_dir ./dataset/training_data \
    --elevation 30 \
    --num_images 16

blender -b -P blender_script_light.py -- \
    --object_path ./dataset/model/046e3307c74746a58ec4bea5b33b7b97.glb \
    --env_dir ./dataset/envmap \
    --output_dir ./dataset/training_data \
    --elevation 30 \
    --num_images 16
```
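To pre-render a whole folder of models, the two invocations above can be generated programmatically. This is a hypothetical helper (not part of the repo) that builds the argument lists, assuming the scripts and flags shown above:

```python
from pathlib import Path

def build_render_commands(model_dir, env_dir, output_dir,
                          elevation=30, num_images=16):
    """Build geometry- and light-pass Blender command lines for every
    .glb file under model_dir, mirroring the two invocations above."""
    commands = []
    for glb in sorted(Path(model_dir).glob("*.glb")):
        tail = ["--output_dir", output_dir,
                "--elevation", str(elevation),
                "--num_images", str(num_images)]
        # Geometry pass: renders depth and normal maps.
        commands.append(["blender", "-b", "-P", "blender_script_geometry.py", "--",
                         "--object_path", str(glb)] + tail)
        # Light pass: renders pre-computed lighting under each environment map.
        commands.append(["blender", "-b", "-P", "blender_script_light.py", "--",
                         "--object_path", str(glb), "--env_dir", env_dir] + tail)
    return commands
```

Each resulting list can then be passed to `subprocess.run` from `controlnet_train`.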
The dataset folder structure will be as follows:

```
dataset
|-- training_data
    |-- <uid_0>
        |-- color
            |-- 000_color_env1.png
            |-- ...
        |-- depth
            |-- 000.png
            |-- ...
        |-- light
            |-- 000_m0.0r0.0_env1.png
            |-- ...
        |-- normal
            |-- 000.png
            |-- ...
    |-- <uid_1>
        |-- ...
```
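It may help to confirm that every uid folder contains all four render subfolders before training. A minimal sketch (hypothetical helper, not part of the repo):

```python
from pathlib import Path

REQUIRED = ("color", "depth", "light", "normal")

def incomplete_uids(training_data):
    """Return names of uid folders missing any of the four render subfolders."""
    bad = []
    for uid_dir in sorted(Path(training_data).iterdir()):
        if uid_dir.is_dir() and any(not (uid_dir / sub).is_dir() for sub in REQUIRED):
            bad.append(uid_dir.name)
    return bad
```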
Before training, make sure the JSON file of prompts is in the format:

```json
{
    "<uid_0>" : "<prompt_0>",
    "<uid_1>" : "<prompt_1>",
    "<uid_2>" : "<prompt_2>",
    ...
}
```
and that the training data directory is structured as:

```
training_data
|-- <uid_0>
|-- <uid_1>
|-- <uid_2>
|-- ...
```
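A quick consistency check between the prompt file and the training-data folders can catch missing captions before a run; the sketch below is a hypothetical helper (the file names are assumptions):

```python
import json
from pathlib import Path

def missing_prompts(prompt_json, training_data):
    """Return uids that have training data but no entry in the prompt file."""
    with open(prompt_json) as f:
        prompts = json.load(f)
    uids = {p.name for p in Path(training_data).iterdir() if p.is_dir()}
    return uids - prompts.keys()
```

An empty set means every uid folder has a corresponding prompt.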
We provide several example data here.

Run the training:

```shell
cd controlnet_train
accelerate launch diffusers_train_controlnet.py --config config.json
```
We have borrowed code extensively from the following repositories. Many thanks to the authors for sharing their code.
In addition to the 3D model from Objaverse, we express our profound appreciation to the contributors of the following 3D models:
- Bobcat machine by mohamed ouartassi.
- Molino De Viento _ Windmill by BC-X.
- MedivalHouse | house for living | MedivalVilage by JFred-chill.
- Houseleek plant by matousekfoto.
- Jagernaut (Beyond Human) by skartemka.
- Grabfigur by noe-3d.at.
- Teenage Mutant Ninja Turtles - Raphael by Hellbruch.
- Cat with jet pack by Muru.
- Transformers Universe: Autobot Showdown by Primus03.
- PigMan by Grigorii Ischenko.
- Bulky Knight by Arthur Krut.
- Sir Frog by Adrian Carter.
- Infantry Helmet by Masonsmith2020.
- Sailing Ship Model by Andrea Spognetta (Spogna).
- Venice Mask by DailyArt.
- Bouddha Statue Photoscanned by amcgi.
- Bunny by vivienne0716.
- Baby Animals Statuettes by Andrei Alexandrescu.
- Durian The King of Fruits by Laithai.
- Wooden Shisa (Okinawan Guardian Lion) by Vlad Erium.
We express our profound gratitude to Ziyi Yang for his insightful discussions during the project, and to Lei Yang for his comprehensive coordination and planning. This research work was supported by Information Technology Center and Tencent Lightspeed Studios. Concurrently, we are also exploring more advanced 3D representation and inverse rendering technologies such as Spec-Gaussian and SIRe-IR.
If you find this repository useful in your project, please cite the following work. :)
```bibtex
@article{10.1145/3658170,
  author = {Zhang, Yuqing and Liu, Yuan and Xie, Zhiyu and Yang, Lei and Liu, Zhongyuan and Yang, Mengzhou and Zhang, Runze and Kou, Qilong and Lin, Cheng and Wang, Wenping and Jin, Xiaogang},
  title = {DreamMat: High-quality PBR Material Generation with Geometry- and Light-aware Diffusion Models},
  year = {2024},
  issue_date = {July 2024},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  volume = {43},
  number = {4},
  issn = {0730-0301},
  url = {https://doi.org/10.1145/3658170},
  doi = {10.1145/3658170},
  journal = {ACM Trans. Graph.},
  month = {jul},
  articleno = {39},
  numpages = {18},
  keywords = {3D generation, text-guided texturing, inverse rendering}
}
```