Zhen Yang* · Ganggui Ding* · Wen Wang* · Hao Chen* · Bohan Zhuang† · Chunhua Shen*

*Zhejiang University †Monash University
This code was tested with Python 3.9 and PyTorch 2.0.1, using pre-trained models from Hugging Face / diffusers. Specifically, we implement our method on top of Stable Diffusion 1.4. Additional required packages are listed in the requirements file. The code was tested on an NVIDIA GeForce RTX 3090 but should work on other cards.
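As a quick sanity check of the environment, the sketch below loads Stable Diffusion 1.4 through diffusers and generates a test image. This is standard diffusers usage rather than this repo's own loading code; the model ID, dtype, and prompt are assumptions for illustration.

```python
import torch
from diffusers import StableDiffusionPipeline

# Assumed checkpoint: the standard Hugging Face ID for Stable Diffusion 1.4.
model_id = "CompVis/stable-diffusion-v1-4"

# fp16 keeps memory usage well within an RTX 3090's 24 GB.
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# Generate one image to confirm that PyTorch, diffusers, and CUDA work together.
image = pipe("a photo of a corgi wearing sunglasses").images[0]
image.save("env_check.png")
```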
- Download OIR-Bench.
- Create the environment and install the dependencies by running:
conda create -n oir python=3.9
conda activate oir
pip install -r requirements.txt
- Edit basic_config.py in configs/ to set the model path and hyperparameters.
- Modify multi_object_edit.yaml or single_object_edit.yaml in configs/, following multi_object.yaml and single_object.yaml in OIR-Bench/.
- Run single_object_edit.py (the search metric in the paper) or multi_object_edit.py (OIR in the paper) to perform image editing.
- Using prompt_change as a dict key may lead to errors.
- The masks of different editing pairs must not overlap (see the sketch below).
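As a minimal illustration of the mask constraint above, the following sketch loads the per-pair masks of one example and verifies that no two masks overlap. The config keys (`editing_pairs`, `mask`) and file paths are hypothetical and only stand in for the actual schema of the YAML files in configs/ and OIR-Bench/.

```python
import numpy as np
import yaml
from PIL import Image

# Hypothetical config layout: a list of editing pairs, each with a binary mask image.
# The key names below are assumptions for illustration, not this repo's schema.
with open("configs/multi_object_edit.yaml") as f:
    cfg = yaml.safe_load(f)

mask_paths = [pair["mask"] for pair in cfg["editing_pairs"]]

# Load each mask as a boolean array (non-zero pixels count as masked).
masks = [np.array(Image.open(p).convert("L")) > 0 for p in mask_paths]

# Check every pair of masks; overlapping editing-pair masks are not supported.
for i in range(len(masks)):
    for j in range(i + 1, len(masks)):
        overlap = int(np.logical_and(masks[i], masks[j]).sum())
        assert overlap == 0, f"masks {mask_paths[i]} and {mask_paths[j]} overlap in {overlap} pixels"

print("All editing-pair masks are disjoint.")
```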
Many thanks to Minghan Li for the generous help in building the project website.
If you find our work useful, please consider citing:
@article{yang2023OIR,
  title={Object-aware Inversion and Reassembly for Image Editing},
  author={Yang, Zhen and Ding, Ganggui and Wang, Wen and Chen, Hao and Zhuang, Bohan and Shen, Chunhua},
  journal={arXiv preprint arXiv:2310.12149},
  year={2023}
}