Chi Zhang, Che Wang, Jun-Zhe Wang, Chenming Tang*
{tangchenming}@stu.pku.edu.cn
School of Computer Science, Peking University
* Corresponding author
Computer Vision @ Peking University (2024 Fall)
Tested with Python 3.8.20 on Ubuntu 22.04.5 LTS and macOS Sequoia 15.1.1.
```bash
conda create -n fastclipstyler python=3.8
conda activate fastclipstyler
pip install -r requirements.txt
conda install -c anaconda git
pip install git+https://github.com/openai/CLIP.git
```
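To verify that the environment is set up correctly, a quick sanity check can load CLIP (a minimal sketch; the `ViT-B/32` checkpoint here is only an installation test and may not be the variant FastCLIPstyler actually uses):

```python
# Sanity check: confirm that PyTorch and CLIP import and load.
import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)  # downloads weights on first run
print("CLIP loaded on", device)
```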
Download the ArtEmis Dataset from artemisdataset. Rename `official_data` to `artemis` and move it into `prompts`; `prompts/artemis` should then contain `artemis_dataset_release_v0.csv` and `ola_dataset_release_v0.csv`.
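To confirm the files are in place, the CSVs can be loaded with pandas (a minimal sketch; pandas is assumed to be available, e.g. via `requirements.txt`):

```python
# Check that the ArtEmis CSVs sit where the prompt-generation step expects.
import pandas as pd

for name in ("artemis_dataset_release_v0.csv", "ola_dataset_release_v0.csv"):
    df = pd.read_csv(f"prompts/artemis/{name}")
    print(name, df.shape)  # prints (rows, columns) for each release file
```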
Download GPT-2 from Hugging Face🤗.

```bash
pip install -U huggingface_hub
huggingface-cli download --resume-download gpt2 --local-dir prompts/gpt2 --local-dir-use-symlinks False
```
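If the download succeeded, the local copy should load with `transformers` (a minimal sketch; it assumes the `transformers` package is installed, e.g. via `requirements.txt`):

```python
# Load GPT-2 from the local directory rather than the Hugging Face Hub.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("prompts/gpt2")
model = GPT2LMHeadModel.from_pretrained("prompts/gpt2")
print("GPT-2 parameters:", sum(p.numel() for p in model.parameters()))
```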
Create `prompts/api_key.json`. The JSON file should look like:

```jsonc
{
    "base_url": "https://api.openai.com/v1", // OpenAI API base URL
    "api_key": "sk-..."                      // your OpenAI API key
}
```

Note that strict JSON does not allow comments, so the actual file must contain only the two key/value pairs.
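The prompt-generation script presumably reads this file before calling the OpenAI API; the pattern would look roughly like the sketch below (the client construction assumes the `openai>=1.0` Python package and is illustrative, not the repository's exact code):

```python
# Illustrative only: load the credentials from prompts/api_key.json
# and build an OpenAI client with them.
import json
from openai import OpenAI

with open("prompts/api_key.json") as f:
    cfg = json.load(f)  # fails if the file still contains // comments

client = OpenAI(base_url=cfg["base_url"], api_key=cfg["api_key"])
```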
Then run the prompt generation script:

```bash
cd prompts
python prompt_generation.py
```
To be done by Chi.
To be done by Che.
To run inference with the attached trained model, run

```bash
python inference.py
```

or launch the interactive demo with

```bash
streamlit run streamlit_demo.py
```

This runs inference with the trained FastCLIPstyler model. To change the text prompt or content image, edit the `test_prompts` variable in `inference.py`.
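For example, `test_prompts` might look like the following (the entries are hypothetical placeholders, not the repository's defaults):

```python
# Hypothetical contents of the test_prompts variable in inference.py.
# Replace the strings with the style descriptions you want to apply.
test_prompts = [
    "a watercolor painting",
    "the style of Starry Night by Vincent van Gogh",
]
```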
To run the EdgeCLIPstyler model, change the `text_encoder` field in the `params` class from `fastclipstyler` to `edgeclipstyler`.
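The relevant setting would look roughly like the sketch below (the field and class names follow the description above; everything else about the class is hypothetical):

```python
# Hypothetical sketch of the params class: switch text_encoder to
# "edgeclipstyler" to run the EdgeCLIPstyler model.
class params:
    text_encoder = "edgeclipstyler"  # default: "fastclipstyler"
```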