new bib
dailenson authored and dai gang committed Jun 26, 2023
0 parents commit 301897d
Showing 24 changed files with 2,145 additions and 0 deletions.
5 changes: 5 additions & 0 deletions .gitignore
@@ -0,0 +1,5 @@
__pycache__
data/*
Saved/*
model_zoo/*.pth
auto_*
21 changes: 21 additions & 0 deletions LICENSE
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2023 Gang Dai

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
121 changes: 121 additions & 0 deletions README.md
@@ -0,0 +1,121 @@
![MIT LICENSE](https://shields.io/badge/license-MIT-green)
![python 3.8](https://img.shields.io/badge/python-3.8-brightgreen)
# 🔥 Disentangling Writer and Character Styles for Handwriting Generation
## 📢 Introduction
- The proposed style-disentangled Transformer (SDT) generates online handwriting conditioned on both content and style references. Existing RNN-based methods mainly capture a person’s overall writing style, neglecting the subtle style inconsistencies between characters written by the same person. In light of this, SDT disentangles writer-wise and character-wise style representations from individual handwriting samples to enhance imitation performance.
- We further extend SDT with an offline-to-offline framework that improves the generation quality of offline Chinese handwriting.

![overview_sdt](static/overview_sdt.jpg)

## 📺 Handwriting generation results
- **Online Chinese handwriting generation**
![online Chinese](static/online_Chinese.jpg)
- **Applications to various scripts**
![other scripts](static/various_scripts.jpg)
- **Extension on offline Chinese handwriting generation**
![offline Chinese](static/offline_Chinese.jpg)


## 🔨 Requirements
```
python 3.8
pytorch >=1.8
easydict 1.9
einops 0.4.1
```
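If it helps, the list above can be pinned in a `requirements.txt`; the exact pins below are assumptions based on the versions listed, and PyTorch is usually best installed with the platform-specific command from pytorch.org:
```
torch>=1.8
easydict==1.9
einops==0.4.1
```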
## 📂 Folder Structure
```
SDT/
├── train.py - main script to start training
├── test.py - generate characters via trained model
├── evaluate.py - evaluation of generated samples
├── configs/*.yml - holds configuration for training
├── parse_config.py - class to handle config file
├── data_loader/ - anything about data loading goes here
│   └── loader.py
├── model_zoo/ - pre-trained content encoder model
├── data/ - default directory for storing experimental datasets
├── model/ - networks, models and losses
│   ├── encoder.py
│   ├── gmm.py
│   ├── loss.py
│   ├── model.py
│   └── transformer.py
├── saved/
│   ├── models/ - trained models are saved here
│   ├── tborad/ - tensorboard visualization
│   └── samples/ - visualization samples in the training process
├── trainer/ - trainers
│   └── trainer.py
└── utils/ - small utility functions
    ├── util.py
    └── logger.py - set log dir for tensorboard and logging output
```
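The `model/gmm.py` entry above suggests the decoder models pen offsets with a bivariate Gaussian mixture, as is common in Graves-style online handwriting models. A minimal, hypothetical sketch of such a mixture density (illustrative names, not the repo's actual code):

```python
import math

def bivariate_gaussian(dx, dy, mu_x, mu_y, sigma_x, sigma_y, rho):
    # Density of a correlated 2-D Gaussian evaluated at pen offset (dx, dy).
    zx = (dx - mu_x) / sigma_x
    zy = (dy - mu_y) / sigma_y
    z = zx * zx + zy * zy - 2.0 * rho * zx * zy
    denom = 2.0 * math.pi * sigma_x * sigma_y * math.sqrt(1.0 - rho * rho)
    return math.exp(-z / (2.0 * (1.0 - rho * rho))) / denom

def gmm_density(dx, dy, weights, components):
    # Mixture density: `weights` sum to 1, `components` holds
    # (mu_x, mu_y, sigma_x, sigma_y, rho) tuples.
    return sum(w * bivariate_gaussian(dx, dy, *c)
               for w, c in zip(weights, components))
```

The training loss would then be the negative log of this density summed over the pen trajectory.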

## 💿 Datasets

We provide Chinese, Japanese, and English datasets in [Google Drive](https://drive.google.com/drive/folders/17Ju2chVwlNvoX7HCKrhJOqySK-Y-hU8K?usp=share_link) | [Baidu Netdisk](https://pan.baidu.com/s/1RNQSRhBAEFPe2kFXsHZfLA) PW:xu9u. Please download the datasets, unzip them, and move the extracted files to the `data/` directory.

## 🍔 Pre-trained model
- We provide the pre-trained content encoder in [Google Drive](https://drive.google.com/drive/folders/1N-MGRnXEZmxAW-98Hz2f-o80oHrNaN_a?usp=share_link) | [Baidu Netdisk](https://pan.baidu.com/s/1RNQSRhBAEFPe2kFXsHZfLA) PW:xu9u. Please download it and put it in the `model_zoo/` directory.
- We also provide a well-trained SDT model in [Google Drive](https://drive.google.com/drive/folders/1N-MGRnXEZmxAW-98Hz2f-o80oHrNaN_a?usp=share_link) | [Baidu Netdisk](https://pan.baidu.com/s/1RNQSRhBAEFPe2kFXsHZfLA) PW:xu9u, so that users can generate handwriting right away without retraining.

## 🚀 Training & Test
**Training**
- To train the SDT on the Chinese dataset, run this command:
```
python train.py --cfg configs/CHINESE_CASIA.yml --log Chinese_log
```

- To train the SDT on the Japanese dataset, run this command:
```
python train.py --cfg configs/Japanese_TUATHANDS.yml --log Japanese_log
```

- To train the SDT on the English dataset, run this command:
```
python train.py --cfg configs/English_CASIA.yml --log English_log
```
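All three commands share the same two flags, so `train.py` presumably parses them along these lines (a hypothetical sketch; the defaults shown are assumptions, not the repo's actual values):

```python
import argparse

def build_parser():
    parser = argparse.ArgumentParser(description="Train SDT")
    parser.add_argument("--cfg", default="configs/CHINESE_CASIA.yml",
                        help="path to the YAML training config")
    parser.add_argument("--log", default="Chinese_log",
                        help="log directory name for this run")
    return parser

# Example: reproduce the English training invocation above.
args = build_parser().parse_args(
    ["--cfg", "configs/English_CASIA.yml", "--log", "English_log"])
```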

**Qualitative Test**
- To generate Chinese handwritings with our SDT, run this command:
```
python test.py --pretrained_model checkpoint_path --store_type online --sample_size 500 --dir Generated/Chinese
```

- To generate Japanese handwritings with our SDT, run this command:
```
python test.py --pretrained_model checkpoint_path --store_type online --sample_size 500 --dir Generated/Japanese
```

- To generate English handwritings with our SDT, run this command:
```
python test.py --pretrained_model checkpoint_path --store_type online --sample_size 500 --dir Generated/English
```
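`--store_type online` indicates that the generated samples are stored as online trajectories, i.e. sequences of pen offsets with a pen-lift state. A small sketch of how such offsets can be converted to absolute stroke coordinates for rendering (the exact storage format used by this repo is an assumption):

```python
def deltas_to_strokes(deltas):
    # Convert (dx, dy, pen_lift) offsets into absolute strokes;
    # each stroke is a list of (x, y) points drawn without lifting the pen.
    x = y = 0.0
    strokes, current = [], []
    for dx, dy, pen_lift in deltas:
        x += dx
        y += dy
        current.append((x, y))
        if pen_lift:  # pen leaves the paper: close the current stroke
            strokes.append(current)
            current = []
    if current:
        strokes.append(current)
    return strokes
```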

**Quantitative Evaluation**
- To evaluate the generated handwriting, set `data_path` to the directory of generated samples (e.g., `Generated/Chinese`) and run this command:
```
python evaluate.py --data_path Generated/Chinese
```

## ❤️ Citation
If you find our work inspiring or use our codebase in your research, please cite our work:
```
@inproceedings{dai2023disentangling,
  title={Disentangling Writer and Character Styles for Handwriting Generation},
  author={Dai, Gang and Zhang, Yifan and Wang, Qingfeng and Du, Qing and Yu, Zhuliang and Liu, Zhuoman and Huang, Shuangping},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={5977--5986},
  year={2023}
}
```
33 changes: 33 additions & 0 deletions configs/CHINESE_CASIA.yml
@@ -0,0 +1,33 @@
MODEL:
  ENCODER_LAYERS: 2
  WRI_DEC_LAYERS: 2
  GLY_DEC_LAYERS: 2
  NUM_HEAD_LAYERS: 1
  NUM_IMGS: 15
  NUM_GPUS: 1 # TODO, support multi GPUs
SOLVER:
  BASE_LR: 0.0002
  MAX_ITER: 200000
  WARMUP_ITERS: 20000
  TYPE: Adam # TODO, support optional optimizer
  GRAD_L2_CLIP: 5.0
TRAIN:
  ISTRAIN: True
  IMS_PER_BATCH: 64
  SNAPSHOT_BEGIN: 2000
  SNAPSHOT_ITERS: 4000
  VALIDATE_ITERS: 2000
  VALIDATE_BEGIN: 2000
  SEED: 1001
  IMG_H: 64
  IMG_W: 64
TEST:
  ISTRAIN: False
  IMG_H: 64
  IMG_W: 64
DATA_LOADER:
  NUM_THREADS: 8
  CONCAT_GRID: True
  TYPE: ScriptDataset
  PATH: data
  DATASET: CHINESE
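The SOLVER block above sets `BASE_LR: 0.0002` with `WARMUP_ITERS: 20000`. A plausible linear-warmup schedule consistent with those numbers (the repo's actual scheduler is not shown in this diff, so treat this as a sketch):

```python
def warmup_lr(step, base_lr=2e-4, warmup_iters=20000):
    # Linearly ramp the learning rate from ~0 to base_lr over warmup_iters
    # steps, then hold it constant (a decay phase may follow in practice).
    if step < warmup_iters:
        return base_lr * (step + 1) / warmup_iters
    return base_lr
```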
33 changes: 33 additions & 0 deletions configs/English_CASIA.yml
@@ -0,0 +1,33 @@
MODEL:
  ENCODER_LAYERS: 2
  WRI_DEC_LAYERS: 2
  GLY_DEC_LAYERS: 2
  NUM_HEAD_LAYERS: 1
  NUM_IMGS: 15
  NUM_GPUS: 1 # TODO, support multi GPUs
SOLVER:
  BASE_LR: 0.0002
  MAX_ITER: 200000
  WARMUP_ITERS: 20000
  TYPE: Adam # TODO, support optional optimizer
  GRAD_L2_CLIP: 5.0
TRAIN:
  ISTRAIN: True
  IMS_PER_BATCH: 64
  SNAPSHOT_BEGIN: 2000
  SNAPSHOT_ITERS: 4000
  VALIDATE_ITERS: 2000
  VALIDATE_BEGIN: 2000
  SEED: 1001
  IMG_H: 64
  IMG_W: 64
TEST:
  ISTRAIN: False
  IMG_H: 64
  IMG_W: 64
DATA_LOADER:
  NUM_THREADS: 8
  CONCAT_GRID: True
  TYPE: ScriptDataset
  PATH: data
  DATASET: ENGLISH
33 changes: 33 additions & 0 deletions configs/Japanese_TUATHANDS.yml
@@ -0,0 +1,33 @@
MODEL:
  ENCODER_LAYERS: 2
  WRI_DEC_LAYERS: 2
  GLY_DEC_LAYERS: 2
  NUM_HEAD_LAYERS: 1
  NUM_IMGS: 15
  NUM_GPUS: 1 # TODO, support multi GPUs
SOLVER:
  BASE_LR: 0.0002
  MAX_ITER: 200000
  WARMUP_ITERS: 20000
  TYPE: Adam # TODO, support optional optimizer
  GRAD_L2_CLIP: 5.0
TRAIN:
  ISTRAIN: True
  IMS_PER_BATCH: 64
  SNAPSHOT_BEGIN: 2000
  SNAPSHOT_ITERS: 4000
  VALIDATE_ITERS: 2000
  VALIDATE_BEGIN: 2000
  SEED: 1001
  IMG_H: 64
  IMG_W: 64
TEST:
  ISTRAIN: False
  IMG_H: 64
  IMG_W: 64
DATA_LOADER:
  NUM_THREADS: 8
  CONCAT_GRID: True
  TYPE: ScriptDataset
  PATH: data
  DATASET: JAPANESE
1 change: 1 addition & 0 deletions data