@@ -13,7 +13,46 @@ And you can add Gaussian noise to the input images.
1. Download the [DF2K dataset](https://drive.google.com/file/d/1P9pcaGjvq3xiF22GXIq7ciZta3rjZxaY/view?usp=sharing). Any other dataset also works, as long as its directory hierarchy is `<NAME>/train/`, `<NAME>/valid/`.
2. Move it under the dataset directory (e.g. `vdsr_pytorch/dataset/DF2K`); see the layout sketch below.
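The resulting layout would then look roughly like the sketch below (assuming the DF2K name from step 1; `train/` presumably holds the training images and `valid/` the validation images):
```
vdsr_pytorch/
└── dataset/
    └── DF2K/
        ├── train/
        └── valid/
```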
- ## Training
+ ## Usage
+ ### Training & Evaluation
+ ```
+ usage: main.py [-h] --dataset DATASET --crop_size CROP_SIZE
+                --upscale_factor UPSCALE_FACTOR [--batch_size BATCH_SIZE]
+                [--test_batch_size TEST_BATCH_SIZE] [--epochs EPOCHS] [--lr LR]
+                [--step STEP] [--clip CLIP] [--weight-decay WEIGHT_DECAY]
+                [--cuda] [--threads THREADS] [--gpuids GPUIDS [GPUIDS ...]]
+                [--add_noise] [--noise_std NOISE_STD] [--test] [--model PATH]
+
+ optional arguments:
+   -h, --help            show this help message and exit
+   --dataset DATASET     dataset directory name
+   --crop_size CROP_SIZE
+                         network input size
+   --upscale_factor UPSCALE_FACTOR
+                         super resolution upscale factor
+   --batch_size BATCH_SIZE
+                         training batch size
+   --test_batch_size TEST_BATCH_SIZE
+                         testing batch size
+   --epochs EPOCHS       number of epochs to train for
+   --lr LR               Learning Rate. Default=0.001
+   --step STEP           Sets the learning rate to the initial LR decayed by
+                         momentum every n epochs, Default: n=10
+   --clip CLIP           Clipping Gradients. Default=0.4
+   --weight-decay WEIGHT_DECAY, --wd WEIGHT_DECAY
+                         Weight decay, Default: 1e-4
+   --cuda                use cuda?
+   --threads THREADS     number of threads for data loader to use
+   --gpuids GPUIDS [GPUIDS ...]
+                         GPU ID for using
+   --add_noise           add gaussian noise?
+   --noise_std NOISE_STD
+                         standard deviation of gaussian noise
+   --test                test mode
+   --model PATH          path to test or resume model
+ ```
+
+ #### Example for training
```
> python main.py --dataset DF2K --cuda --gpuids 0 1 --upscale_factor 2 --crop_size 256 --batch_size 128 --test_batch_size 32 --epochs 100
```
@@ -22,7 +61,7 @@
> python3 main.py --dataset DF2K --cuda --gpuids 0 1 --upscale_factor 2 --crop_size 256 --batch_size 128 --test_batch_size 32 --epochs 100
```
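Since this VDSR variant can add Gaussian noise to its inputs, a noisy-training run can be sketched by extending the command above with the `--add_noise` and `--noise_std` flags; the standard deviation below is purely illustrative, not a recommended value:
```
> python3 main.py --dataset DF2K --cuda --gpuids 0 1 --upscale_factor 2 --crop_size 256 --batch_size 128 --test_batch_size 32 --epochs 100 --add_noise --noise_std 0.01
```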
- ## Test
+ ### Example for evaluation
```
> python main.py --dataset Urban100 --cuda --gpuids 0 1 --upscale_factor 2 --crop_size 256 --test_batch_size 32 --test --model model_epoch_100.pth
```
@@ -31,7 +70,28 @@
> python3 main.py --dataset Urban100 --cuda --gpuids 0 1 --upscale_factor 2 --crop_size 256 --test_batch_size 32 --test --model model_epoch_100.pth
```
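According to the help text above, `--model` can also be used to resume training rather than test; a resume run might look like the sketch below, reusing the training settings from the earlier example and assuming `model_epoch_100.pth` was saved by a previous run:
```
> python3 main.py --dataset DF2K --cuda --gpuids 0 1 --upscale_factor 2 --crop_size 256 --batch_size 128 --test_batch_size 32 --epochs 100 --model model_epoch_100.pth
```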
- ## Sample usage
+ ### Sample usage
+ ```
+ usage: run.py [-h] --input_image INPUT_IMAGE --model MODEL
+               [--output_filename OUTPUT_FILENAME]
+               [--scale_factor SCALE_FACTOR] [--cuda]
+               [--gpuids GPUIDS [GPUIDS ...]]
+
+ optional arguments:
+   -h, --help            show this help message and exit
+   --input_image INPUT_IMAGE
+                         input image to use
+   --model MODEL         model file to use
+   --output_filename OUTPUT_FILENAME
+                         where to save the output image
+   --scale_factor SCALE_FACTOR
+                         factor by which super resolution needed
+   --cuda                use cuda
+   --gpuids GPUIDS [GPUIDS ...]
+                         GPU ID for using
+ ```
+
+ #### Example for demonstration
```
> python run.py --cuda --gpuids 0 1 --scale_factor 2 --model model_epoch_100.pth --input_image test_scale2x.jpg --output_filename test_scale2x_out.jpg
```
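Since `--cuda` and `--gpuids` are optional in the usage above, the demonstration can presumably also be run on the CPU by dropping them; this is only a sketch, and whether a GPU-trained checkpoint loads cleanly on a CPU-only machine depends on how run.py loads the model:
```
> python run.py --scale_factor 2 --model model_epoch_100.pth --input_image test_scale2x.jpg --output_filename test_scale2x_out.jpg
```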