This repository contains training, generation and utility scripts for Stable Diffusion.
The Change History has been moved to the bottom of the page.
For easier use (GUI, PowerShell scripts, etc.), please visit the repository maintained by bmaltais. Thanks to @bmaltais!
This repository contains the scripts for:
- DreamBooth training, including U-Net and Text Encoder
- Fine-tuning (native training), including U-Net and Text Encoder
- LoRA training
- Textual Inversion training
- Image generation
- Model conversion (supports 1.x and 2.x, Stable Diffusion ckpt/safetensors and Diffusers)
Stable Diffusion web UI now seems to support LoRA trained by sd-scripts. Thank you for the great work!
These files do not include requirements for PyTorch, because the required versions depend on your environment. Please install PyTorch first (see the installation guide below). The scripts are tested with PyTorch 1.12.1 and 1.13.0, and Diffusers 0.10.2.
Most of the documents are written in Japanese.
English translation by darkstorm2150 is here. Thanks to darkstorm2150!
- Training guide - common: data preparation, options, etc.
- Dataset config
- DreamBooth training guide
- Step-by-step fine-tuning guide
- Training LoRA
- Training Textual Inversion
- Image generation
- note.com Model conversion
Python 3.10.6 and Git:
- Python 3.10.6: https://www.python.org/ftp/python/3.10.6/python-3.10.6-amd64.exe
- git: https://git-scm.com/download/win
Give unrestricted script access to PowerShell so the venv can work:

- Open an administrator PowerShell window
- Type `Set-ExecutionPolicy Unrestricted` and answer `A`
- Close the admin PowerShell window
Open a regular PowerShell terminal and type the following inside:
git clone https://github.com/kohya-ss/sd-scripts.git
cd sd-scripts
python -m venv venv
.\venv\Scripts\activate
pip install torch==1.12.1+cu116 torchvision==0.13.1+cu116 --extra-index-url https://download.pytorch.org/whl/cu116
pip install --upgrade -r requirements.txt
pip install -U -I --no-deps https://github.com/C43H66N12O12S2/stable-diffusion-webui/releases/download/f/xformers-0.0.14.dev0-cp310-cp310-win_amd64.whl
cp .\bitsandbytes_windows\*.dll .\venv\Lib\site-packages\bitsandbytes\
cp .\bitsandbytes_windows\cextension.py .\venv\Lib\site-packages\bitsandbytes\cextension.py
cp .\bitsandbytes_windows\main.py .\venv\Lib\site-packages\bitsandbytes\cuda_setup\main.py
accelerate config
Note: `python -m venv venv` seems to be safer than `python -m venv --system-site-packages venv` (some users have packages in their global Python).
Answers to accelerate config:
- This machine
- No distributed training
- NO
- NO
- NO
- all
- fp16
Note: Some users report that a `ValueError: fp16 mixed precision requires a GPU` error occurs during training. In this case, answer `0` to the 6th question: `What GPU(s) (by id) should be used for training on this machine as a comma-separated list? [all]:` (The single GPU with id `0` will be used.)
Other versions of PyTorch and xformers seem to have problems with training. If there is no other reason, please install the specified versions.
For Lion8bit, you need to upgrade `bitsandbytes` to 0.38.0 or later. Uninstall `bitsandbytes`, and for Windows, install the Windows version whl file from here or another source, for example:
pip install https://github.com/jllllll/bitsandbytes-windows-webui/raw/main/bitsandbytes-0.38.1-py3-none-any.whl
To upgrade, update this repo with `pip install .`, and upgrade the necessary packages manually.
When a new release comes out you can upgrade your repo with the following command:
cd sd-scripts
git pull
.\venv\Scripts\activate
pip install --use-pep517 --upgrade -r requirements.txt
Once the commands have completed successfully you should be ready to use the new version.
The implementation for LoRA is based on cloneofsimo's repo. Thank you for the great work!
The LoRA expansion to Conv2d 3x3 was initially released by cloneofsimo, and its effectiveness was demonstrated in LoCon by KohakuBlueleaf. Thank you so much KohakuBlueleaf!
The majority of the scripts are licensed under ASL 2.0 (including code from Diffusers, cloneofsimo's, and LoCon); however, portions of the project are available under separate license terms:
Memory Efficient Attention Pytorch: MIT
bitsandbytes: MIT
BLIP: BSD-3-Clause
- Prodigy optimizer is supported in each training script. It is a member of the D-Adaptation family and is said to be effective for DyLoRA training. PR #585 Please see the PR for details. Thanks to sdbds!
  - Install the package with `pip install prodigyopt`. Then specify the option like `--optimizer_type="prodigy"`.
- An arbitrary Dataset is supported in each training script (except XTI). You can use it by defining a Dataset class that returns images and captions.
  - Prepare a Python script and define a class that inherits `train_util.MinimalDataset`. Then specify the option like `--dataset_class package.module.DatasetClass` in each training script.
  - Please refer to `MinimalDataset` for the implementation. I will prepare a sample later.
- The following features have been added to the generation script.
  - Added an option `--highres_fix_disable_control_net` to disable ControlNet in the 2nd stage of Highres. fix. Please try it if the image is disturbed by some ControlNet such as Canny.
  - Added variants similar to sd-dynamic-prompts to the prompt.
    - If you specify `{spring|summer|autumn|winter}`, one of them will be randomly selected.
    - If you specify `{2$$chocolate|vanilla|strawberry}`, two of them will be randomly selected.
    - If you specify `{1-2$$ and $$chocolate|vanilla|strawberry}`, one or two of them will be randomly selected and connected by ` and `.
    - You can specify the number of candidates as a range like `0-2`. You cannot omit one side like `-2` or `1-`.
    - Variants can also be specified in prompt options.
    - If you specify `e` or `E`, all candidates will be selected and the prompt will be repeated multiple times (`--images_per_prompt` is ignored). This may be useful for creating X/Y plots.
    - You can also specify something like `--am {e$$0.2|0.4|0.6|0.8|1.0},{e$$0.4|0.7|1.0} --d 1234`. In this case, 15 prompts will be generated (5 * 3).
    - There is no weighting function.
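The basic variant selection logic above can be sketched in a few lines. This is an illustrative helper, not the generation script's actual implementation; it handles the `{a|b|c}`, `{2$$...}`, and `{1-2$$ sep $$...}` forms described above (the `e`/`E` repetition form is omitted):

```python
import random
import re

def expand_variants(prompt: str, seed=None) -> str:
    """Resolve {...} variant blocks: {a|b|c} picks one, {2$$a|b|c} picks two,
    {1-2$$ and $$a|b|c} picks one or two joined by " and ".
    Sketch of the described behavior only."""
    rng = random.Random(seed)

    def resolve(match):
        body = match.group(1)
        count_lo = count_hi = 1
        sep = ", "
        parts = body.split("$$")
        spec = None
        if len(parts) == 3:          # {1-2$$ and $$a|b|c}
            spec, sep, body = parts
        elif len(parts) == 2:        # {2$$a|b|c}
            spec, body = parts
        if spec:
            m = re.fullmatch(r"(\d+)(?:-(\d+))?", spec)
            count_lo = int(m.group(1))
            count_hi = int(m.group(2)) if m.group(2) else count_lo
        candidates = body.split("|")
        n = rng.randint(count_lo, min(count_hi, len(candidates)))
        return sep.join(rng.sample(candidates, n))

    return re.sub(r"\{([^{}]+)\}", resolve, prompt)

print(expand_variants("a photo in {spring|summer|autumn|winter}", seed=0))
```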
- Fixed a bug where clip skip did not work when training with weighted captions (`--weighted_captions` specified) and when generating sample images during training.
- Fixed `train_network.py` to probably work with older versions of LyCORIS.
- `gen_img_diffusers.py` now supports the `BREAK` syntax.
- Max Norm Regularization is now available in `train_network.py`. PR #545 Thanks to AI-Casanova!
  - Max Norm Regularization is a technique to stabilize network training by limiting the norm of the network weights. It may be effective in suppressing overfitting of LoRA and improving stability when used together with other LoRAs. See the PR for details.
  - Specify it as `--scale_weight_norms=1.0`. It seems good to start from `1.0`.
  - The networks other than LoRA in this repository (such as LyCORIS) do not support this option.
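Conceptually, max norm regularization rescales a weight whenever its norm exceeds the threshold. A minimal sketch of the idea (the option name is real; the helper below is hypothetical and operates on a flat list of values, unlike the actual per-module implementation):

```python
import math

def apply_max_norm(weights, max_norm=1.0):
    """If the L2 norm of the weights exceeds max_norm, scale them down so the
    norm equals max_norm; otherwise leave them unchanged. Conceptual sketch of
    what --scale_weight_norms does, not the repository's code."""
    norm = math.sqrt(sum(w * w for w in weights))
    if norm <= max_norm:
        return list(weights)
    scale = max_norm / norm
    return [w * scale for w in weights]

w = [0.5] * 16                      # L2 norm = 2.0
w_reg = apply_max_norm(w, max_norm=1.0)   # norm is now 1.0
```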
- Three types of dropout have been added to `train_network.py` and the LoRA network.
  - Dropout is a technique to suppress overfitting and improve network performance by randomly setting some of the network outputs to 0.
  - `--network_dropout` is normal dropout at the neuron level. In the case of LoRA, it is applied to the output of down. Proposed in PR #545 Thanks to AI-Casanova!
    - `--network_dropout=0.1` sets the dropout probability to `0.1`.
    - Note that the specification method is different from LyCORIS.
  - For the LoRA network, `--network_args` can specify `rank_dropout` to drop out each rank with the specified probability, and `module_dropout` to drop out each module with the specified probability.
    - Specify them as `--network_args "rank_dropout=0.2" "module_dropout=0.1"`.
  - `--network_dropout`, `rank_dropout`, and `module_dropout` can be specified at the same time.
  - Values of 0.1 to 0.3 may be good to try. Values greater than 0.5 should not be specified.
  - `rank_dropout` and `module_dropout` are original techniques of this repository. Their effectiveness has not been verified yet.
  - The networks other than LoRA in this repository (such as LyCORIS) do not support these options.
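To illustrate how the three dropout types differ, here is a conceptual sketch acting on a LoRA module's intermediate activation (the output of the down projection, shape batch x rank, represented as nested lists). This is an illustration of the ideas described above, not the repository's implementation:

```python
import random

def lora_dropouts(hidden, network_dropout=0.0, rank_dropout=0.0,
                  module_dropout=0.0, rng=None):
    """hidden: list of rows (one per sample), each a list of `rank` values.
    Conceptual illustration of the three dropout types only."""
    if rng is None:
        rng = random.Random(0)
    # module_dropout: with this probability, the whole module's output is dropped.
    if rng.random() < module_dropout:
        return [[0.0] * len(row) for row in hidden]
    # rank_dropout: each rank (column) is zeroed with the given probability.
    rank = len(hidden[0])
    keep = [rng.random() >= rank_dropout for _ in range(rank)]
    hidden = [[v if keep[j] else 0.0 for j, v in enumerate(row)]
              for row in hidden]
    # network_dropout: ordinary element-wise (neuron-level) dropout.
    return [[v if rng.random() >= network_dropout else 0.0 for v in row]
            for row in hidden]
```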
- Added an option `--scale_v_pred_loss_like_noise_pred` to each training script to scale the v-prediction loss like the noise prediction loss.
  - By scaling the loss according to the time step, the weights of global noise prediction and local noise prediction become the same, and an improvement in details may be expected.
  - See this article by xrg for details (written in Japanese). Thanks to xrg for the great suggestion!
- A warning is now shown when an image caption file does not exist during training. PR #533 Thanks to TingTingin!
  - The warning is also displayed when using a class+identifier dataset. Please ignore it if this is intended.
- `train_network.py` now supports merging network weights before training. PR #542 Thanks to u-haru!
  - The `--base_weights` option specifies LoRA or other model files (multiple files are allowed) to merge.
  - The `--base_weights_multiplier` option specifies multipliers for the weights to merge (multiple values are allowed). If omitted, or if fewer values than `base_weights` are given, 1.0 is used.
  - This is useful for incremental learning. See the PR for details.
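Merging a LoRA file into the base model before training conceptually adds each module's low-rank product to the corresponding base weight. A rough sketch of the arithmetic for a Linear layer (names hypothetical; the actual `--base_weights` code also handles Conv2d shapes and multiple files):

```python
import numpy as np

def merge_lora_into_weight(weight, lora_down, lora_up, alpha, multiplier=1.0):
    """W' = W + multiplier * (alpha / rank) * (up @ down) -- the usual LoRA
    merge formula for a Linear weight. Illustrative sketch only."""
    rank = lora_down.shape[0]
    return weight + multiplier * (alpha / rank) * (lora_up @ lora_down)

rng = np.random.default_rng(0)
W = np.zeros((8, 8))
down = rng.standard_normal((4, 8))   # rank 4
up = rng.standard_normal((8, 4))
W_merged = merge_lora_into_weight(W, down, up, alpha=4.0, multiplier=0.5)
```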
- A warning is now shown and training continues when uploading to HuggingFace fails.
- D-Adaptation v3.0 is now supported. PR #530 Thanks to sdbds!
  - `--optimizer_type` now accepts `DAdaptAdamPreprint`, `DAdaptAdanIP`, and `DAdaptLion`.
  - `DAdaptAdam` is now new. The old `DAdaptAdam` is available as `DAdaptAdamPreprint`.
  - Simply specifying `DAdaptation` will use `DAdaptAdamPreprint` (same behavior as before).
  - You need to install D-Adaptation v3.0. After activating the venv, run `pip install -U dadaptation`.
  - See the PR and the D-Adaptation documentation for details.
- Fixed several bugs.
  - Fixed a bug in `fine_tune.py` and `train_db.py` where the state was saved even when the `--save_state` option was not specified. PR #521 Thanks to akshaal!
  - Fixed a bug where a LoRA without `alpha` could not be loaded. PR #527 Thanks to Manjiz!
  - Slightly changed the console output during sample generation. PR #515 Thanks to yanhuifair!
- The generation script now uses xformers for the VAE as well.
- Fixed an issue where an error would occur if the encoding of the prompt file was different from the default. PR #510 Thanks to sdbds!
  - Please save the prompt file in UTF-8.
- Added an English translation of the documents by darkstorm2150. Thank you very much!
- The prompt for sample generation during training can now be specified in `.toml` or `.json`. PR #504 Thanks to Linaqruf!
  - For details on the prompt description, please see the PR.
- Added an option `--dim_from_weights` to `train_network.py` to automatically determine the dim (rank) from the weight file. PR #491 Thanks to AI-Casanova!
  - It is useful in combination with `resize_lora.py`. Please see the PR for details.
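Determining the dim (rank) from a weight file essentially means reading the first dimension of a `lora_down` weight. A hedged sketch of the idea, using a plain dict in place of a loaded safetensors state dict (the key names follow the common LoRA convention; the real option may differ in details):

```python
import numpy as np

def dim_from_state_dict(state_dict):
    """Infer the LoRA dim (rank) from the first dimension of a lora_down
    weight. Illustrative sketch, not the actual --dim_from_weights code."""
    for name, tensor in state_dict.items():
        if "lora_down" in name and name.endswith(".weight"):
            return tensor.shape[0]   # rows of the down projection = rank
    raise ValueError("no lora_down weight found in state dict")

# Hypothetical state dict shaped like a rank-16 LoRA module.
sd = {
    "lora_unet_mid_block_attn.lora_down.weight": np.zeros((16, 320)),
    "lora_unet_mid_block_attn.lora_up.weight": np.zeros((320, 16)),
    "lora_unet_mid_block_attn.alpha": np.array(8.0),
}
dim = dim_from_state_dict(sd)   # 16
```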
- Fixed a bug where the noise resolution was incorrect with Multires noise. PR #489 Thanks to sdbds!
  - Please see the PR for details.
- The image generation scripts can now use img2img and highres fix at the same time.
- Fixed a bug in the image generation scripts where the hint image for ControlNet was incorrectly BGR instead of RGB.
- Added a feature to the image generation scripts to use a memory-efficient VAE.
  - If you specify a number with the `--vae_slices` option, the memory-efficient VAE will be used. The maximum output size will be larger, but generation will be slower. Please specify a value of about `16` or `32`.
  - The implementation of the VAE is in `library/slicing_vae.py`.
- The documentation has been moved to the `docs` folder. If you have links to it, please update them.
- Removed `gradio` from `requirements.txt`.
- DAdaptAdaGrad, DAdaptAdan, and DAdaptSGD are now supported by DAdaptation. PR #455 Thanks to sdbds!
  - DAdaptation needs to be installed. Also, depending on the optimizer, DAdaptation may need to be updated. Please update it with `pip install --upgrade dadaptation`.
- Added support for pre-calculation of LoRA weights in the image generation scripts. Specify `--network_pre_calc`.
  - The prompt option `--am` is available. Pre-calculation is disabled when Regional LoRA is used.
- Added Adaptive noise scale to each training script. Specify a number with `--adaptive_noise_scale` to enable it.
  - This is an experimental option. It may be removed or changed in the future.
  - This is an original implementation that automatically adjusts the value of the noise offset according to the absolute value of the mean of each channel of the latents. Appropriate noise offsets are expected to be set for bright and dark images, respectively.
  - Specify it together with `--noise_offset`.
  - The actual value of the noise offset is calculated as `noise_offset + abs(mean(latents, dim=(2,3))) * adaptive_noise_scale`. Since the latents are close to a normal distribution, it may be a good idea to specify a value of about 1/10 of the noise offset up to the same value as the noise offset.
  - Negative values can also be specified, in which case the noise offset will be clipped to 0 or more.
- Other minor fixes.
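The adaptive noise scale formula above can be written out directly. A sketch of the arithmetic with NumPy standing in for the latents tensor of shape (batch, channels, h, w):

```python
import numpy as np

def adaptive_noise_offset(latents, noise_offset, adaptive_noise_scale):
    """noise_offset + abs(mean(latents, dim=(2, 3))) * adaptive_noise_scale,
    clipped to 0 or more, per the formula described above. Sketch of the
    arithmetic only, not the training scripts' code."""
    # Mean over the spatial axes leaves one value per (batch, channel).
    per_channel_mean = np.abs(latents.mean(axis=(2, 3)))
    offset = noise_offset + per_channel_mean * adaptive_noise_scale
    # With a negative adaptive_noise_scale the offset is clipped to >= 0.
    return np.clip(offset, 0.0, None)

latents = np.full((1, 4, 8, 8), 2.0)   # a uniformly "bright" latent
offsets = adaptive_noise_offset(latents, noise_offset=0.05,
                                adaptive_noise_scale=0.01)
# 0.05 + 2.0 * 0.01 = 0.07 for every channel
```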
Please read Releases for recent updates.
The LoRA supported by `train_network.py` has been named to avoid confusion. The documentation has been updated. The following are the names of the LoRA types in this repository.

1. LoRA-LierLa: (LoRA for **Li**n**e**a**r** **La**yers)

    LoRA for Linear layers and Conv2d layers with a 1x1 kernel

2. LoRA-C3Lier: (LoRA for **C**onvolutional layers with a **3**x3 kernel and **Li**n**e**a**r** layers)

    In addition to 1., LoRA for Conv2d layers with a 3x3 kernel

LoRA-LierLa is the default LoRA type for `train_network.py` (without the `conv_dim` network arg). LoRA-LierLa can be used with our extension for AUTOMATIC1111's Web UI, or with the built-in LoRA feature of the Web UI.

To use LoRA-C3Lier with the Web UI, please use our extension.
A prompt file might look like this, for example:
# prompt 1
masterpiece, best quality, (1girl), in white shirts, upper body, looking at viewer, simple background --n low quality, worst quality, bad anatomy,bad composition, poor, low effort --w 768 --h 768 --d 1 --l 7.5 --s 28
# prompt 2
masterpiece, best quality, 1boy, in business suit, standing at street, looking back --n (low quality, worst quality), bad anatomy,bad composition, poor, low effort --w 576 --h 832 --d 2 --l 5.5 --s 40
Lines beginning with `#` are comments. You can specify options for the generated image with two hyphens plus a lowercase letter, such as `--n`, after the prompt. The following options can be used.
- `--n`: Negative prompt up to the next option.
- `--w`: Specifies the width of the generated image.
- `--h`: Specifies the height of the generated image.
- `--d`: Specifies the seed of the generated image.
- `--l`: Specifies the CFG scale of the generated image.
- `--s`: Specifies the number of steps in the generation.
Prompt weighting such as `( )` and `[ ]` works.
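Parsing a prompt line like the ones above can be sketched as follows. This illustrates the `--n/--w/--h/--d/--l/--s` convention just described, not the generation script's actual parser:

```python
import re

# Maps the one-letter option names described above to readable keys.
OPTION_KEYS = {"n": "negative_prompt", "w": "width", "h": "height",
               "d": "seed", "l": "scale", "s": "steps"}

def parse_prompt_line(line):
    """Split a prompt-file line into the positive prompt and its --x options;
    each option's value runs up to the next option. Illustrative sketch."""
    # Splitting on " --<letter> " keeps the captured letter in the result list.
    parts = re.split(r"\s--([a-z])\s+", " " + line.strip())
    result = {"prompt": parts[0].strip()}
    for key, value in zip(parts[1::2], parts[2::2]):
        result[OPTION_KEYS.get(key, key)] = value.strip()
    return result

p = parse_prompt_line("masterpiece, best quality, 1girl --n low quality, "
                      "bad anatomy --w 768 --h 768 --d 1 --l 7.5 --s 28")
```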