Add DreamPrompt validation (carson-katri#530)
* Add DreamPrompt validation and better pipeline caching

* Cleanup comments

* Add warning before installing dependencies

* Check for wandb/k_diffusion and add operator to uninstall them

* Fix minimum Blender version requirement

* Add validation to projection

* Change pipeline display behavior

* Support depth and inpainting import

* Improve error message on conversion failure

* Add development environment setup guide

* Link to DEVELOPMENT_ENVIRONMENT.md from the README

* Update Linux manual install (assuming Blender 3.4 installed via snap)

* Group models by type

* Fix description

* Add missing DreamStudio models

* Improve schedulers

* Re-add CLIPSeg support

* Add a fix-it for using the RENDER_RESULT as an init_img

* Fix error reporting

* Add doc about ensurepip

* Revise fit handling

* Improve code quality for fit handling

* Apply `load_pipe` suggestions

Co-authored-by: NullSenseStudio <[email protected]>

* Update to use class instead of tuple

* Check all revisions for a match

* Update generator_process/actions/prompt_to_image.py

Co-authored-by: NullSenseStudio <[email protected]>

* Fix missed conflict

* Fix negative prompt

---------

Co-authored-by: NullSenseStudio <[email protected]>
carson-katri and NullSenseStudio authored Feb 3, 2023
1 parent 76a363b commit 3f096ca
Showing 28 changed files with 691 additions and 358 deletions.
21 changes: 1 addition & 20 deletions README.md
@@ -65,26 +65,7 @@ If you have an issue with a supported GPU, please create an issue.
If your hardware is unsupported, you can use DreamStudio to process in the cloud. Follow the instructions in the release notes to get set up with DreamStudio.

# Contributing
After cloning the repository, there are a few more steps you need to complete to set up your development environment:
1. Install submodules:
```sh
git submodule update --init --recursive
```
2. I recommend the [Blender Development](https://marketplace.visualstudio.com/items?itemName=JacquesLucke.blender-development) extension for VS Code for debugging. If you just want to install manually though, you can put the `dream_textures` repo folder in Blender's addon directory.
3. After running the local add-on in Blender, setup the model weights like normal.
4. Install dependencies locally
* Open Blender's preferences window
* Enable *Interface* > *Display* > *Developer Extras*
* Then install dependencies for development under *Add-ons* > *Dream Textures* > *Development Tools*
* This will download all pip dependencies for the selected platform into `.python_dependencies`


### macOS

1. On Apple Silicon, you may run into an error with `requirements-dream-studio.txt` where gRPC installs an incompatible binary. If so, use the following command to install the correct gRPC version:
```sh
pip install --no-binary :all: grpcio --ignore-installed --target .python_dependencies --upgrade
```
For detailed instructions on installing from source, see the guide on [setting up a development environment](./docs/DEVELOPMENT_ENVIRONMENT.md).

# Troubleshooting

2 changes: 1 addition & 1 deletion __init__.py
@@ -15,7 +15,7 @@
"name": "Dream Textures",
"author": "Dream Textures contributors",
"description": "Use Stable Diffusion to generate unique textures straight from the shader editor.",
"blender": (3, 0, 0),
"blender": (3, 1, 0),
"version": (0, 0, 9),
"location": "Image Editor -> Sidebar -> Dream",
"category": "Paint"
15 changes: 7 additions & 8 deletions classes.py
@@ -1,14 +1,15 @@
-from .operators.install_dependencies import InstallDependencies
+from .operators.install_dependencies import InstallDependencies, UninstallDependencies
from .operators.open_latest_version import OpenLatestVersion
from .operators.dream_texture import DreamTexture, ReleaseGenerator, CancelGenerator
from .operators.view_history import SCENE_UL_HistoryList, RecallHistoryEntry, ClearHistory, RemoveHistorySelection, ExportHistorySelection, ImportPromptFile
from .operators.inpaint_area_brush import InpaintAreaBrushActivated
from .operators.upscale import Upscale
from .operators.project import ProjectDreamTexture, dream_texture_projection_panels
from .operators.notify_result import NotifyResult
from .property_groups.dream_prompt import DreamPrompt
from .property_groups.seamless_result import SeamlessResult
from .ui.panels import dream_texture, history, upscaling, render_properties
-from .preferences import OpenHuggingFace, OpenContributors, StableDiffusionPreferences, OpenDreamStudio, ImportWeights, Model, ModelSearch, InstallModel, PREFERENCES_UL_ModelList
+from .preferences import OpenURL, StableDiffusionPreferences, ImportWeights, Model, ModelSearch, InstallModel, PREFERENCES_UL_ModelList

from .ui.presets import DREAM_PT_AdvancedPresets, DREAM_MT_AdvancedPresets, AddAdvancedPreset, RestoreDefaultPresets

Expand All @@ -32,15 +33,14 @@
DREAM_PT_AdvancedPresets,
DREAM_MT_AdvancedPresets,
AddAdvancedPreset,

NotifyResult,

# The order these are registered in matters
*dream_texture.dream_texture_panels(),
*upscaling.upscaling_panels(),
*history.history_panels(),
*dream_texture_projection_panels(),

dream_texture.OpenClipSegDownload,
dream_texture.OpenClipSegWeightsDirectory,
)

PREFERENCE_CLASSES = (
Expand All @@ -50,10 +50,9 @@
Model,
DreamPrompt,
SeamlessResult,
UninstallDependencies,
InstallDependencies,
-OpenHuggingFace,
+OpenURL,
ImportWeights,
OpenContributors,
RestoreDefaultPresets,
OpenDreamStudio,
StableDiffusionPreferences)
111 changes: 111 additions & 0 deletions docs/DEVELOPMENT_ENVIRONMENT.md
@@ -0,0 +1,111 @@
# Setting Up a Development Environment

With the following steps, you can start contributing to Dream Textures.

These steps can also be used to set up the add-on on Linux.

## Cloning

A basic knowledge of Git will be necessary to contribute. To start, clone the repository:

```sh
git clone https://github.com/carson-katri/dream-textures.git dream_textures
```

> If you use SSH, clone with `git clone git@github.com:carson-katri/dream-textures.git dream_textures`

This will clone the repository into the `dream_textures` folder.
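
The repository also uses Git submodules (this was the first setup step in the old README); initialize them after cloning:

```sh
cd dream_textures
git submodule update --init --recursive
```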

## Installing to Blender

You can install the add-on to Blender in multiple ways. The easiest way is to copy the folder into the add-ons directory.

This directory is in different places on different systems.

* Windows
* `%USERPROFILE%\AppData\Roaming\Blender Foundation\Blender\3.4\scripts\addons`
* macOS
* `/Users/$USER/Library/Application Support/Blender/3.4/scripts/addons`
* Linux
* `$HOME/.config/blender/3.4/scripts/addons`

> This path may be different depending on how you installed Blender. See [Blender's documentation](https://docs.blender.org/manual/en/latest/advanced/blender_directory_layout.html) for more information on the directory layout.

If you can't find the add-on folder, check where another third-party add-on you already have is located in Blender's preferences.

![A screenshot highlighting the add-on directory in Blender preferences](assets/development_environment/locating_addons.png)
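
Rather than copying the folder, you can also symlink the cloned repository into the add-ons directory so Blender always picks up your latest changes. A minimal sketch, assuming the default Blender 3.4 paths listed above and that you run it from inside the `dream_textures` folder:

```sh
# macOS: link the repo into Blender's add-ons directory
ln -s "$(pwd)" "$HOME/Library/Application Support/Blender/3.4/scripts/addons/dream_textures"

# Linux
ln -s "$(pwd)" "$HOME/.config/blender/3.4/scripts/addons/dream_textures"
```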

### Using Visual Studio Code

> This is not necessary if you won't be making any changes to Dream Textures or prefer a different IDE.

You can also install and debug the add-on with the [Blender Development](https://marketplace.visualstudio.com/items?itemName=JacquesLucke.blender-development) extension for Visual Studio Code.

Open the `dream_textures` folder in VS Code, open the command palette (Windows: <kbd>Shift</kbd> + <kbd>Ctrl</kbd> + <kbd>P</kbd>, macOS: <kbd>Shift</kbd> + <kbd>Command</kbd> + <kbd>P</kbd>), and search for the command `Blender: Start`.

![](assets/development_environment/command_palette.png)

Then choose which Blender installation to use.

![](assets/development_environment/choose_installation.png)

Blender will now start up with the add-on installed. You can verify this by going to Blender's preferences and searching for *Dream Textures*.

## Installing Dependencies

When installing from source, the dependencies are not included. You can install them from Blender's preferences.

First, enable *Developer Extras* so Dream Textures' developer tools will be displayed.

![](assets/development_environment/developer_extras.png)

Then, use the *Developer Tools* section to install the dependencies.

![](assets/development_environment/install_dependencies.png)

### Installing Dependencies Manually

In some cases, the *Install Dependencies* tool may not work. If it doesn't, you can install the dependencies from the command line.

The best way to install dependencies is using the Python that ships with Blender. The command will differ depending on your operating system and Blender installation.

On some platforms, Blender does not come with `pip` pre-installed. You can use `ensurepip` to install it if necessary.

```sh
# Windows
"C:\Program Files\Blender Foundation\Blender 3.4\3.4\python\bin\python.exe" -m ensurepip

# macOS
/Applications/Blender.app/Contents/Resources/3.4/python/bin/python3.10 -m ensurepip

# Linux (via snap)
/snap/blender/3132/3.4/python/bin/python3.10 -m ensurepip
```
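
You can confirm `pip` is now available before continuing (the macOS path is shown; substitute the matching Python path for your platform):

```sh
/Applications/Blender.app/Contents/Resources/3.4/python/bin/python3.10 -m pip --version
```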

Once you have `pip`, the dependencies can be installed.

All of the packages *must* be installed to `dream_textures/.python_dependencies`. The following commands assume they are being run from inside the `dream_textures` folder.

```sh
# Windows
"C:\Program Files\Blender Foundation\Blender 3.4\3.4\python\bin\python.exe" -m pip install -r requirements/win-linux-cuda.txt --target .python_dependencies

# macOS
/Applications/Blender.app/Contents/Resources/3.4/python/bin/python3.10 -m pip install -r requirements/mac-mps-cpu.txt --target .python_dependencies

# Linux (via snap)
/snap/blender/3132/3.4/python/bin/python3.10 -m pip install -r requirements/win-linux-cuda.txt --target .python_dependencies
```
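
To check that the packages landed in the right place, you can try importing one of them from `.python_dependencies` with Blender's Python. A quick sanity check, assuming `diffusers` is among the installed requirements (macOS path shown):

```sh
/Applications/Blender.app/Contents/Resources/3.4/python/bin/python3.10 -c "import sys; sys.path.insert(0, '.python_dependencies'); import diffusers; print(diffusers.__version__)"
```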

## Using the Add-on

Once you have the dependencies installed, the add-on will become fully usable. Continue setting up as described in the [setup guide](./SETUP.md).

## Common Issues

### macOS

1. On Apple Silicon, you may run into an error with `requirements-dream-studio.txt` where gRPC installs an incompatible binary. If so, use the following command to install the correct gRPC version:
```sh
pip install --no-binary :all: grpcio --ignore-installed --target .python_dependencies --upgrade
```
(5 binary files not shown; likely the screenshots added under `docs/assets/development_environment/`.)
@@ -5,6 +5,8 @@ class ModelConfig(enum.Enum):
STABLE_DIFFUSION_1 = "v1"
STABLE_DIFFUSION_2_BASE = "v2 (512, epsilon)"
STABLE_DIFFUSION_2 = "v2 (768, v_prediction)"
STABLE_DIFFUSION_2_DEPTH = "v2 (depth)"
STABLE_DIFFUSION_2_INPAINTING = "v2 (inpainting)"

@property
def original_config(self):
Expand All @@ -15,6 +17,10 @@ def original_config(self):
return {'model': {'base_learning_rate': 0.0001, 'target': 'ldm.models.diffusion.ddpm.LatentDiffusion', 'params': {'linear_start': 0.00085, 'linear_end': 0.012, 'num_timesteps_cond': 1, 'log_every_t': 200, 'timesteps': 1000, 'first_stage_key': 'jpg', 'cond_stage_key': 'txt', 'image_size': 64, 'channels': 4, 'cond_stage_trainable': False, 'conditioning_key': 'crossattn', 'monitor': 'val/loss_simple_ema', 'scale_factor': 0.18215, 'use_ema': False, 'unet_config': {'target': 'ldm.modules.diffusionmodules.openaimodel.UNetModel', 'params': {'use_checkpoint': True, 'use_fp16': True, 'image_size': 32, 'in_channels': 4, 'out_channels': 4, 'model_channels': 320, 'attention_resolutions': [4, 2, 1], 'num_res_blocks': 2, 'channel_mult': [1, 2, 4, 4], 'num_head_channels': 64, 'use_spatial_transformer': True, 'use_linear_in_transformer': True, 'transformer_depth': 1, 'context_dim': 1024, 'legacy': False}}, 'first_stage_config': {'target': 'ldm.models.autoencoder.AutoencoderKL', 'params': {'embed_dim': 4, 'monitor': 'val/rec_loss', 'ddconfig': {'double_z': True, 'z_channels': 4, 'resolution': 256, 'in_channels': 3, 'out_ch': 3, 'ch': 128, 'ch_mult': [1, 2, 4, 4], 'num_res_blocks': 2, 'attn_resolutions': [], 'dropout': 0.0}, 'lossconfig': {'target': 'torch.nn.Identity'}}}, 'cond_stage_config': {'target': 'ldm.modules.encoders.modules.FrozenOpenCLIPEmbedder', 'params': {'freeze': True, 'layer': 'penultimate'}}}}}
case ModelConfig.STABLE_DIFFUSION_2:
return {'model': {'base_learning_rate': 0.0001, 'target': 'ldm.models.diffusion.ddpm.LatentDiffusion', 'params': {'parameterization': 'v', 'linear_start': 0.00085, 'linear_end': 0.012, 'num_timesteps_cond': 1, 'log_every_t': 200, 'timesteps': 1000, 'first_stage_key': 'jpg', 'cond_stage_key': 'txt', 'image_size': 64, 'channels': 4, 'cond_stage_trainable': False, 'conditioning_key': 'crossattn', 'monitor': 'val/loss_simple_ema', 'scale_factor': 0.18215, 'use_ema': False, 'unet_config': {'target': 'ldm.modules.diffusionmodules.openaimodel.UNetModel', 'params': {'use_checkpoint': True, 'use_fp16': True, 'image_size': 32, 'in_channels': 4, 'out_channels': 4, 'model_channels': 320, 'attention_resolutions': [4, 2, 1], 'num_res_blocks': 2, 'channel_mult': [1, 2, 4, 4], 'num_head_channels': 64, 'use_spatial_transformer': True, 'use_linear_in_transformer': True, 'transformer_depth': 1, 'context_dim': 1024, 'legacy': False}}, 'first_stage_config': {'target': 'ldm.models.autoencoder.AutoencoderKL', 'params': {'embed_dim': 4, 'monitor': 'val/rec_loss', 'ddconfig': {'double_z': True, 'z_channels': 4, 'resolution': 256, 'in_channels': 3, 'out_ch': 3, 'ch': 128, 'ch_mult': [1, 2, 4, 4], 'num_res_blocks': 2, 'attn_resolutions': [], 'dropout': 0.0}, 'lossconfig': {'target': 'torch.nn.Identity'}}}, 'cond_stage_config': {'target': 'ldm.modules.encoders.modules.FrozenOpenCLIPEmbedder', 'params': {'freeze': True, 'layer': 'penultimate'}}}}}
case ModelConfig.STABLE_DIFFUSION_2_DEPTH:
return {'model': {'base_learning_rate': 5e-07, 'target': 'ldm.models.diffusion.ddpm.LatentDepth2ImageDiffusion', 'params': {'linear_start': 0.00085, 'linear_end': 0.012, 'num_timesteps_cond': 1, 'log_every_t': 200, 'timesteps': 1000, 'first_stage_key': 'jpg', 'cond_stage_key': 'txt', 'image_size': 64, 'channels': 4, 'cond_stage_trainable': False, 'conditioning_key': 'hybrid', 'scale_factor': 0.18215, 'monitor': 'val/loss_simple_ema', 'finetune_keys': None, 'use_ema': False, 'depth_stage_config': {'target': 'ldm.modules.midas.api.MiDaSInference', 'params': {'model_type': 'dpt_hybrid'}}, 'unet_config': {'target': 'ldm.modules.diffusionmodules.openaimodel.UNetModel', 'params': {'use_checkpoint': True, 'image_size': 32, 'in_channels': 5, 'out_channels': 4, 'model_channels': 320, 'attention_resolutions': [4, 2, 1], 'num_res_blocks': 2, 'channel_mult': [1, 2, 4, 4], 'num_head_channels': 64, 'use_spatial_transformer': True, 'use_linear_in_transformer': True, 'transformer_depth': 1, 'context_dim': 1024, 'legacy': False}}, 'first_stage_config': {'target': 'ldm.models.autoencoder.AutoencoderKL', 'params': {'embed_dim': 4, 'monitor': 'val/rec_loss', 'ddconfig': {'double_z': True, 'z_channels': 4, 'resolution': 256, 'in_channels': 3, 'out_ch': 3, 'ch': 128, 'ch_mult': [1, 2, 4, 4], 'num_res_blocks': 2, 'attn_resolutions': [], 'dropout': 0.0}, 'lossconfig': {'target': 'torch.nn.Identity'}}}, 'cond_stage_config': {'target': 'ldm.modules.encoders.modules.FrozenOpenCLIPEmbedder', 'params': {'freeze': True, 'layer': 'penultimate'}}}}}
case ModelConfig.STABLE_DIFFUSION_2_INPAINTING:
return {'model': {'base_learning_rate': 5e-05, 'target': 'ldm.models.diffusion.ddpm.LatentInpaintDiffusion', 'params': {'linear_start': 0.00085, 'linear_end': 0.012, 'num_timesteps_cond': 1, 'log_every_t': 200, 'timesteps': 1000, 'first_stage_key': 'jpg', 'cond_stage_key': 'txt', 'image_size': 64, 'channels': 4, 'cond_stage_trainable': False, 'conditioning_key': 'hybrid', 'scale_factor': 0.18215, 'monitor': 'val/loss_simple_ema', 'finetune_keys': None, 'use_ema': False, 'unet_config': {'target': 'ldm.modules.diffusionmodules.openaimodel.UNetModel', 'params': {'use_checkpoint': True, 'image_size': 32, 'in_channels': 9, 'out_channels': 4, 'model_channels': 320, 'attention_resolutions': [4, 2, 1], 'num_res_blocks': 2, 'channel_mult': [1, 2, 4, 4], 'num_head_channels': 64, 'use_spatial_transformer': True, 'use_linear_in_transformer': True, 'transformer_depth': 1, 'context_dim': 1024, 'legacy': False}}, 'first_stage_config': {'target': 'ldm.models.autoencoder.AutoencoderKL', 'params': {'embed_dim': 4, 'monitor': 'val/rec_loss', 'ddconfig': {'double_z': True, 'z_channels': 4, 'resolution': 256, 'in_channels': 3, 'out_ch': 3, 'ch': 128, 'ch_mult': [1, 2, 4, 4], 'num_res_blocks': 2, 'attn_resolutions': [], 'dropout': 0.0}, 'lossconfig': {'target': 'torch.nn.Identity'}}}, 'cond_stage_config': {'target': 'ldm.modules.encoders.modules.FrozenOpenCLIPEmbedder', 'params': {'freeze': True, 'layer': 'penultimate'}}}}, 'data': {'target': 'ldm.data.laion.WebDataModuleFromConfig', 'params': {'tar_base': None, 'p_unsafe_threshold': 0.1, 'filter_word_list': 'data/filters.yaml', 'max_pwatermark': 0.45, 'batch_size': 8, 'num_workers': 6, 'multinode': True, 'min_size': 512, 'train': {'shards': ['pipe:aws s3 cp s3://stability-aws/laion-a-native/part-0/{00000..18699}.tar -', 'pipe:aws s3 cp s3://stability-aws/laion-a-native/part-1/{00000..18699}.tar -', 'pipe:aws s3 cp s3://stability-aws/laion-a-native/part-2/{00000..18699}.tar -', 'pipe:aws s3 cp s3://stability-aws/laion-a-native/part-3/{00000..18699}.tar -', 'pipe:aws s3 cp s3://stability-aws/laion-a-native/part-4/{00000..18699}.tar -'], 'shuffle': 10000, 'image_key': 'jpg', 'image_transforms': [{'target': 'torchvision.transforms.Resize', 'params': {'size': 512, 'interpolation': 3}}, {'target': 'torchvision.transforms.RandomCrop', 'params': {'size': 512}}], 'postprocess': {'target': 'ldm.data.laion.AddMask', 'params': {'mode': '512train-large', 'p_drop': 0.25}}}, 'validation': {'shards': ['pipe:aws s3 cp s3://deep-floyd-s3/datasets/laion_cleaned-part5/{93001..94333}.tar - '], 'shuffle': 0, 'image_key': 'jpg', 'image_transforms': [{'target': 'torchvision.transforms.Resize', 'params': {'size': 512, 'interpolation': 3}}, {'target': 'torchvision.transforms.CenterCrop', 'params': {'size': 512}}], 'postprocess': {'target': 'ldm.data.laion.AddMask', 'params': {'mode': '512train-large', 'p_drop': 0.25}}}}}, 'lightning': {'find_unused_parameters': True, 'modelcheckpoint': {'params': {'every_n_train_steps': 5000}}, 'callbacks': {'metrics_over_trainsteps_checkpoint': {'params': {'every_n_train_steps': 10000}}, 'image_logger': {'target': 'main.ImageLogger', 'params': {'enable_autocast': False, 'disabled': False, 'batch_frequency': 1000, 'max_images': 4, 'increase_log_steps': False, 'log_first_step': False, 'log_images_kwargs': {'use_ema_scope': False, 'inpaint': False, 'plot_progressive_rows': False, 'plot_diffusion_rows': False, 'N': 4, 'unconditional_guidance_scale': 5.0, 'unconditional_guidance_label': [''], 'ddim_steps': 50, 'ddim_eta': 0.0}}}}, 'trainer': {'benchmark': True, 'val_check_interval': 5000000, 'num_sanity_val_steps': 0, 'accumulate_grad_batches': 1}}}

def convert_original_stable_diffusion_to_diffusers(
self,