Optional img2img input for diffusion reconstruction #4
Updated my Jupyter notebook with jimgoo's latest contributions; also fixed a problem where wandb was executing for each GPU instead of just the master GPU (but only in my notebook, not in jimgoo's _combo.py script).
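The fix follows the usual pattern of only creating a wandb run on the master process. A minimal sketch of that pattern, assuming torch.distributed is used for multi-GPU training (the notebook's actual code may differ, and the project name here is a placeholder):

```python
import torch.distributed as dist
import wandb

def is_master() -> bool:
    # In a DDP run only rank 0 should log; single-process runs have no
    # process group and always count as the master.
    return not (dist.is_available() and dist.is_initialized()) or dist.get_rank() == 0

if is_master():
    wandb.init(project="brain-to-image")  # hypothetical project name
else:
    wandb.init(mode="disabled")           # other GPUs get a no-op run
```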
utils.reconstruct_from_clip() is an alternative to utils.sample_images() that can take an img2img reference input, as used in the Brain_to_Image scripts.
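As a usage illustration only — the argument names below are assumptions, not the function's actual signature — the idea is to pass an optional reference image that the diffusion sampler starts from instead of pure noise:

```python
import torch
from PIL import Image
import utils  # repo module providing reconstruct_from_clip() and sample_images()

clip_embeds = torch.randn(1, 257, 768)    # placeholder CLIP image embedding
reference = Image.open("reference.png")   # img2img reference input

# Hypothetical call; check the function itself for the real parameter names.
recons = utils.reconstruct_from_clip(
    clip_embeds,
    img2img_input=reference,  # assumed name for the reference image argument
    strength=0.75,            # assumed: how far to deviate from the reference
)
```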
The grid of images now has a white background and legible plot titles.
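For reference, a small sketch of how such a grid can be drawn with matplotlib (white figure background plus explicit title colors); this is illustrative, not the notebook's exact plotting code:

```python
import matplotlib.pyplot as plt

def plot_image_grid(images, titles, ncols=4):
    # White background and dark, readable titles for each panel.
    nrows = -(-len(images) // ncols)  # ceiling division
    fig, axes = plt.subplots(nrows, ncols, figsize=(3 * ncols, 3 * nrows),
                             facecolor="white", squeeze=False)
    for ax in axes.flat:
        ax.axis("off")
    for ax, img, title in zip(axes.flat, images, titles):
        ax.imshow(img)
        ax.set_title(title, fontsize=10, color="black")
    fig.tight_layout()
    return fig
```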
I kept train_combined.py unchanged so as to preserve others' workflows, although it would still need to be adapted to handle the new img2img feature.
Added an RNG seed to the data loaders.
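A minimal sketch of the standard PyTorch way to seed a data loader, assuming torch.utils.data.DataLoader is what the repo uses (the actual change may differ):

```python
import torch
from torch.utils.data import DataLoader

def make_loader(dataset, batch_size, seed=42):
    # A dedicated generator makes the shuffle order reproducible across runs;
    # per-worker seeds are derived from it automatically.
    g = torch.Generator()
    g.manual_seed(seed)
    return DataLoader(dataset, batch_size=batch_size, shuffle=True, generator=g)
```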
Allow the user to specify num_train in get_dataloaders() via a num_samples argument (useful for testing when you don't want to use the full dataset).
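Illustratively, this kind of option can be implemented by wrapping the training set in a Subset when num_samples is given; a hedged sketch, since the real get_dataloaders() signature is not shown here:

```python
from torch.utils.data import DataLoader, Subset

def get_dataloaders(train_set, val_set, batch_size=32, num_samples=None):
    # Hypothetical sketch: limit num_train by taking the first num_samples
    # items, handy for quick tests without loading the full dataset.
    if num_samples is not None:
        train_set = Subset(train_set, range(min(num_samples, len(train_set))))
    train_loader = DataLoader(train_set, batch_size=batch_size, shuffle=True)
    val_loader = DataLoader(val_set, batch_size=batch_size, shuffle=False)
    return train_loader, val_loader
```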
Separated jimgoo's main-conda.slurm from my main-singularity.slurm.