
gtorch_utils

Some useful pytorch snippets

Installation

This version has been tested with PyTorch 1.10.0, but it should also work with any PyTorch < 2.0.

  1. Install PyTorch 1.10.0 following the instructions at pytorch.org/get-started/previous-versions/#v1100.

  2. Install OpenSlide.

Package installation

  1. Install it by:

    Adding it to your requirements file:

    # use the latest version
    
    gtorch_utils @ https://github.com/giussepi/gtorch_utils/tarball/main
    
    # or use a specific release (format 1)
    
    gtorch_utils @ https://github.com/giussepi/gtorch_utils/archive/refs/tags/v0.1.0.tar.gz
    
    # or use a specific release (format 2)
    
    gtorch_utils @ git+https://github.com/giussepi/[email protected]
    

    Or installing it directly:

    pip install git+https://github.com/giussepi/gtorch_utils.git --use-feature=2020-resolver --no-cache-dir
    
    # or
    
    pip install https://github.com/giussepi/gtorch_utils/tarball/main --use-feature=2020-resolver --no-cache-dir
  2. If you want to modify some default configuration values (e.g. for CT-82 and LiTS17 processing), copy the content of settings.py.template into your project's settings.py and update it appropriately (especially PROJECT_PATH, CT82_SAVING_PATH, LITS17_SAVING_PATH and LITS17_CONFIG); see the sketch below.
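
    A minimal, hypothetical sketch of such a settings.py. The paths and the LITS17_CONFIG value below are placeholder assumptions; take the real keys and defaults from settings.py.template:

    # settings.py -- illustrative placeholders only
    import os
    
    PROJECT_PATH = os.path.dirname(os.path.abspath(__file__))  # root folder of your project
    CT82_SAVING_PATH = os.path.join(PROJECT_PATH, 'datasets', 'CT-82')     # where processed CT-82 data will be written
    LITS17_SAVING_PATH = os.path.join(PROJECT_PATH, 'datasets', 'LiTS17')  # where processed LiTS17 data will be written
    LITS17_CONFIG = {}  # copy the real structure from settings.py.template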

Development installation

  1. Clone this repository

  2. If you want to modify some default configuration values (e.g. for CT-82 and LiTS17 processing), make a copy of the configuration file, review it thoroughly and update it appropriately (especially PROJECT_PATH, CT82_SAVING_PATH, LITS17_SAVING_PATH and LITS17_CONFIG):

    cp settings.py.template settings.py
  3. Modify or add new modules/features with their respective tests

  4. Get the test datasets by running

    chmod +x get_test_datasets.sh
    ./get_test_datasets.sh
  5. Execute all the tests

    chmod +x run_tests.sh
    ./run_tests.sh
  6. If all the tests pass, commit your changes

A few of our tests employ two cases from the NIH-TCIA CT Pancreas benchmark (CT-82) 1 2 3

Tools available

gtorch_utils/constants

  • DB
  • EPSILON

gtorch_utils/datasets/generic

  • BaseDataset

gtorch_utils/datasets/labels

  • DatasetLabelsMixin
  • Detail

gtorch_utils/datasets/segmentation/

  • BasicDataset
  • DatasetTemplate
  • HDF5Dataset

gtorch_utils/datasets/segmentation/datasets/

  • BrainTumorDataset
  • CarvanaDataset
  • CT82Dataset (see the whole module for more functionality)
  • LiTS17Dataset, LiTS17CropDataset (see the whole module for more functionality)
  • OnlineCoNSePDataset, OfflineCoNSePDataset, SeedWorker (see the whole module for more functionality)

gtorch_utils/nns/layers/regularizers

  • GaussianNoise

gtorch_utils/nns/managers

  • ADSVModelMGR
  • ModelMGR

gtorch_utils/nns/managers/callbacks

  • Checkpoint
  • EarlyStopping
  • MaskPlotter
  • MemoryPrint
  • MetricEvaluator
  • PlotTensorBoard
  • TrainingPlotter

gtorch_utils/nns/managers/classification

  • BasicModelMGR

gtorch_utils/nns/managers/exceptions

  • ModelMGRImageChannelsError

gtorch_utils/nns/mixins/checkpoints

  • CheckPointMixin

gtorch_utils/nns/mixins/constants

  • LrShedulerTrack

gtorch_utils/nns/mixins/subdatasets

  • SubDatasetsMixin

gtorch_utils/nns/mixins/data_loggers

  • DataLoggerMixin
  • DADataLoggerMixin

gtorch_utils/nns/mixins/exceptions

  • IniCheckpintError

gtorch_utils/nns/mixins/images_types

  • CT3DNIfTIMixin

gtorch_utils/nns/mixins/managers

  • adsv
    • ADSVModelMGRMixin
  • standard
    • ModelMGRMixin

gtorch_utils/nns/mixins/sanity_checks

  • SanityChecksMixin
  • WeightsChangingSanityChecksMixin

gtorch_utils/nns/mixins/torchmetrics

  • TorchMetricsMixin
  • DATorchMetricsMixin
  • ModularTorchMetricsMixin

gtorch_utils/nns/models/classification

  • Perceptron
  • MLP

gtorch_utils/nns/models/backbones

  • ResNet, resnet18, resnet34, resnet50, resnet101, resnet152
  • Xception, xception

gtorch_utils/nns/models/mixins

  • InitMixin

gtorch_utils/nns/models/segmentation

  • Deeplabv3plus
  • UNet
  • UNet_3Plus
  • UNet_3Plus_DeepSup
  • UNet_3Plus_DeepSup_CGM

gtorch_utils/nns/models/segmentation/unet/unet_parts.py

  • DoubleConv
  • XConv
  • Down
  • MicroAE
  • TinyAE
  • TinyUpAE
  • MicroUpAE
  • AEDown
  • AEDown2
  • Up
  • OutConv
  • UpConcat
  • AEUpConcat
  • AEUpConcat2
  • UnetDsv
  • UnetGridGatingSignal

gtorch_utils/nns/utils

  • MetricItem
  • Normalizer
  • Reproducibility
  • sync_batchnorm
    • SynchronizedBatchNorm1d, SynchronizedBatchNorm2d, SynchronizedBatchNorm3d
    • DataParallelWithCallback, patch_replication_callback
    • get_batchnormxd_class

gtorch_utils/segmentation/confusion_matrix

  • ConfusionMatrixMGR

gtorch_utils/segmentation/loss_functions

  • bce_dice_loss_, bce_dice_loss (fastest), BceDiceLoss (support for logits)
  • dice_coef_loss
  • FocalLoss
  • FPPV_Loss
  • FPR_Loss
  • IOU_Loss (torch.nn.Module), IOU_loss
  • lovasz_hinge, lovasz_softmax
  • MCC_Loss, MCCLoss
  • MSSSIM_Loss
  • NPV_Loss
  • Recall_Loss
  • SpecificityLoss
  • TverskyLoss
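
A hedged usage sketch for one of these losses. It assumes the usual nn.Module call convention of (predictions, targets) and that BceDiceLoss accepts raw logits (as noted above); the exact constructor arguments and expected tensor shapes should be confirmed in each class/function docstring.

import torch

from gtorch_utils.segmentation.loss_functions import BceDiceLoss

# The shapes and the no-argument constructor are assumptions for illustration.
criterion = BceDiceLoss()
logits = torch.randn(4, 1, 64, 64, requires_grad=True)   # stand-in for raw model outputs
targets = torch.randint(0, 2, (4, 1, 64, 64)).float()    # stand-in for binary ground-truth masks
loss = criterion(logits, targets)
loss.backward()  # in a real training loop this is followed by optimizer.step()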

gtorch_utils/segmentation/metrics

  • DiceCoeff (individual samples), dice_coeff (batches), dice_coeff_metric (batches, fastest implementation)
  • fppv
  • fpr
  • IOU
  • MSSSIM
  • npv
  • recall
  • RNPV
  • Specificity

gtorch_utils/segmentation/torchmetrics

  • Accuracy
  • BalancedAccuracy
  • Recall
  • Specificity
  • DiceCoefficient, DiceCoefficientPerImage

gtorch_utils/segmentation/visualisation

  • plot_img_and_mask

gtorch_utils/utils/images

  • apply_padding

Usage

All the classes and functions are fully documented, so explore the modules, load the snippets and have fun! 😊🤓 For example:

from collections import OrderedDict

import torch.optim as optim
from gtorch_utils.constants import DB
from gtorch_utils.nns.managers.classification import BasicModelMGR
from gtorch_utils.nns.models.classification import Perceptron

# Minimum example; see all the BasicModelMGR options in its class definition
# (gtorch_utils.nns.managers.classification).

# GenericDataset is a subclass of gtorch_utils.datasets.generic.BaseDataset that you must
# implement to handle your dataset. You can pass arguments to your class through dataset_kwargs.
# DBhandler is likewise a placeholder for your own data handler.


BasicModelMGR(
    model=Perceptron(3000, 102),
    dataset=GenericDataset,
    dataset_kwargs=dict(dbhandler=DBhandler, normalizer=Normalizer.MAX_NORM, val_size=.1),
    epochs=200
)()

Plot train and validation loss to TensorBoard

Just pass the keyword argument tensorboard=True to BasicModelMGR (as shown in the sketch below) and execute:

./run_tensorboard.sh

Note: If you installed this app as a package, you may want to copy the run_tensorboard.sh script to your project root, or just run tensorboard --logdir=runs whenever you want to see your training results. Then open localhost:6006 in your browser.
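
A minimal sketch of enabling it, reusing the hypothetical GenericDataset and DBhandler placeholders (and the Normalizer constant) from the Usage example above:

from gtorch_utils.nns.managers.classification import BasicModelMGR
from gtorch_utils.nns.models.classification import Perceptron

BasicModelMGR(
    model=Perceptron(3000, 102),
    dataset=GenericDataset,
    dataset_kwargs=dict(dbhandler=DBhandler, normalizer=Normalizer.MAX_NORM, val_size=.1),
    epochs=200,
    tensorboard=True,  # logs the train/val losses so run_tensorboard.sh (or tensorboard --logdir=runs) can display them
)()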

TODO

  • Make it compatible with torch 2.0
  • Write tests for MemoryPrint
  • Implement double cross validation
  • Implement cross validation
  • Write more tests
  • Save & load checkpoint
  • Early stopping callback
  • Plot train & val loss in tensorboard

Footnotes

  1. Holger R. Roth, Amal Farag, Evrim B. Turkbey, Le Lu, Jiamin Liu, and Ronald M. Summers. (2016). Data From Pancreas-CT. The Cancer Imaging Archive. https://doi.org/10.7937/K9/TCIA.2016.tNB1kqBU

  2. Roth HR, Lu L, Farag A, Shin H-C, Liu J, Turkbey EB, Summers RM. DeepOrgan: Multi-level Deep Convolutional Networks for Automated Pancreas Segmentation. N. Navab et al. (Eds.): MICCAI 2015, Part I, LNCS 9349, pp. 556-564, 2015. (paper)

  3. Clark K, Vendt B, Smith K, Freymann J, Kirby J, Koppel P, Moore S, Phillips S, Maffitt D, Pringle M, Tarbox L, Prior F. The Cancer Imaging Archive (TCIA): Maintaining and Operating a Public Information Repository. Journal of Digital Imaging, Volume 26, Number 6, December 2013, pp. 1045-1057. https://doi.org/10.1007/s10278-013-9622-7
