torch_multipletests

Simple PyTorch implementation of statsmodels.stats.multitest.multipletests to control the false discovery rate and correct p-values on GPU for accelerated training and evaluation.

The functionality is currently limited compared to the original implementation: only the Bonferroni (one-step), Benjamini/Hochberg (non-negative), and Benjamini/Yekutieli (negative) methods for correcting multiple comparisons are supported. Contributions are welcome.
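To illustrate what these corrections compute, here is a minimal pure-Python sketch of the Bonferroni and Benjamini/Hochberg procedures (the standard textbook definitions, not this package's PyTorch implementation):

```python
def bonferroni(pvals):
    """One-step Bonferroni: multiply each p-value by the number of tests, cap at 1."""
    n = len(pvals)
    return [min(p * n, 1.0) for p in pvals]

def benjamini_hochberg(pvals):
    """Step-up Benjamini/Hochberg adjustment: p[i] * n / rank, made monotone."""
    n = len(pvals)
    order = sorted(range(n), key=lambda i: pvals[i])  # indices sorted by p-value
    corrected = [0.0] * n
    running_min = 1.0
    # Walk from the largest p-value down, enforcing monotonicity of the adjusted values
    for rank in range(n - 1, -1, -1):
        i = order[rank]
        running_min = min(running_min, pvals[i] * n / (rank + 1))
        corrected[i] = running_min
    return corrected
```

For example, `bonferroni([0.01, 0.04, 0.03, 0.005])` yields `[0.04, 0.16, 0.12, 0.02]`, while the Benjamini/Hochberg adjustment of the same p-values is less conservative. Benjamini/Yekutieli additionally multiplies by the harmonic sum 1 + 1/2 + ... + 1/n to stay valid under negative dependence.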

Installation

Clone the repository and install the package with pip:

git clone https://github.com/florianmahner/torch_multipletests.git
cd torch_multipletests
pip install -e .

Example Usage

import torch
from torch_multipletests.multitest import multipletests

alpha = 0.05
method = 'bonferroni'  # Bonferroni (one-step) correction

# Create synthetic p-values from the CDF of a Gaussian; replace these with your own.
loc, scale = torch.randn(100), torch.randn(100).exp()
cut_off = torch.as_tensor(0.0)
pvals = torch.distributions.Normal(loc, scale).cdf(cut_off)

fdr_reject, pvals_corrected, alpha_bonferroni_correction = multipletests(pvals, alpha, method=method)
