Ablation studies are experiments used to identify the causal effect of individual components on a method's performance. From a Bayesian point of view, the method is a meta-model: the model we evaluate includes the model parameters as well as meta-training arguments, such as the optimizer. In plain English, the causal effects we study encompass both the PyTorch model and the training configuration.
Ablators are materials that are depleted during operation (NASA). Likewise, an experimental ABLATOR should not interfere with the experimental result.
- Strictly typed configuration system that prevents errors
- Seamless transition from prototyping to production
- Stateful experiment design: stop, resume, and share your experiments
- Automated analysis artifacts
- Template training
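As an illustration of what a strictly typed configuration buys, the sketch below uses plain Python dataclasses (not ABLATOR's own config classes, whose API is not reproduced here): type checks make a malformed configuration fail at definition time rather than hours into a training run.

```python
from dataclasses import dataclass

# Illustrative stand-in for a typed training configuration;
# this is NOT ABLATOR's actual config API.
@dataclass
class TrainConfig:
    lr: float
    epochs: int
    optimizer: str = "adam"

    def __post_init__(self):
        # Enforce types eagerly so a typo in a config fails fast,
        # instead of crashing mid-training.
        if not isinstance(self.lr, float):
            raise TypeError(f"lr must be float, got {type(self.lr).__name__}")
        if not isinstance(self.epochs, int):
            raise TypeError(f"epochs must be int, got {type(self.epochs).__name__}")

cfg = TrainConfig(lr=0.001, epochs=10)
```

Untyped dictionaries would accept `lr="0.001"` silently; a typed configuration rejects it immediately.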
Comparison with existing frameworks:
| Framework | HPO | Configuration | Training | Tuning | Analysis |
|---|---|---|---|---|---|
| Ray | ✅ | ❌ | ❌ | ✅ | ❌ |
| Lightning | ❌ | ❌ | ✅ | ❌ | ❌ |
| Optuna | ✅ | ❌ | ❌ | ❌ | ✅ |
| Hydra | ❌ | ✅ | ❌ | ❌ | ❌ |
| ABLATOR | ✅ | ✅ | ✅ | ✅ | ✅ |
Features compared: hyperparameter selection (HPO), removing boilerplate code for configuring experiments (Configuration), removing boilerplate training code (Training), removing boilerplate code for running experiments at scale (Tuning), and performing analysis on the hyperparameter selection (Analysis).
In summary, without ABLATOR you will need to integrate different tools for distributed execution, fault tolerance, training, checkpointing, and analysis. Poor compatibility between tools and versioning errors will lead to errors in your analysis.
You can use ABLATOR with any other library, e.g. PyTorch Lightning. Just wrap a Lightning model with ModelWrapper.
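The wrapper idea can be sketched in plain Python. This is an illustration of the pattern only, not ABLATOR's actual `ModelWrapper` API; the class and method names below are hypothetical.

```python
# Illustration of the wrapper pattern: the wrapper delegates the training
# loop's calls to whatever model it is given, so the experiment runner
# never depends on a specific framework's interface.
# NOT ABLATOR's actual ModelWrapper; names here are hypothetical.
class ExampleWrapper:
    def __init__(self, model):
        self.model = model

    def train_step(self, batch):
        # Delegate to the wrapped model's own step logic
        # (e.g. a Lightning module's training_step).
        return self.model.training_step(batch)


class TinyModel:
    """Stand-in for e.g. a PyTorch Lightning module."""

    def training_step(self, batch):
        return sum(batch) / len(batch)  # pretend "loss"


wrapper = ExampleWrapper(TinyModel())
loss = wrapper.train_step([1.0, 2.0, 3.0])
```

Because the experiment runner only talks to the wrapper, the underlying model can come from any framework.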
Spend more time in the creative process of ML research and less time on dev-ops.
The library is under active development: many API endpoints may be removed, renamed, or change in functionality without notice.
Use a Python virtual environment to avoid version conflicts.
```shell
git clone [email protected]:fostiropoulos/ablator.git
cd ablator
pip install .
```

For development:

```shell
pip install .[dev]
```