Pipeline processing systems can provide a set of useful features:
- On-demand computing: in pipeline systems such as LabVIEW or VTK, calculations are performed as needed by the outputs and only when inputs change.
- Transparent parallelization: a pipeline topology can be inspected to deduce which operations can be run in parallel (it is essentially equivalent to purely functional programming); see the sketch after this list.
- Tracking of data and computations: to fully reproduce a computational experiment, both the data and the operations applied to them must be tracked.
- Inspecting data flow: being able to inspect intermediate results helps with debugging and understanding.
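As an illustration of the parallelization point, joblib exposes this through `Parallel` and `delayed`. The snippet below is a minimal sketch; the `square` function and the worker count are arbitrary choices for the example:

```python
from joblib import Parallel, delayed

def square(x):
    """A pure function: its result depends only on its inputs."""
    return x ** 2

# Because each call is independent, the iterations can be dispatched
# to several workers without changing the function itself.
results = Parallel(n_jobs=2)(delayed(square)(i) for i in range(10))
print(results)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```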
But pipeline frameworks can get in the way
We want our code to look like the underlying algorithm, not like a software framework.
Functions are the simplest abstraction used by everyone. Our pipeline jobs (or tasks) are made of decorated functions.
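For instance, a plain function becomes a cached pipeline step simply by decorating it with joblib's `Memory.cache`; the sketch below assumes a local cache directory chosen for the example:

```python
from joblib import Memory

# Cache results on disk; the location is an arbitrary choice for this example.
memory = Memory("./joblib_cache", verbose=0)

@memory.cache
def costly_step(x):
    """The code still reads as plain Python: no framework-specific classes."""
    print("computing...")
    return x ** 2

costly_step(3)  # computed and stored on disk
costly_step(3)  # result reloaded from the cache; the body does not run again
```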
Tracking parameters in a meaningful way requires the specification of a data model. We give up on that and use hashing for performance and robustness.
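Concretely, `joblib.hash` computes a content-based key from arbitrary Python arguments, so no schema has to be declared up front. A small sketch (the parameter names are made up for the example):

```python
import joblib

params = {"n_iter": 100, "method": "svd"}
key = joblib.hash(params)

# Identical content gives the same key, so a previously cached result can be reused.
assert joblib.hash({"n_iter": 100, "method": "svd"}) == key

# Changing any value gives a different key, which triggers recomputation.
assert joblib.hash({"n_iter": 200, "method": "svd"}) != key
```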
- No dependencies other than Python
- Robust, well-tested code, at the cost of functionality
- Fast and suitable for scientific computing on big datasets without changing the original code
- Only local imports: embed joblib in your code by copying it