Website • Docs • Blog • Twitter • Chat (Community & Support) • Tutorial • Mailing List
Data Version Control or DVC is an open-source tool for data science and machine learning projects. Key features:
- a simple, command-line, Git-like experience. It does not require installing and maintaining any databases, and it does not depend on any proprietary online services;
- it manages and versions datasets and machine learning models. Data can be saved in S3, Google Cloud Storage, Azure, Alibaba Cloud, an SSH server, HDFS, or even a local HDD RAID;
- it makes projects reproducible and shareable; it helps answer the question of how a model was built;
- it helps manage experiments with Git tags or branches and metrics tracking.
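For example, here is a minimal sketch of how experiments could be tracked with Git tags and a metrics file; the file names and the `-M`/`-T` flags are illustrative and should be checked against the DVC docs for your version:

```
# Hypothetical sketch: one experiment per Git tag, compared via a metrics file.
# Assumes train.py writes its evaluation results to metrics.json.
dvc run -d train.py -o model.p -M metrics.json python train.py
git add . && git commit -m "baseline experiment"
git tag -a baseline -m "baseline experiment"

# ...tweak train.py, re-run `dvc repro`, commit and tag again...

# Compare the tracked metric across all tags to pick the best experiment:
dvc metrics show -T
```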
DVC aims to replace spreadsheet and document sharing tools (such as Excel or Google Docs) that are commonly used as a knowledge repository and a ledger for the team, the ad-hoc scripts used to track, move, and deploy different model versions, and the ad-hoc data file suffixes and prefixes.
We encourage you to read our Get Started guide to better understand what DVC is and how it fits your scenarios.
The easiest (but not perfect!) analogy to describe it: DVC is Git (or Git-LFS, to be precise) + makefiles made right and tailored specifically for ML and Data Science scenarios.

1. Git/Git-LFS part - DVC helps you store and share data artifacts and models. It connects them with your Git repository.
2. Makefiles part - DVC describes how one data or model artifact was built from other data.
DVC usually runs along with Git. Git is used as usual to store and version code and DVC meta-files. DVC helps store data and model files seamlessly out of Git, while preserving almost the same user experience as if they were stored in Git itself. To store and share the data cache, DVC supports remotes - any cloud (S3, Azure, Google Cloud, etc.) or any on-premise network storage (via SSH, for example).
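For instance, a minimal sketch of setting up a remote and syncing the data cache (the bucket name is a placeholder):

```
# Hypothetical example: use an S3 bucket as the default DVC remote.
dvc remote add -d myremote s3://mybucket/dvc-cache

# Upload DVC-tracked data and models to the remote...
dvc push

# ...and download them elsewhere (another machine, a teammate, CI).
dvc pull
```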
The DVC pipelines (aka computational graph) feature connects code and data together. It lets you explicitly specify, run, and save the information that a certain command with certain dependencies needs to be run to produce a model. See the quick start section below or check the Get Started tutorial to learn more.
Please read the Get Started guide for the full version. Common workflow commands include:
| Step | Command |
| --- | --- |
| Track data | `$ git add train.py` <br/> `$ dvc add images.zip` |
| Connect code and data by commands | `$ dvc run -d images.zip -o images/ unzip -q images.zip` <br/> `$ dvc run -d images/ -d train.py -o model.p python train.py` |
| Make changes and reproduce | `$ vi train.py` <br/> `$ dvc repro model.p.dvc` |
| Share code | `$ git add .` <br/> `$ git commit -m 'The baseline model'` <br/> `$ git push` |
| Share data and ML models | `$ dvc remote add myremote -d s3://mybucket/image_cnn` <br/> `$ dvc push` |
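To illustrate the consuming side of "share", here is a hedged sketch of what a collaborator might run after the pushes above; the repository URL is a placeholder:

```
# Hypothetical collaborator workflow after `git push` and `dvc push`:
git clone https://github.com/<user>/<repo>.git
cd <repo>

# Fetch the DVC-tracked data and models from the configured remote.
dvc pull

# Reproduce the model after making changes, as in the table above.
dvc repro model.p.dvc
```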
Read these instructions to get more details. There are four options to install DVC: `pip`, Homebrew, Conda (Anaconda), or an OS-specific package:
```
pip install dvc
```
Depending on the remote storage type you plan to use to keep and share your data, you might need to specify one of the optional dependencies: `s3`, `gs`, `azure`, `oss`, `ssh`, or `all` to include them all. The command should look like this: `pip install dvc[s3]` - it installs the `boto3` library along with DVC to support AWS S3 storage.
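For reference, the install commands for the optional dependencies listed above would look like this (only the `boto3` detail for `s3` is stated above; the rest follow the same pattern):

```
pip install "dvc[s3]"     # AWS S3 support (installs boto3)
pip install "dvc[gs]"     # Google Cloud Storage support
pip install "dvc[azure]"  # Azure Blob Storage support
pip install "dvc[oss]"    # Alibaba Cloud OSS support
pip install "dvc[ssh]"    # SSH remote support
pip install "dvc[all]"    # all optional remote dependencies
```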
To install the development version, run:
```
pip install git+git://github.com/iterative/dvc
```
To install via Homebrew:

```
brew install dvc
```
To install via Conda (Anaconda):

```
conda install -c conda-forge dvc
```
Currently, it supports only Python versions 2.7, 3.6, and 3.7.
Self-contained packages for Windows, Linux, and Mac are available. The latest version of the packages can be found on the GitHub releases page.
Ubuntu / Debian (deb):

```
sudo wget https://dvc.org/deb/dvc.list -O /etc/apt/sources.list.d/dvc.list
sudo apt-get update
sudo apt-get install dvc
```
Fedora / CentOS (rpm):

```
sudo wget https://dvc.org/rpm/dvc.repo -O /etc/yum.repos.d/dvc.repo
sudo yum update
sudo yum install dvc
```
Arch Linux (AUR): this is an unofficial package; for any inquiries regarding the AUR package, refer to its maintainer.

```
yay -S dvc
```
- Git-annex - DVC uses the idea of storing the content of large files (which you don't want to see in your Git repository) in a local key-value store, and uses file hardlinks/symlinks instead of copying the actual files.
- Git-LFS - DVC is compatible with any remote storage (S3, Google Cloud, Azure, SSH, etc.). DVC uses reflinks or hardlinks to avoid copy operations on checkouts, which makes it much more efficient for large data files (see the sketch after this list).
- Makefile (and its analogues) - DVC tracks dependencies (as a DAG).
- Workflow Management Systems - DVC is a workflow management system designed specifically to manage machine learning experiments. DVC is built on top of Git.
- DAGsHub - a GitHub equivalent for DVC: pushing your Git+DVC based repo to DAGsHub will give you a high-level dashboard of your project, including DVC pipeline and metrics visualizations, as well as links to DVC-managed files if they are in cloud storage.
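As a hedged illustration of the link behaviour mentioned in the Git-LFS item above: DVC's cache link strategy is configurable. The `cache.type` option name and values below are my assumption and should be verified against the DVC docs for your version:

```
# Illustrative: prefer reflinks, then hardlinks/symlinks, and fall back to
# copying, so checkouts of large files avoid duplicating data where the
# filesystem allows it.
dvc config cache.type "reflink,hardlink,symlink,copy"
```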
Contributions are welcome! Please see our Contributing Guide for more details.
Want to stay up to date? Want to help improve DVC by participating in our occasional polls? Subscribe to our mailing list. No spam, really low traffic.
This project is distributed under the Apache license version 2.0 (see the LICENSE file in the project root).
By submitting a pull request for this project, you agree to license your contribution under the Apache license version 2.0 to this project.