Covrig is a flexible infrastructure that can be used to run each version of a system in isolation to collect static and dynamic software metrics (code coverage, lines of code), originally developed by Paul Marinescu and Petr Hosek at Imperial College London.
- Upgraded to python3 (incl. deps)
- Added more examples for containers
- Added support for differential coverage calculation
- Rewrote and extended postprocessing graph generation
- Wrote basic tests for `analytics.py` and GitHub CI
To build the project, you will need:
- Python 3.8 or higher
- Docker (see https://docs.docker.com/engine/install/ubuntu/)
- Python packages: docker, fabric 2.7.1, and matplotlib 3.7.0
- LCOV 2.0 or higher, needed for differential coverage (https://github.com/linux-test-project/lcov)
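The Python dependencies can be installed with pip, for example (a sketch; adjust to your environment):

```bash
# Install the Python packages listed above at the stated versions
pip3 install docker fabric==2.7.1 matplotlib==3.7.0
```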
NOTE: This project was developed on Linux (Ubuntu 20). It may work on other platforms, but this is not guaranteed, since we use shell commands when processing the data; the commands run inside the spawned Docker containers will of course be fine.
Covrig works by spawning a series of Docker containers to run revisions of the software under test, connecting to each one automatically over SSH. For this to work, generate an SSH keypair with `ssh-keygen`. Keep the private key in your `~/.ssh` directory, and for each repo you would like to generate data for, replace the `id_rsa.pub` file in the corresponding `containers/<repo>` directory with the public key you generated.
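A minimal sketch, assuming the default key name and a `redis` container as the example repo:

```bash
# Generate a keypair; the private key stays in ~/.ssh
ssh-keygen -t rsa -f ~/.ssh/id_rsa

# Replace the container's bundled public key with yours (repeat per repo)
cp ~/.ssh/id_rsa.pub containers/redis/id_rsa.pub
```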
To build a repo's container from a Dockerfile, run this from the root of the project:

```bash
docker build -t <image_name>:<tag> -f containers/<repo>/Dockerfile containers/<repo>
```
For further analytics (e.g. some graphs), you may need local copies of the repos you are testing to be present in a `repos/` directory in the root of the project.
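For example, assuming you want to analyse Redis (the upstream URL is illustrative):

```bash
# Local checkouts of the repos under test live in repos/
mkdir -p repos
git clone https://github.com/redis/redis.git repos/redis
```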
The basic invocation is `python3 analytics.py <benchmark> <revisions>`.
Base benchmarks consist of `lighttpd`, `redis`, `memcached`, `zeromq`, `binutils` and `git`, circa 2013. Newly added benchmarks include `apr`, `curl` and `vim`.
The format for these containers is relatively simple: the `Dockerfile` contains the instructions for building the container.
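As a purely illustrative sketch (the real Dockerfiles differ per repo; the base image, packages, and paths here are assumptions), such a container mainly needs the build toolchain plus an SSH server that trusts your public key:

```dockerfile
# Hypothetical sketch only - see containers/<repo>/Dockerfile for the real thing
FROM ubuntu:20.04

# Build toolchain and coverage tooling for the target repo
RUN apt-get update && apt-get install -y build-essential git lcov openssh-server \
    && mkdir -p /var/run/sshd /root/.ssh

# Authorize the user's public key so the harness can SSH in
COPY id_rsa.pub /root/.ssh/authorized_keys

EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
```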
The full options are:

```
usage: python3 analytics.py [-h] [--offline] [--resume] [--limit LIMIT] [--output OUTPUT]
                            [--image IMAGE] [--endatcommit COMMIT]
                            program revisions

positional arguments:
  program               program to analyse
  revisions             number of revisions to process

optional arguments:
  -h, --help            show this help message and exit
  --offline             process the revisions reusing previous coverage information
  --resume              resume processing from the last revision found in the data file
                        (e.g. data/<program>/<program>.csv)
  --limit LIMIT         limit to n revisions (use the positional argument revisions if not sure)
  --output OUTPUT       output file name
  --image IMAGE         use a particular Docker image to analyse
  --endatcommit COMMIT  end processing at the given commit; useful for debugging
                        (e.g. python3 analytics.py --endatcommit a1b2c3d redis 1 can help
                        debug issues with a certain commit)
```
If you know the commit to start at, the corresponding commit to end at can be determined using the script `utils/commit_range.sh`.
Examples:

```bash
python3 analytics.py redis 100
python3 analytics.py --offline redis 100
python3 analytics.py --image redis:latest --endatcommit 299b8f7 redis 1
```
Scenario: Nothing works! I need an image!
Solution: The images are not currently autogenerated when running the scripts, so before running you may need to build the image from the Dockerfiles in `containers/`.
For example, to generate the image for Redis, run `docker build -t redis:latest -f containers/redis/Dockerfile containers/redis`.
You can then specify the image (useful when a repo requires multiple images, e.g. `lighttpd2`) as follows: `python3 analytics.py --image redis:latest` or `python3 analytics.py --image lighttpd2:16`.
Scenario: `python3 analytics.py redis` was interrupted (bug in the code, power failure, etc.)
Solution: `python3 analytics.py --resume redis`. For accurate latent patch coverage info, also run `python3 analytics.py --offline redis` (note: this will not work with the `--endatcommit` option).
Scenario: `python3 analytics.py zeromq 300` executed correctly, but you realised that you need to analyse 500 revisions.
Solution: `python3 analytics.py --limit 200 zeromq 500` analyses the previous 200 revisions and appends them to the CSV output. `postprocessing/regen.sh data/Zeromq/Zeromq.csv repos/zeromq/` will regenerate the output file, putting all the lines in order (you need `repos/zeromq` to be a valid zeromq git repository). For accurate latent patch coverage info, also run `python3 analytics.py --offline zeromq 500`.
Scenario: I want to analyse a particular revision or set of revisions.
Solution (1): `python3 analytics.py --endatcommit a1b2c3d redis 1` will analyse the revision `a1b2c3d` for `redis`.
Solution (2): `python3 analytics.py --endatcommit a1b2c3d redis 2` will analyse the revision before, and then the revision `a1b2c3d`, for `redis`.
Scenario: The data is collected too slowly! How can I speed it up?
Solution: Use the script `utils/run_analytics_parallel.sh <repo> <num_commits> <num_processes> <image> [end_commit]`.
Example: `utils/run_analytics_parallel.sh redis 100 4 redis:latest` will run 4 processes in parallel, each processing 25 commits.
Scenario: Experiments were executed. How do I get meaningful data?
(Old) Solution: Run `postprocessing/makeartefacts.sh`. Graphs are placed in `graphs/`, LaTeX defines are placed in `latex/`.
(New) Solution: Run `python3 postprocessing/gen_graphs.py <data/dir>`. Graphs are placed in `graphs/`.
Example: `python3 postprocessing/gen_graphs.py data/Redis/` for a single repo, or `python3 postprocessing/gen_graphs.py --dir data` to generate graphs for all benchmarks.
The ideal file structure is `data/Redis/Redis.csv`, `data/Binutils/Binutils.csv`, etc.
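That is, one subdirectory per benchmark, each holding the benchmark's CSV:

```
data/
├── Binutils/
│   └── Binutils.csv
└── Redis/
    └── Redis.csv
```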
To get pure differential coverage information, run `utils/diffcov.sh`.
Example: `utils/diffcov.sh apr remotedata/apr/coverage/ 886b908 8fb7fa4`
A quicker option, if your file structure is correct, is `utils/diffcov_runner.sh`, which will also convert the data into CSVs and place them in the relevant directory alongside the original data (e.g. in the `data/<repo>` directory). These can then be graphed - see below.
As above, we can generate all the graphs using the `gen_graphs.py` script: `python3 postprocessing/gen_graphs.py data/Redis/` for a single repo, or `python3 postprocessing/gen_graphs.py --dir data` to generate graphs for all benchmarks (note this requires the files to follow the `data/Redis/Redis.csv`, `data/Binutils/Binutils.csv` structure described above). Graphs are placed in `graphs/`.
If differential coverage data has been generated as above, run with the optional `--diffcov` argument to generate graphs for the differential data.
Example: `python3 postprocessing/gen_graphs.py --diffcov --dir remotedata`
Similar to graphs, we can generate the relevant tables using the `get_stats.py` script, e.g. `python3 postprocessing/get_stats.py <data/dir>`.
Example: `python3 postprocessing/get_stats.py data/Redis/` for a single repo, or `python3 postprocessing/get_stats.py --dir data` to generate tables for all benchmarks.
You can find the tests used by the GitHub CI in `tests/`. These can be run locally with `./tests/runtests.sh`.
A pre-built Docker image is available on Zenodo to be environment agnostic; once downloaded, it can be extracted with `docker load -i covrig_artifact.tar.gz`.
To run the image (Docker required), use the following command to open an interactive terminal:

```bash
docker run -v /var/run/docker.sock:/var/run/docker.sock -it covrig:latest
```

Then `cd root` and you will find the project there. All commands can then be run from here (tested on Redis with `python3 analytics.py --image redis:latest --endatcommit 299b8f7 redis 1`).
To reproduce the results of the paper, you will need the dataset, which can be found on Zenodo at https://zenodo.org/records/8054755 and https://zenodo.org/records/8059463.
Create a folder `remotedata/` in the root directory of the project and then extract the archives as such:

```bash
tar -xvf <dataset_location>/remotedata-18-06-23-apr.tar.bz2 -C <covrig_location>/covrig/remotedata
```

For speed, you can equally just extract the CSVs and omit the coverage archives; to do so, specify something like `... apr apr/Apr_repeats.csv apr/diffcov_apr.csv` as arguments to the tar command.
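Spelled out in full (a sketch based on the apr example above; member names vary per archive):

```bash
# Extract only the CSVs from the apr archive, skipping the bulky coverage data
tar -xvf <dataset_location>/remotedata-18-06-23-apr.tar.bz2 \
    -C <covrig_location>/covrig/remotedata \
    apr apr/Apr_repeats.csv apr/diffcov_apr.csv
```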
The CSV files contain all commits in a range, but we only analyse those that make changes to executable or test code or both.
Since these archives are the results of the analysis, the only thing left to do is to generate the graphs and tables from them.
To generate the graphs, run `python3 postprocessing/gen_graphs.py --dir remotedata`.
To generate the tables, run `python3 postprocessing/get_stats.py --dir remotedata`.
The non-determinism data can also be downloaded from Zenodo and extracted in the same way as above. This is only needed for a small subset of the figures and tables (ones that display statistics concerning the number of flaky lines per project).