# Covrig

## What is Covrig?

Covrig is a flexible infrastructure for running each revision of a system in isolation to collect static and dynamic software metrics (e.g. code coverage, lines of code). It was originally developed by Paul Marinescu and Petr Hosek at Imperial College London.

## Changelog (April 2023)

- Upgraded to Python 3 (incl. deps)
- Added more examples for containers
- Added support for differential coverage calculation
- Rewrote and extended postprocessing graph generation
- Wrote basic tests for analytics.py and GitHub CI

## Building

To build the project, you will need:

- Docker
- Python 3

Once these dependencies are installed, you will need to generate an ssh keypair for connecting to the VMs. Keep the private key in your .ssh directory and replace the id_rsa file in each containers/<repo> directory with your public key.
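As a minimal sketch (the key name `covrig_rsa` is an arbitrary choice, and the destination filename follows the instruction above; adjust both to your checkout):

```sh
# Generate a fresh RSA keypair; the private key stays in ~/.ssh
ssh-keygen -t rsa -f ~/.ssh/covrig_rsa -N ""

# Replace the key shipped with a container with your public key, e.g. for redis
cp ~/.ssh/covrig_rsa.pub containers/redis/id_rsa
```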

To build a container, run this from the root of the repo:

```sh
docker build -t <image_name>:<tag> -f containers/<repo>/Dockerfile containers/<repo>
```
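For instance, to build the redis image (the same command reappears in the scenarios below):

```sh
docker build -t redis:latest -f containers/redis/Dockerfile containers/redis
```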

## Usage

```sh
python3 analytics.py <benchmark>
```

The base benchmarks, circa 2013, are lighttpd, redis, memcached, zeromq, binutils and git.

Newly added benchmarks include apr, curl and vim.

The layout of these containers is relatively simple: each containers/<repo> directory contains a Dockerfile with the instructions for building that benchmark's image.
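As a sketch, a benchmark directory is expected to look roughly like this (assuming the key-file naming from the Building section):

```
containers/redis/
├── Dockerfile   # instructions for building the redis image
└── id_rsa       # your public key (see Building)
```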

The full options are:

```
usage: python3 analytics.py [-h] [--offline] [--resume] [--limit LIMIT] [--output OUTPUT]
                            [--image IMAGE] [--endatcommit COMMIT]
                            program [revisions]

positional arguments:
  program               program to analyse
  revisions             number of revisions to process

optional arguments:
  -h, --help            show this help message and exit
  --offline             process the revisions reusing previous coverage information
  --resume              resume processing from the last revision found in the data file
                        (e.g. data/<program>/<program>.csv)
  --limit LIMIT         limit to n number of revisions
  --output OUTPUT       output file name
  --image IMAGE         use a particular Docker image to analyse
  --endatcommit COMMIT  end processing at the given commit; useful for debugging
                        (e.g. python3 analytics.py --endatcommit a1b2c3d redis 1 can help
                        debug issues with a certain commit)
```

If you know the commit to start at, the corresponding end commit can be found with `utils/commit_range.sh`.

Examples:

```sh
python3 analytics.py redis 100
python3 analytics.py --offline redis 100
python3 analytics.py --image redis:latest --endatcommit a1b2c3d redis 1
```

Scenario: Nothing works! I need an image!

Solution: The images are not currently generated automatically when you run the scripts, so before running you may need to build them from the Dockerfiles in containers/. For example, to build the image for redis, run `docker build -t redis:latest -f containers/redis/Dockerfile containers/redis`. You can then specify the image (useful when a repo requires multiple images, e.g. lighttpd2) as follows: `python3 analytics.py --image redis:latest` or `python3 analytics.py --image lighttpd2:16`.


Scenario: `python3 analytics.py redis` was interrupted (bug in the code, power failure, etc.)

Solution: `python3 analytics.py --resume redis`. For accurate latent patch coverage info, also run `python3 analytics.py --offline redis` (note: this will not work with the `--endatcommit` option).


Scenario: `python3 analytics.py zeromq 300` executed correctly, but you realised that you need to analyse 500 revisions

Solution: `python3 analytics.py --limit 200 zeromq 500` analyses the previous 200 revisions and appends them to the csv output. `postprocessing/regen.sh data/Zeromq/Zeromq.csv repos/zeromq/` will regenerate the output file, putting all the lines in order (you need repos/zeromq to be a valid zeromq git repository). For accurate latent patch coverage info, also run `python3 analytics.py --offline zeromq 500`.


Scenario: I want to analyse a particular revision or set of revisions.

Solution (1): `python3 analytics.py --endatcommit a1b2c3d redis 1` will analyse revision a1b2c3d for redis.

Solution (2): `python3 analytics.py --endatcommit a1b2c3d redis 2` will analyse the revision before a1b2c3d and then revision a1b2c3d itself for redis.


Scenario: The data is collected too slowly! How can I speed it up?

Solution: Use the script `utils/run_analytics_parallel.sh <repo> <num_commits> <num_processes> <image> [end_commit]`. Example: `utils/run_analytics_parallel.sh redis 100 4 redis:latest` will run 4 processes in parallel, each processing 25 commits.
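Per the signature above, an end commit can be supplied as the optional fifth argument; a1b2c3d below is just the placeholder hash used throughout this README:

```sh
# 4 parallel processes over 100 redis commits, ending at commit a1b2c3d
utils/run_analytics_parallel.sh redis 100 4 redis:latest a1b2c3d
```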


Scenario: Experiments were executed. How do I get meaningful data?

(Old) Solution: Run `postprocessing/makeartefacts.sh`. Graphs are placed in graphs/, LaTeX defines are placed in latex/.

(New) Solution: Run `python3 postprocessing/gen_graphs.py <data/dir>`. Graphs are placed in graphs/. Example: `python3 postprocessing/gen_graphs.py data/Redis/`, or `python3 postprocessing/gen_graphs.py --dir data` to generate graphs for all benchmarks. The ideal file structure is data/Redis/Redis.csv, data/Binutils/Binutils.csv, etc.
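That is, one subdirectory per benchmark, each holding a csv named after it:

```
data/
├── Redis/
│   └── Redis.csv
├── Binutils/
│   └── Binutils.csv
└── ...
```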


Scenario: How do I get non-determinism data?

Solution: Run the same benchmark multiple times:

```sh
for I in 1 2 3 4 5; do python3 analytics.py --output Redis$I redis; done
```

To get the results, run:

```sh
postprocessing/nondet.sh data/Redis1/Redis.csv data/Redis1 data/Redis2 data/Redis3 data/Redis4 data/Redis5
```

Scenario: I have a list of revisions. How do I get more interesting information about them?

Solution: Run

```sh
./postprocessing/fixcoverage-multiple.sh repos/memcached/ bugs/bugs-memcached.simple data/Memcached/ data/Memcached/Memcached.csv
```

The first argument is a local clone of the target git repository, the second is a file listing the bug-fixing revisions (one per line), the third is a folder containing the results of the analytics.py script, and the optional fourth is the analytics .csv output.
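For instance, the revisions file (bugs/bugs-memcached.simple above) is just one commit hash per line; the hashes here are hypothetical placeholders:

```
a1b2c3d
e4f5a6b
9c8d7e6
```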

The output looks like:

```
Looked at 46 fixes (1 unhandled): 179 lines covered, 68 lines not covered
4 fixes did not change/add code, 28 fixes were fully covered
only tests/only code/tests and code 0/18/23
```

This can be used to get details about new tests/code. For example, running it on a list of bug-fixing revisions can show how well fixes are tested and whether a regression test is added along with the fix. Running it on a list of bug-introducing revisions may show low coverage.


Scenario: I have a list of revisions. How do I get more interesting information about the code from the previous revision?

Solution: As before, but use the `postprocessing/faultcoverage-multiple.sh` script.

This can be used to analyse the coverage of buggy code. Running it on a list of bug-fixing revisions is intuitively similar to running the previous script on a list of revisions introducing the respective bugs.


## Tests

You can find the tests used by the GitHub CI in tests/. These can be run locally with `./tests/runtests.sh`.