`code_review` is the replication package of the work "Towards Automating Code Review Activities". The purpose of this repository is to provide everything necessary to replicate our results.
- Python >= 3.5 and <= 3.8
- OpenNMT-tf
Use the following commands to install the dependencies:

```
pip install --upgrade pip
pip install OpenNMT-tf
```
The `datasets` folder contains the two datasets used for experimenting with the two Transformer models. Both datasets are split into training (80%), validation (10%), and test (10%) sets. Each Transformer model (i.e., the `1-encoder` and `2-encoders` folders) has three sub-folders, `train`, `eval`, and `test`, containing the actual data.
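For orientation, the layout implied by the description above is sketched below; the exact file names inside each split are listed in the next two points, and the sketch is an assumption rather than a guaranteed listing of the repository.

```
datasets/
├── 1-encoder/
│   ├── train/   (src, tgt)
│   ├── eval/    (src, tgt)
│   └── test/    (src, tgt)
└── 2-encoders/
    ├── train/   (src1, src2, tgt)
    ├── eval/    (src1, src2, tgt)
    └── test/    (src1, src2, tgt)
```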
- The dataset for the `1-encoder` model is composed of Reviewed Code Pairs (RCPs). An RCP is a <ms, mr> pair composed of the abstracted code of the method extracted from the Java file submitted by a contributor for review (ms) and the abstracted code of its revised version (mr). We provide the data as textual files organized per row; therefore, a pair <ms, mr> refers to the same line number in the following two files (see the loading sketch below):
  - the `src` file contains the ms instances;
  - the `tgt` file contains the mr instances.
- The dataset for the `2-encoders` model is instead composed of Reviewed Commented Code Triplets (RCCTs). Each triplet has the form <ms, mr, rnl>, where mr is the abstracted code of the method implementing the natural language recommendation (rnl) provided by a reviewer for the submitted method (ms). Therefore, to build a triplet <ms, mr, rnl>, three files are provided:
  - the `src1` file contains the rnl instances;
  - the `src2` file contains the ms instances;
  - the `tgt` file contains the mr instances.
NOTE. The natural language recommendations have been cleaned and abstracted as described in our manuscript.
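To make the row-wise alignment concrete, here is a minimal Python sketch that pairs the files line by line; the `datasets/1-encoder/train` path and the use of `zip` are illustrative assumptions, not part of the provided scripts.

```python
# Minimal sketch (assumed paths): line i of `src` is the ms instance
# whose revised version mr is line i of `tgt`.
from pathlib import Path

split = Path("datasets/1-encoder/train")  # assumed location of one split

with open(split / "src") as f_src, open(split / "tgt") as f_tgt:
    pairs = [(s.rstrip("\n"), t.rstrip("\n")) for s, t in zip(f_src, f_tgt)]

ms, mr = pairs[0]  # first Reviewed Code Pair
print(ms)
print(mr)

# The 2-encoders dataset works the same way, with three aligned files:
# src1 (rnl), src2 (ms), and tgt (mr).
```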
The `code` folder contains all the scripts used to train and test the models. For both models, the best configuration we found through hyperparameter tuning is provided in the respective folder. To start training a model, it is sufficient to run the `trainin.py` file: it will first create the needed vocabularies and then start the training. The trained model will be saved in the `run` folder.
Once the model is trained, it is possible to test it on the test set by running the `infer.py` file. This script creates the `predictions.txt` file containing all the model's predictions.
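As a quick sanity check, the generated predictions can be compared line by line with the target side of the test set, e.g., to count perfect predictions; the paths below and the assumption of one prediction per input are illustrative, not part of `infer.py`.

```python
# Minimal sketch (assumed paths, one prediction per input):
# count predictions that exactly match the reference mr.
with open("predictions.txt") as f_pred, \
     open("datasets/1-encoder/test/tgt") as f_ref:
    predictions = [line.strip() for line in f_pred]
    references = [line.strip() for line in f_ref]

perfect = sum(p == r for p, r in zip(predictions, references))
print(f"Perfect predictions: {perfect}/{len(references)}")
```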
It is also possible to change the beam search size by modifying the `beam_width` and `num_hypotheses` parameters in the `training/data.yml` file.
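If you prefer to change them programmatically, the sketch below rewrites the file with PyYAML; it assumes PyYAML is installed and that the two parameters sit under a top-level `params` section, as is typical for OpenNMT-tf configuration files.

```python
# Minimal sketch: change the beam search size in training/data.yml.
# Assumes a top-level `params` section (typical OpenNMT-tf layout).
import yaml  # requires PyYAML (pip install pyyaml)

with open("training/data.yml") as f:
    config = yaml.safe_load(f)

params = config.setdefault("params", {})
params["beam_width"] = 10        # illustrative value
params["num_hypotheses"] = 10    # illustrative value

with open("training/data.yml", "w") as f:
    yaml.safe_dump(config, f)
```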
This repository also contains:
- `idioms.csv`: the list of idioms we used during the abstraction phase;
- `is_relevant.ipynb`: a Jupyter notebook showing the logic used to remove the non-relevant comments;
- `1_encoder_perfect_predictions.xlsx` and `2_encoders_perfect_predictions.xlsx`: the qualitative analysis of the perfect predictions;
- `bleu4_boxplot.zip`: a compressed file containing the boxplots of the BLEU-4 scores of the predictions. Read the README in the compressed file for additional information;
- `generated-predictions.zip`: a compressed file containing the predictions generated by both models. Read the README in the compressed file for additional information;
- `FiltersTable.png`: a table showing the numeric results of the filters we applied to the data. For each filter, we report the number of triplets it removed.