Instructions for V2W-BERT
Dataset: https://drive.google.com/drive/folders/10E6nOXhRERhAmRVWla5i99jeUGWmI-Xl?usp=sharing
Download and extract the NVD dataset and place it in the Dataset/NVD/Processed
directory, or execute PrepareDataset.ipynb
or PrepareDataset.py
to download and prepare the dataset.
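A minimal sketch of the manual placement, assuming the downloaded archive is named NVD.zip (the actual file name in the Drive folder may differ):
mkdir -p Dataset/NVD/Processed
unzip NVD.zip -d Dataset/NVD/Processed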
Check whether the Anaconda module is available on your system. If Anaconda is available, create a virtual environment to manage Python packages; if not, install the packages in the Python base environment.
- Load module:
module load anaconda/version_xxx
- Create virtual environment (Python version 3.7 is used here):
conda create -n v2wbert python=3.7
- Activate virtual environment:
conda activate v2wbert
or
source activate v2wbert
Other commands for managing conda environments can be found here: https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#creating-an-environment-with-commands
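A few other commonly used environment-management commands (all standard conda commands, listed here for convenience):
conda env list
conda deactivate
conda remove -n v2wbert --all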
The installation instructions below assume Python version 3.7.
Most Python packages are installed using the pip
or conda
command. For consistency, it is better to stick to one of them. If Anaconda is not available, install the packages in the Python base environment using the pip
command.
PyTorch installation instructions can be found here: https://pytorch.org/. If PyTorch is already installed, this step is not necessary.
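A typical pip-based install is shown below; the exact command depends on your platform and CUDA version, so prefer the selector on the PyTorch site:
pip install torch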
Only a few TensorFlow functionalities are used in this project, and any version of TensorFlow will do. If TensorFlow is not available on your system, I will try to replace those calls with alternatives.
https://www.tensorflow.org/overview/
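Since any version will do, a plain pip install is sufficient:
pip install tensorflow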
Command: pip install numpy
More details can be found at https://numpy.org/install/
Command: pip install pandas
More details can be found at https://pandas.pydata.org/pandas-docs/stable/getting_started/install.html
We will be using the HuggingFace Transformers library (https://github.com/huggingface/transformers) for transformers.
pip install transformers
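As a quick sanity check that transformers is installed and can fetch the distilbert-base-uncased checkpoint used by the scripts below (this downloads the tokenizer on first run):
python -c "from transformers import AutoTokenizer; print(AutoTokenizer.from_pretrained('distilbert-base-uncased')('test'))"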
pip install wget
pip install ipywidgets
pip install ipynb
The following packages are not needed now but will be later:
pip install beautifulsoup4
pip install lxml
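For convenience, all of the pip packages listed above (PyTorch and TensorFlow aside) can also be installed with a single command:
pip install numpy pandas transformers wget ipywidgets ipynb beautifulsoup4 lxml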
- Run the PrepareDataset.ipynb
notebook to download and prepare the dataset, or use the script:
python PrepareDataset.py --dir='Dataset' --from_year=2020 --to_year=2021 --from_train_year=1990 --to_train_year=2020 --from_test_year=2021 --to_test_year=2021 --from_val_year=2022 --to_val_year=2022
- Pretraining: run the V2WBERT-Pretraining.ipynb notebook, or use the script:
python V2W-BERT-Pretraining.py --pretrained='distilbert-base-uncased' --num_gpus=2 --parallel_mode='dp' --epochs=30 --batch_size=16 --refresh_rate=200
- Running the 'dummy' dataset to test the overall process:
python V2W-BERT-Pretraining.py --pretrained='distilbert-base-uncased' --num_gpus=2 --parallel_mode='dp' --epochs=30 --batch_size=16 --refresh_rate=200 --rand_dataset='dummy'
- The 'temporal' dataset splits the data by year:
python V2W-BERT-Pretraining.py --pretrained='distilbert-base-uncased' --num_gpus=2 --parallel_mode='dp' --epochs=30 --batch_size=16 --refresh_rate=200 --rand_dataset='temporal'
- The 'random' dataset splits the data from each category:
python V2W-BERT-Pretraining.py --pretrained='distilbert-base-uncased' --num_gpus=2 --parallel_mode='dp' --epochs=30 --batch_size=16 --refresh_rate=200 --rand_dataset='random'
- To run in distributed data-parallel (DDP) mode:
python V2W-BERT-Pretraining.py --pretrained='distilbert-base-uncased' --num_gpus=2 --parallel_mode='ddp' --epochs=30 --batch_size=16 --refresh_rate=200 --rand_dataset='random'
- Link Prediction: run the V2WBERT-LinkPrediction.ipynb notebook, or use the script:
python V2W-BERT-LinkPrediction.py --pretrained='distilbert-base-uncased' --use_pretrained=True --use_rd=False --checkpointing=False --rand_dataset='temporal' --performance_mode=False --neg_link=128 --epoch=25 --nodes=1 --num_gpus=2 --batch_size=64
- For the 'random' dataset:
python V2W-BERT-LinkPrediction.py --pretrained='distilbert-base-uncased' --use_pretrained=True --use_rd=False --checkpointing=False --rand_dataset='random' --performance_mode=False --neg_link=128 --epoch=25 --nodes=1 --num_gpus=2 --batch_size=64
- Running the 'dummy' dataset to test the overall process:
python V2W-BERT-LinkPrediction.py --pretrained='distilbert-base-uncased' --use_pretrained=True --use_rd=False --checkpointing=False --rand_dataset='dummy' --performance_mode=False --neg_link=128 --epoch=25 --nodes=1 --num_gpus=2 --batch_size=64
Please cite our paper if you use this code in your own work:
@inproceedings{das2021v2w,
title={V2W-BERT: A Framework for Effective Hierarchical Multiclass Classification of Software Vulnerabilities},
author={Das, Siddhartha Shankar and Serra, Edoardo and Halappanavar, Mahantesh and Pothen, Alex and Al-Shaer, Ehab},
booktitle={2021 IEEE 8th International Conference on Data Science and Advanced Analytics (DSAA)},
pages={1--12},
year={2021},
organization={IEEE}
}
Feel free to email us for additional resources. If you notice anything unexpected, please open an issue and let us know. If you have any questions or are missing a specific feature, feel free to discuss them with us.
This material was prepared as an account of work sponsored by an agency of the United States Government. Neither the United States Government nor the United States Department of Energy, nor Battelle, nor any of their employees, nor any jurisdiction or organization that has cooperated in the development of these materials, makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, software, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government or any agency thereof, or Battelle Memorial Institute. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof.
PACIFIC NORTHWEST NATIONAL LABORATORY
operated by
BATTELLE
for the
UNITED STATES DEPARTMENT OF ENERGY
under Contract DE-AC05-76RL01830