A Python library for StyleGAN-based generation of synthetic baggage X-ray images for prohibited item detection.
The package provides a StyleGAN2-ADA model that simulates baggage X-ray/CT scans for airport security screening. The training and testing scripts in this open-source package train the model on the PIDRay benchmark, which can be downloaded from this link. The dataset consists of ~47,000 baggage scans packed with 14 different categories of prohibited items for airport security inspection.
The model has been used to build a framework for generating annotated baggage X-ray scans for security inspection. The framework is described in our paper: Self-Supervised One-Shot Learning for Automatic Segmentation of StyleGAN Images. The code for this framework can be found in this repo.
For access to the complete code containing pre-trained models for other baggage X-ray datasets and with additional functionalities, please contact the authors.
- Python 3 (>= 3.7.8)
- PyTorch (>= 1.7.1)
- torchvision (>= 0.8.2)
- CUDA (>= 9.2)
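As a convenience, the version requirements above can be checked programmatically before installation. The sketch below is a minimal, stdlib-only helper (not part of the package) that compares dotted version strings numerically:

```python
# Minimal sketch (not part of BagGAN): check that installed versions meet
# the minimum requirements listed above.
import sys

def meets_minimum(version: str, minimum: str) -> bool:
    """Compare dotted version strings numerically, e.g. '1.10.2' >= '1.7.1'."""
    to_tuple = lambda v: tuple(int(p) for p in v.split('.') if p.isdigit())
    return to_tuple(version) >= to_tuple(minimum)

# Verify the running interpreter satisfies the Python requirement.
assert meets_minimum('.'.join(map(str, sys.version_info[:3])), '3.7.8'), \
    'Python >= 3.7.8 required'
```

The same helper can be applied to `torch.__version__` and `torchvision.__version__` after those packages are installed.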
The code package can be cloned from the git repository using:
> git clone https://github.com/avm-debatr/bagganhq.git
Before running the scripts, the conda environment for the code can be created using the environment.yaml file provided with the scripts.
> cd bagganhq
> conda env create --name bagenv --file=environment.yaml
> conda activate bagenv
To use the pre-saved models for BagGAN, download the contents from this link and copy them into the checkpoints/ directory.
To train the BagGAN model from scratch, run:
> python train.py --config=configs/config_pidray_ds_train.py --out_dir=results/pidray_baggan --ds=<path-to-PIDRay-dataset>
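Because the configuration is itself a Python file, train.py presumably imports it dynamically from the path given by --config. The exact mechanism is not shown here, but a typical stdlib pattern looks like this (function name is an assumption for illustration):

```python
# Hypothetical sketch of loading a Python config file by path, as train.py
# might do with the --config argument; the actual loader in BagGAN may differ.
import importlib.util

def load_config(path):
    """Import a config .py file (e.g. configs/config_pidray_ds_train.py) as a module."""
    spec = importlib.util.spec_from_file_location('config', path)
    cfg = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(cfg)
    return cfg
```

Any variable defined at the top level of the config file then becomes an attribute of the returned module object.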
In order to run the script, the user must download the PIDRay dataset and specify its path using the --ds argument. The model, dataset, and training parameters are specified in a config.py file whose path is passed via the --config argument; examples are provided in the configs/ directory for reference.
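For orientation, a config file of this kind typically defines plain module-level variables. The fragment below is a hypothetical illustration only; the actual parameter names used in the configs/ directory may differ:

```python
# Hypothetical config fragment; parameter names are assumptions, not the
# actual keys used by BagGAN's configs.
dataset_path = None       # overridden by the --ds command-line argument
image_size   = 256        # output resolution of the generator
batch_size   = 16         # training batch size
total_kimg   = 25000      # training length in thousands of real images (StyleGAN2-ADA convention)
ada_target   = 0.6        # target value for ADA's adaptive augmentation heuristic
```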
To test a trained BagGAN model, run:
> python test.py --config=configs/config_pidray_ds_test.py --out_dir=results/pidray_baggan --ds=<path-to-PIDRay-dataset>
In order to run the script, the user must download the PIDRay dataset and specify its path using the --ds argument. The model, dataset, and test parameters are specified in a config.py file whose path is passed via the --config argument; examples are provided in the configs/ directory for reference.
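Testing a StyleGAN-family generator amounts to sampling latent codes and mapping them to images. The sketch below illustrates only the latent-sampling side with the stdlib, including a simplified version of StyleGAN's truncation trick (properly applied in w space toward the mean w; here shown as shrinking z toward zero for illustration). It is not BagGAN's actual test code:

```python
# Illustrative sketch only: sample a truncated latent vector as a StyleGAN-style
# test script might. Shrinking z toward 0 is a simplification of the truncation
# trick, which in practice interpolates w toward the average w.
import random

def sample_truncated_latent(dim=512, psi=0.7, seed=None):
    """Draw z ~ N(0, I) and scale it by truncation factor psi,
    trading sample diversity for fidelity."""
    rng = random.Random(seed)
    return [psi * rng.gauss(0.0, 1.0) for _ in range(dim)]
```

In the real pipeline, each sampled latent would be fed to the trained generator to produce a synthetic baggage scan.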
Public Domain, Copyright © 2023, Robot Vision Lab, Purdue University.
@article{manerikar2023self,
  title={Self-Supervised One-Shot Learning for Automatic Segmentation of StyleGAN Images},
  author={Manerikar, Ankit and Kak, Avinash C},
  journal={arXiv preprint arXiv:2303.05639},
  year={2023}
}
The authors can be contacted at: [email protected] (Ankit Manerikar)