- Ubuntu 16.04
- Python 3.7
- CUDA 11.1 (lower versions may work but were not tested)
- NVIDIA GPU (>= 11G graphic memory) + CuDNN v7.3
This repository has been tested on an RTX 3090. Configurations (e.g., batch size, image patch size) may need to be adjusted on other platforms.
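Before installing, it can save time to confirm the machine meets the requirements above. The sketch below is a minimal, hypothetical checker (not part of the repo): the Python version check mirrors the list above, and the GPU checks are skipped gracefully when PyTorch is not yet installed.

```python
import sys

def check_environment(min_python=(3, 7), min_gpu_gb=11):
    """Return a list of human-readable problems with the current environment."""
    problems = []
    if sys.version_info[:2] < min_python:
        problems.append(f"Python {min_python[0]}.{min_python[1]}+ required")
    try:
        import torch  # the training code in this repo is PyTorch-based
        if not torch.cuda.is_available():
            problems.append("no CUDA device visible to PyTorch")
        elif torch.cuda.get_device_properties(0).total_memory < min_gpu_gb * 1024**3:
            problems.append(f"GPU has less than {min_gpu_gb} GB of memory")
    except ImportError:
        problems.append("PyTorch is not installed (run install.sh first)")
    return problems

if __name__ == "__main__":
    for p in check_environment():
        print("WARNING:", p)
```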
- Clone this repo:
```shell
cd CDARTS_segmentation
```
- Install dependencies:
```shell
bash install.sh
```
- Download leftImg8bit_trainvaltest.zip and gtFine_trainvaltest.zip from the Cityscapes website.
- Prepare the annotations with the createTrainIdLabelImgs.py script.
- Put the image list file in the directory where you saved the dataset.
```shell
cd HRTNet/train
```
- Set the dataset path:
```shell
ln -s $YOUR_DATA_PATH ../DATASET
```
- Set the output path:
```shell
mkdir ../OUTPUT
```
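A common source of dataset errors is a symlink that points at the wrong level of the Cityscapes tree. The helper below is a hypothetical sketch (not part of the repo) that checks for the standard `leftImg8bit/` and `gtFine/` layout under the linked directory:

```python
from pathlib import Path

def verify_cityscapes(root):
    """Return the expected Cityscapes subdirectories that are missing under root.

    Assumes the standard layout produced by unpacking the two zips:
    leftImg8bit/{train,val} and gtFine/{train,val}.
    """
    root = Path(root)
    expected = [root / sub / split
                for sub in ("leftImg8bit", "gtFine")
                for split in ("train", "val")]
    return [str(p) for p in expected if not p.is_dir()]

if __name__ == "__main__":
    missing = verify_cityscapes("../DATASET")
    for p in missing:
        print("missing:", p)
```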
- Train from scratch:
```shell
export DETECTRON2_DATASETS="$YOUR_DATA_PATH"
NGPUS=8
python -m torch.distributed.launch --nproc_per_node=$NGPUS train.py --world_size $NGPUS --seed 12367 --config ../configs/cityscapes/cydas.yaml
```
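For context on the launch command above: `torch.distributed.launch` spawns one process per GPU and passes each process a `--local_rank` argument, so the training script must accept it alongside the flags shown. The sketch below shows only the argument-parsing side; the flag names beyond `--local_rank` are taken from the command above, and the defaults are assumptions.

```python
import argparse

def parse_launch_args(argv=None):
    """Parse the arguments a torch.distributed.launch worker receives.

    --local_rank is injected by the launcher (one value per spawned process);
    each worker would then typically call torch.cuda.set_device(args.local_rank)
    and torch.distributed.init_process_group("nccl") before training.
    """
    parser = argparse.ArgumentParser()
    parser.add_argument("--local_rank", type=int, default=0)  # set by the launcher
    parser.add_argument("--world_size", type=int, default=1)
    parser.add_argument("--seed", type=int, default=12367)
    parser.add_argument("--config", type=str, default=None)
    return parser.parse_args(argv)
```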
We provide trained models and logs, which can be downloaded from Google Drive.
```shell
cd train
```
- Download the pretrained weights from Google Drive.
- Set `config.model_path = $YOUR_MODEL_PATH` in `cydas.yaml`.
- Set `config.json_file = $CDARTS_MODEL` in `cydas.yaml`.
- Start the evaluation process:
```shell
CUDA_VISIBLE_DEVICES=0 python test.py
```
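Evaluation on Cityscapes is conventionally reported as mean IoU over classes. The snippet below is an illustrative, self-contained sketch of that metric (not the repo's `test.py` implementation), computed from a per-class confusion matrix:

```python
def miou(confusion):
    """Mean intersection-over-union from a confusion matrix.

    confusion[i][j] = number of pixels with true class i predicted as class j.
    Classes absent from both prediction and ground truth are skipped.
    """
    n = len(confusion)
    ious = []
    for c in range(n):
        tp = confusion[c][c]
        fp = sum(confusion[r][c] for r in range(n)) - tp
        fn = sum(confusion[c]) - tp
        denom = tp + fp + fn
        if denom:
            ious.append(tp / denom)
    return sum(ious) / len(ious) if ious else 0.0
```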