## Installation

Clone the repository and set up a conda environment:

```bash
git clone https://github.com/LeDuySon/ALL_SCRIPTS.git
conda create -n {name} python=3.8
conda activate {name}
pip install -r requirement.txt
```
## Generate FairMOT dataset

Your raw data folder {ROOT} must pair each video with its groundtruth folder:

```
{ROOT}
├── video1.mp4
├── video1
│   └── gt
│       ├── gt.txt
│       └── labels.txt
├── video2.mp4
├── video2
│   └── gt
│       ├── gt.txt
│       └── labels.txt
├── video3.mp4
├── video3
│   └── gt
│       ├── gt.txt
│       └── labels.txt
```
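For orientation, a minimal sketch of reading such a gt.txt, assuming it follows the standard MOTChallenge column order; this repo does not document the exact columns, so verify against your own files:

```python
import csv

def read_gt(gt_path):
    """Yield one box per row of a MOT-style gt.txt.

    Assumed column layout (MOTChallenge convention, not confirmed by this repo):
    frame, track_id, bb_left, bb_top, bb_width, bb_height, conf, class, visibility
    """
    with open(gt_path, newline="") as f:
        for row in csv.reader(f):
            frame, track_id = int(row[0]), int(row[1])
            left, top, width, height = map(float, row[2:6])
            yield frame, track_id, left, top, width, height

for det in read_gt("video1/gt/gt.txt"):
    print(det)
```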
- Create a folder that has the same structure as the MOT dataset:

  ```bash
  bash create_folder_tree.sh {Name of ROOT_MOT}
  ```

  This creates:

  ```
  {ROOT_MOT}
  ├── images
  │   ├── test
  │   └── train
  └── labels_with_ids
  ```

- Go to the folder generate_fairmot_dataset/ and run the following to generate frames:

  ```bash
  python generate_fairmot_dataset.py --video_path {video_path} --save_path {save_path} --frame_interval {frame_interval}
  ```

  - video_path: Path to the video file (only .mp4 is supported)
  - save_path: Save folder path (e.g. {ROOT_MOT}/images/train or {ROOT_MOT}/images/test)
  - frame_interval: Number of frames between two saved frames

- If you want to run on multiple videos, run (a sketch of the underlying frame extraction follows this list):

  ```bash
  bash gen_frames {folder containing your .mp4 files} {save_path} {frame_interval}
  ```

- After that, run:

  ```bash
  python create_dataset.py --root_path {root_path} --gt_path {gt_path}
  ```

  - root_path: Path to your MOT dataset train or test folder (e.g. {ROOT_MOT}/images/train or {ROOT_MOT}/images/test)
  - gt_path: Path to the groundtruth folder (e.g. {ROOT}/train or {ROOT}/test)

  Note: your dataset (train|test) must have the same structure as our dataset above.

- Just wait and have a coffee.
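As referenced in the list above, here is a minimal sketch of frame extraction at a fixed interval, assuming OpenCV; the actual generate_fairmot_dataset.py may name its output files differently:

```python
import os
import cv2

def extract_frames(video_path, save_path, frame_interval):
    """Save every `frame_interval`-th frame of an .mp4 as a numbered .jpg."""
    os.makedirs(save_path, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    idx = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break  # end of video
        if idx % frame_interval == 0:
            cv2.imwrite(os.path.join(save_path, f"{saved:06d}.jpg"), frame)
            saved += 1
        idx += 1
    cap.release()

# {ROOT_MOT} is a placeholder, as elsewhere in this README.
extract_frames("video1.mp4", "{ROOT_MOT}/images/train/video1", frame_interval=5)
```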
## Generate ReID dataset

- Generate a ReID dataset from multiple videos with mode train or test:

  ```bash
  python generate_reid_dataset_multiple.py --data_path {path to your dataset, eg {ROOT} in our dataset} \
      --save_path {name of saving folders} --mode {mode train|test|normal}
  ```

- If you want to prepare the dataset in one go, run:

  ```bash
  bash create_data.sh {path to your train dataset} {path to your test dataset} {name of saving folders}
  ```

- Notes about the {mode} argument:
  - train: only save to the train/ folder
  - test: save to the gallery/ folder and then split part of it into the query/ folder
  - normal: save to the train/ folder and then split into the gallery/ and query/ folders

The resulting folder structure looks like:

```
.
├── gallery
├── query
└── train
    ├── 0
    ├── 1
    ├── 2
    ├── 3
    ├── 4
    ├── 5
    ├── 6
    ├── 7
    ├── 8
    └── 9
```
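The gallery-to-query split used by the test mode might look like the sketch below. This assumes the numbered subfolders are one folder per identity, and the sampling ratio is an illustrative choice, not something this repo documents:

```python
import os
import random
import shutil

def split_query(gallery_dir, query_dir, ratio=0.2, seed=0):
    """Move a random fraction of each identity's images from gallery/ to query/."""
    random.seed(seed)
    for identity in os.listdir(gallery_dir):
        src = os.path.join(gallery_dir, identity)
        images = os.listdir(src)
        if not images:
            continue  # skip identities with no crops
        dst = os.path.join(query_dir, identity)
        os.makedirs(dst, exist_ok=True)
        for name in random.sample(images, max(1, int(len(images) * ratio))):
            shutil.move(os.path.join(src, name), os.path.join(dst, name))

split_query("gallery", "query")
```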
## Generate COCO dataset (ByteTrack)

- Create a folder that has the same structure as the COCO dataset:

  ```bash
  bash create_folder_tree.sh {Name of ROOT_COCO}
  ```

  This creates:

  ```
  {ROOT_COCO}
  ├── annotations
  ├── test
  ├── train
  └── val
  ```

  Note: your dataset (train|test) must have the same structure as our dataset.

- Run the following to generate frames from a video:

  ```bash
  python generate_frame_from_videos.py --video_path {video_path} --save_path {save_path} --frame_interval {frame_interval}
  ```

  - video_path: Path to the video file (only .mp4 is supported)
  - save_path: Save folder path (e.g. {ROOT_COCO}/train or {ROOT_COCO}/test)
  - frame_interval: Number of frames between two saved frames

- If you want to run on multiple videos, run:

  ```bash
  bash gen_frames {folder containing your .mp4 files} {save_path} {frame_interval}
  ```

- After that, run:

  ```bash
  python create_dataset.py --root_path {root_path} --gt_path {gt_path}
  ```

  - root_path: Path to your dataset train or test folder (e.g. {ROOT_COCO}/train or {ROOT_COCO}/test)
  - gt_path: Path to the groundtruth folder (e.g. {ROOT}/train or {ROOT}/test)

- Finally, run this script to convert your dataset to COCO format (the json files end up in the annotations folder):

  ```bash
  python convert_vtx_to_coco.py --data_path {ROOT_COCO}
  ```
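The json files written to annotations/ follow the standard COCO detection layout. A minimal sketch of that structure (the field names are COCO's; the values and the "person" category are illustrative placeholders):

```python
import json

# Minimal COCO-style annotation file (standard COCO detection fields).
coco = {
    "images": [
        {"id": 1, "file_name": "000001.jpg", "width": 1920, "height": 1080},
    ],
    "annotations": [
        {
            "id": 1,
            "image_id": 1,
            "category_id": 1,
            "bbox": [100.0, 200.0, 50.0, 80.0],  # [x, y, width, height]
            "area": 50.0 * 80.0,
            "iscrowd": 0,
        },
    ],
    "categories": [{"id": 1, "name": "person"}],
}

# "{ROOT_COCO}" and "train.json" are placeholders for your actual paths.
with open("{ROOT_COCO}/annotations/train.json", "w") as f:
    json.dump(coco, f)
```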
- Create a new Exp class based on the MOT examples in ByteTrack/exps/example/mot:
  - Change the paths to your annotation json files in __init__()
  - Change the data_dir path in get_eval_loader() to your dataset path (default mot/)
  - Change the name of your dataset split in get_eval_loader() (default "train")
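A hedged sketch of what such an Exp subclass might look like, modeled on the examples in ByteTrack/exps/example/mot; the attribute names follow those examples, but the json file names here are placeholders, so verify against your ByteTrack version:

```python
import os

from yolox.exp import Exp as MyExp  # ByteTrack ships its own yolox package


class Exp(MyExp):
    def __init__(self):
        super().__init__()
        self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]
        self.num_classes = 1
        # Json files produced by convert_vtx_to_coco.py (names are placeholders):
        self.train_ann = "train.json"
        self.val_ann = "val.json"
        # In get_eval_loader(), also change data_dir (default mot/) to your
        # {ROOT_COCO} path and the dataset `name` argument (default "train").
```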