Commit

Support SUSTech1K
chuanfushen committed Jul 15, 2023
1 parent 3795705 commit 66971ea
Showing 8 changed files with 1,725 additions and 17 deletions.
5 changes: 4 additions & 1 deletion README.md
@@ -3,11 +3,14 @@
<div align="center"><img src="./assets/nm.gif" width = "100" height = "100" alt="nm" /><img src="./assets/bg.gif" width = "100" height = "100" alt="bg" /><img src="./assets/cl.gif" width = "100" height = "100" alt="cl" /></div>

------------------------------------------
📣📣📣 **[*SUSTech1K*](https://lidargait.github.io) released, please check the [tutorial](datasets/SUSTech1K/README.md).** 📣📣📣

🎉🎉🎉 **[*OpenGait*](https://openaccess.thecvf.com/content/CVPR2023/papers/Fan_OpenGait_Revisiting_Gait_Recognition_Towards_Better_Practicality_CVPR_2023_paper.pdf) has been accepted by CVPR2023 as a highlight paper!** 🎉🎉🎉

OpenGait is a flexible and extensible gait recognition project provided by the [Shiqi Yu Group](https://faculty.sustech.edu.cn/yusq/) and supported in part by [WATRIX.AI](http://www.watrix.ai).

## What's New
- **[July 2023]** [SUSTech1K](datasets/SUSTech1K/README.md) is released and supported by OpenGait.
- **[May 2023]** A real gait recognition system, [All-in-One-Gait](https://github.com/jdyjjj/All-in-One-Gait), provided by [Dongyang Jin](https://github.com/jdyjjj), is available.
- [Apr 2023] [CASIA-E](datasets/CASIA-E/README.md) is supported by OpenGait.
- [Feb 2023] The [HID 2023 competition](https://hid2023.iapr-tc4.org/) is open; you are welcome to participate. Additionally, the tutorial for the competition has been updated in [datasets/HID/](./datasets/HID).
@@ -50,7 +53,7 @@ Results and models are available in the [model zoo](docs/1.model_zoo.md)
## Authors:
**Open Gait Team (OGT)**
- [Chao Fan (樊超)](https://chaofan996.github.io), [email protected]
- [Chuanfu Shen (沈川福)](https://faculty.sustech.edu.cn/?p=95396&tagid=yusq&cat=2&iscss=1&snapid=1&orderby=date), [email protected]
- [Chuanfu Shen (沈川福)](https://chuanfushen.github.io), [email protected]
- [Junhao Liang (梁峻豪)](https://faculty.sustech.edu.cn/?p=95401&tagid=yusq&cat=2&iscss=1&snapid=1&orderby=date), [email protected]

## Acknowledgement
101 changes: 101 additions & 0 deletions configs/lidargait/lidargait_sustech1k.yaml
@@ -0,0 +1,101 @@
data_cfg:
  dataset_name: SUSTech1K
  dataset_root: your_path_of_SUSTech1K-Released-pkl
  dataset_partition: ./datasets/SUSTech1K/SUSTech1K.json
  num_workers: 4
  data_in_use: [false, true, false, false, false, false, false, false, false, false, false, false, false, false, false, false]
  remove_no_gallery: false # remove a probe if no gallery exists for it
  test_dataset_name: SUSTech1K

evaluator_cfg:
  enable_float16: true
  restore_ckpt_strict: true
  restore_hint: 40000
  save_name: LidarGait
  eval_func: evaluate_indoor_dataset # evaluate_Gait3D
  sampler:
    batch_shuffle: false
    batch_size: 4
    sample_type: all_ordered # all: the whole sequence is used for testing; ordered: frames are input in their natural order. Other option: fixed_unordered
    frames_all_limit: 720 # limit the number of sampled frames to avoid running out of memory
  metric: euc # cos
  transform:
    - type: BaseSilTransform

loss_cfg:
  - loss_term_weight: 1.0
    margin: 0.2
    type: TripletLoss
    log_prefix: triplet
  - loss_term_weight: 1.0
    scale: 16
    type: CrossEntropyLoss
    log_prefix: softmax
    log_accuracy: true

model_cfg:
  model: Baseline
  backbone_cfg:
    type: ResNet9
    in_channel: 3
    block: BasicBlock
    channels: # layer configuration for automatic model construction
      - 64
      - 128
      - 256
      - 512
    layers:
      - 1
      - 1
      - 1
      - 1
    strides:
      - 1
      - 2
      - 2
      - 1
    maxpool: false
  SeparateFCs:
    in_channels: 512
    out_channels: 256
    parts_num: 16
  SeparateBNNecks:
    class_num: 250
    in_channels: 256
    parts_num: 16
  bin_num:
    - 16

optimizer_cfg:
  lr: 0.1
  momentum: 0.9
  solver: SGD
  weight_decay: 0.0005

scheduler_cfg:
  gamma: 0.1
  milestones: # the learning rate is reduced at each milestone
    - 20000
    - 30000
  scheduler: MultiStepLR

trainer_cfg:
  enable_float16: true # half-precision floats for memory reduction and speedup
  fix_BN: false
  with_test: true
  log_iter: 100
  restore_ckpt_strict: true
  restore_hint: 0
  save_iter: 5000
  save_name: LidarGait
  sync_BN: true
  total_iter: 40000
  sampler:
    batch_shuffle: true
    batch_size:
      - 8 # TripletSampler: batch_size[0] is the number of identities per batch
      - 8 # batch_size[1] is the number of sequences sampled per identity
    frames_num_fixed: 10 # fixed number of frames for training
    sample_type: fixed_unordered # fixed: controls the number of input frames; unordered: controls the order of the input tensor. Other options: unfixed_ordered or all_ordered
    type: TripletSampler
  transform:
    - type: BaseSilTransform
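
As a quick sanity check before launching training, the config above can be loaded and inspected. The snippet below is a minimal sketch, not part of the OpenGait codebase; it assumes PyYAML is installed and that the file lives at `configs/lidargait/lidargait_sustech1k.yaml` in your checkout.

```python
# Minimal sketch: load the LidarGait config and inspect a few fields.
# Assumes PyYAML (pip install pyyaml) and the path below match your setup.
import yaml

cfg_path = "configs/lidargait/lidargait_sustech1k.yaml"
with open(cfg_path, "r") as f:
    cfg = yaml.safe_load(f)

print(list(cfg.keys()))                      # top-level sections, e.g. data_cfg, trainer_cfg, ...
print(cfg["data_cfg"]["dataset_root"])       # should point at your SUSTech1K-Released-pkl folder
print(sum(cfg["data_cfg"]["data_in_use"]))   # number of enabled data streams (one in this config)
```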
33 changes: 33 additions & 0 deletions datasets/SUSTech1K/README.md
@@ -0,0 +1,33 @@
# Tutorial for [SUSTech1K](https://lidargait.github.io)

## Download the SUSTech1K dataset
Download the dataset from the [link](https://lidargait.github.io), and decompress the downloaded files with the following command:
```shell
unzip -P password SUSTech1K-pkl.zip | xargs -n1 tar xzvf
```
The password can be obtained by signing the [agreement](https://lidargait.github.io/static/resources/SUSTech1KAgreement.pdf) and sending it to the email ([email protected]).

## Train on the dataset
Modify the `dataset_root` in `configs/lidargait/lidargait_sustech1k.yaml`, and then run this command:
```shell
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 opengait/main.py --cfgs configs/lidargait/lidargait_sustech1k.yaml --phase train
```


## Process the raw dataset

### Preprocess the dataset (Optional)
Download the raw dataset from the [official link](https://lidargait.github.io). You will get `DATASET_DOWNLOAD.md5`, `SUSTeck1K-RAW.zip`, and `SUSTeck1K-pkl.zip`.
We recommend using the provided pickle files for convenience; alternatively, process the raw dataset into pickle files with this command:
```shell
python datasets/SUSTech1K/pretreatment_SUSTech1K.py -i SUSTech1K-Released-2023 -o SUSTech1K-pkl -n 8
```
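
After pretreatment, it can help to open one of the generated files and confirm its structure before training. The path below is hypothetical, and the sketch assumes each `.pkl` stores a NumPy array of stacked frames, as OpenGait-style pretreatment scripts typically produce; adjust both to your actual output.

```python
# Minimal sketch: peek at one preprocessed sequence to sanity-check the pickle output.
# The path is a hypothetical example -- point it at a real file under SUSTech1K-pkl.
import pickle

pkl_path = "SUSTech1K-pkl/0001/00-nm/000/00-nm.pkl"  # hypothetical example path
with open(pkl_path, "rb") as f:
    frames = pickle.load(f)

# If the assumption holds, `frames` is shaped roughly (num_frames, H, W) or (num_frames, H, W, C).
print(type(frames), getattr(frames, "shape", None))
```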

### Projecting point clouds into depth images (Optional)
You can use the provided depth images, or generate them from the point clouds with this command:
```shell
python datasets/SUSTech1K/point2depth.py -i SUSTech1K-Released-2023/ -o SUSTech1K-Released-2023/ -n 8
```
We recommend using our provided depth images for convenience.
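
For readers curious about what such a projection does, the following is an illustrative spherical (range-image) projection of an `(N, 3)` LiDAR point cloud into a 2D depth map. It is a minimal sketch with an assumed image size and vertical field of view, not the actual logic of `datasets/SUSTech1K/point2depth.py`; consult that script for the real pipeline.

```python
# Illustrative sketch of a spherical (range-image) projection of a LiDAR point cloud.
# Image size and vertical field of view are assumptions, not SUSTech1K parameters.
import numpy as np

def project_to_depth(points, h=64, w=512, fov_up=15.0, fov_down=-16.0):
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    depth = np.linalg.norm(points, axis=1)                     # range of each point
    yaw = np.arctan2(y, x)                                     # azimuth in [-pi, pi]
    pitch = np.arcsin(np.clip(z / np.maximum(depth, 1e-8), -1.0, 1.0))

    fov_up_r, fov_down_r = np.radians(fov_up), np.radians(fov_down)
    u = (0.5 * (1.0 - yaw / np.pi) * w).astype(np.int32)       # column from azimuth
    v = ((fov_up_r - pitch) / (fov_up_r - fov_down_r) * h).astype(np.int32)  # row from elevation

    u, v = np.clip(u, 0, w - 1), np.clip(v, 0, h - 1)
    image = np.zeros((h, w), dtype=np.float32)
    image[v, u] = depth                                        # later points overwrite earlier ones
    return image

depth_img = project_to_depth(np.random.rand(1024, 3) * 10)     # toy point cloud for a quick smoke test
print(depth_img.shape, depth_img.max())
```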
