
Advanced Usages

Cross-Dataset Evaluation

You can conduct cross-dataset evaluation by modifying just a few arguments in your data_cfg.

Take baseline.yaml as an example:

data_cfg:
  dataset_name: CASIA-B
  dataset_root: your_path
  dataset_partition: ./datasets/CASIA-B/CASIA-B_include_005.json
  num_workers: 1
  remove_no_gallery: false # Remove probe if no gallery for it
  test_dataset_name: CASIA-B

Now, suppose we have a model trained on CASIA-B and want to test it on OUMVLP.

We only need to change dataset_root, dataset_partition, and test_dataset_name, like so:

data_cfg:
  dataset_name: CASIA-B
  dataset_root: your_OUMVLP_path
  dataset_partition: ./datasets/OUMVLP/OUMVLP.json
  num_workers: 1
  remove_no_gallery: false # Remove probe if no gallery for it
  test_dataset_name: OUMVLP
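
With the config updated, rerun the test phase so that the CASIA-B-trained checkpoint is evaluated on OUMVLP. The exact launch command depends on your setup; as a hedged example following the pattern in the OpenGait README (entry point and flags may differ across versions), a distributed test run looks like python -m torch.distributed.launch --nproc_per_node=2 opengait/main.py --cfgs ./configs/baseline/baseline.yaml --phase test. You may also need to point the evaluator at the trained checkpoint (e.g., via restore_hint in your evaluator config, if your version uses it).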

Data Augmentation

In OpenGait, there is a basic transform class called by almost all models: BaseSilCuttingTransform, which is used to crop the input silhouettes.
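
For reference, this class is constructed from config-supplied keyword arguments and applied directly to a silhouette sequence, e.g. BaseSilCuttingTransform(img_w=64)(seqs); check its definition in transform.py for the exact signature, since the accepted arguments may vary across versions.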

Accordingly, by referring to this implementation, you can easily customize the data augmentation in just two steps:

  • Step1: Define the transform function or class in transform.py, and make sure it is callable. The style of torchvision.transforms is recommended; a demo follows (a fleshed-out example is sketched after Step2);
import torchvision.transforms as T

# Note: BaseSilCuttingTransform is defined in this same file (transform.py),
# so it is already in scope here.

class demo1():
    def __init__(self, args):
        self.args = args

    def __call__(self, seqs):
        '''
        seqs: a silhouette sequence with dimensions [sequence, height, width]
        '''
        # Apply your transformation to seqs here.
        return seqs


class demo2():
    def __init__(self, args):
        self.args = args

    def __call__(self, seqs):
        # Apply your transformation to seqs here.
        return seqs


def TransformDemo(base_args, demo1_args, demo2_args):
    # Compose the basic cutting transform with the two custom ones;
    # the composed callable is applied to each input sequence.
    transform = T.Compose([
        BaseSilCuttingTransform(**base_args),
        demo1(args=demo1_args),
        demo2(args=demo2_args)
    ])
    return transform
  • Step2: Set the transform arguments in your config file:
transform:
  - type: TransformDemo
    base_args: {'img_w': 64}
    demo1_args: false
    demo2_args: false
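
To make Step1 concrete, here is a hedged sketch of what demo1 could look like as a real augmentation. It is purely illustrative, not part of OpenGait: a random horizontal flip applied to the whole sequence at once, assuming seqs arrives as a numpy array of shape [sequence, height, width]:

import numpy as np

class demo1():
    '''
    Hypothetical augmentation: flip the whole silhouette sequence
    horizontally with some probability. Flipping all frames together
    keeps the sequence temporally consistent.
    '''
    def __init__(self, args):
        # Interpret args as the flip probability; fall back to 0.5
        # when the config passes `false`, as in the Step2 example above.
        self.prob = args if isinstance(args, float) else 0.5

    def __call__(self, seqs):
        if np.random.rand() < self.prob:
            # seqs: [sequence, height, width]; flip along the width axis.
            seqs = seqs[:, :, ::-1].copy()
        return seqs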

Visualization

To understand how the model works, you sometimes need to visualize intermediate results.

For this purpose, we provide a built-in instance of torch.utils.tensorboard.SummaryWriter, namely self.msg_mgr.writer, so that you can log intermediate information wherever you need.

Demo: to visualize the output feature of the baseline's backbone, insert the following code at baseline.py#L28:

# Log only from rank 0, during training, and once every 100 iterations.
summary_writer = self.msg_mgr.writer
if torch.distributed.get_rank() == 0 and self.training and self.iteration % 100 == 0:
    summary_writer.add_video('outs', outs.mean(2).unsqueeze(2), self.iteration)

Note that this example requires the moviepy package, so you should run pip install moviepy first.
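
The same writer can log scalars and images as well. The following is a hedged sketch: the tags, the cadence, and the assumption that loss and outs (with shape [n, s, c, h, w]) are in scope mirror the demo above rather than any fixed OpenGait convention.

# Illustrative only: log a scalar and a single-channel feature map.
if torch.distributed.get_rank() == 0 and self.training and self.iteration % 100 == 0:
    summary_writer.add_scalar('scalar/demo_loss', loss.item(), self.iteration)
    # add_image expects [C, H, W]; take the first frame of the first
    # sequence and average over channels to get one grayscale map.
    summary_writer.add_image('image/demo_feature',
                             outs[0, 0].mean(0, keepdim=True), self.iteration)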