
[IEEE TIP 2024] Normalizing Batch Normalization for Long-Tailed Recognition


Normalizing Batch Normalization for Long-Tailed Recognition


🎯 Snapshot

import torch
import torch.nn as nn
import torch.nn.functional as F

class BatchNorm2d(nn.BatchNorm2d):
    def scale(self, alpha):
        # Rescale the BN affine parameters to a fixed L2 norm alpha,
        # keeping their direction.
        self.w = self.weight / (torch.norm(self.weight) + 1e-6) * alpha
        if self.bias is not None:
            self.b = self.bias / (torch.norm(self.bias) + 1e-6) * alpha
        else:
            self.b = self.bias

    def forward(self, input):
        self._check_input_dim(input)

        # Use the normalized parameters if scale() has been called.
        w = self.w if hasattr(self, 'w') else self.weight
        b = self.b if hasattr(self, 'b') else self.bias

        # Recompute the momentum and training flags the same way
        # nn.BatchNorm2d.forward does.
        if self.momentum is None:
            exponential_average_factor = 0.0
        else:
            exponential_average_factor = self.momentum
        if self.training and self.track_running_stats:
            if self.num_batches_tracked is not None:
                self.num_batches_tracked.add_(1)
                if self.momentum is None:  # use cumulative moving average
                    exponential_average_factor = 1.0 / float(self.num_batches_tracked)
        if self.training:
            bn_training = True
        else:
            bn_training = (self.running_mean is None) and (self.running_var is None)

        return F.batch_norm(
            input,
            self.running_mean if not self.training or self.track_running_stats else None,
            self.running_var if not self.training or self.track_running_stats else None,
            w,
            b,
            bn_training,
            exponential_average_factor,
            self.eps,
        )
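The core operation in the snapshot, rescaling the BN affine parameters to a fixed L2 norm, can be tried in isolation. The sketch below is illustrative only: alpha and the example weight values are made up, not taken from the paper or its configs.

```python
import torch

# Illustrative values; alpha and the gamma vector are not from the paper.
weight = torch.tensor([3.0, 4.0])  # a BN affine weight (gamma) vector
alpha = 2.0

# The same rescaling that scale() applies: keep the direction of the
# weight vector, but fix its L2 norm to alpha (the 1e-6 avoids
# division by zero for an all-zero weight).
w = weight / (torch.norm(weight) + 1e-6) * alpha

print(w)              # ~tensor([1.2000, 1.6000])
print(torch.norm(w))  # ~2.0: the norm is now alpha
```

The direction of the per-channel scaling is preserved; only its magnitude is pinned to alpha, which is what keeps head- and tail-class statistics on a comparable scale.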

🛠️ Installation

pip install -r requirements.txt

🚀 Training and Validation

1. Training Data Preparation

Before training, please download the datasets following Kang et al. Then, update the data_root parameter in the YAML configuration files found in the ./config directory.
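Only the data_root key is specified by this README; a hypothetical excerpt of one of the ./config YAML files might look like the following (the path is a placeholder you should replace with your own):

```yaml
# Hypothetical config excerpt; only data_root is documented above.
data_root: /path/to/datasets
```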

2. Training

We provide the launch.sh script to launch training. It supports both single-GPU and multi-GPU training in FP32 and FP16 (AMP) precision. For multi-GPU setups, both Data Parallel (DP) and Distributed Data Parallel (DDP) are supported.

# training with single gpu
python main.py /path/to/config --gpu $gpu_id --amp

# training with data parallel
python main.py /path/to/config --amp

# training with distributed data parallel
python main.py /path/to/config -d --world-size 1 --rank 0 --amp

3. Validation

Likewise, the code supports both single-GPU and multi-GPU validation. The command below uses Distributed Data Parallel (DDP) as an example.

python main.py /path/to/config -d --world-size 1 --rank 0 --amp -e --pretrain /path/to/checkpoints

The checkpoints can be freely downloaded from Google Drive or Baidu Pan.

⭐ Cite

If you find this project useful in your research, please consider starring the repository and citing our work:

@article{bao2024normalizing,
  title={Normalizing Batch Normalization for Long-Tailed Recognition},
  author={Bao, Yuxiang and Kang, Guoliang and Yang, Linlin and Duan, Xiaoyue and Zhao, Bo and Zhang, Baochang},
  journal={IEEE Transactions on Image Processing},
  year={2024},
  publisher={IEEE}
}

🎖️ Acknowledgement

This work builds upon decoupling (cRT), Balanced Softmax, RoBal, and supsup.

🦄 Contact

Please contact @yuxiangbao with questions, comments, and bug reports.
