This is a TensorFlow implementation of "Takashi Ishida, Ikko Yamane, Tomoya Sakai, Gang Niu, and Masashi Sugiyama. 2020. Do we need zero training loss after achieving zero training error? In Proceedings of the 37th International Conference on Machine Learning (ICML'20). JMLR.org, Article 428, 4604–4614."
Scripts for MNIST, Fashion-MNIST, and KMNIST are in the `models` folder. This work can be extended in several ways, for example by implementing the flooding algorithm for deep CNNs or by training while varying the flooding constant.
Proposed Objective Function:

$$\tilde{R}(g) = |\hat{R}(g) - b| + b,$$

where $\hat{R}(g)$ is the original training loss and $b > 0$ is the flood level. Gradient descent on $\tilde{R}$ behaves like ordinary descent while the training loss is above $b$ and like gradient ascent once it drops below $b$, so the training loss floats around $b$ instead of going to zero.
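Below is a minimal sketch of how this objective can be wrapped around a standard Keras loss. The flood level `b = 0.02`, the MLP architecture, and the optimizer are illustrative assumptions, not the repository's exact settings.

```python
import tensorflow as tf

# Flooding wraps the batch-mean loss L as |L - b| + b, so the optimizer
# descends while L > b and ascends once L < b.
def make_flooding_loss(base_loss, b=0.02):  # b = 0.02 is an assumed flood level
    def loss_fn(y_true, y_pred):
        loss = base_loss(y_true, y_pred)  # scalar batch-mean loss L
        return tf.abs(loss - b) + b
    return loss_fn

# Reduce the base loss to a scalar so flooding applies to the batch mean.
base = tf.keras.losses.SparseCategoricalCrossentropy(
    reduction=tf.keras.losses.Reduction.SUM_OVER_BATCH_SIZE)

# Illustrative MLP for 28x28 grayscale inputs such as MNIST.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss=make_flooding_loss(base, b=0.02),
              metrics=["accuracy"])
```

Training then proceeds with the usual `model.fit` call; only the loss surface changes.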
The following figures show the performance of an MLP classifier on the Fashion-MNIST dataset when trained with and without flooding. Without flooding, the test loss diverges over training; with flooding, it stabilizes.
| Comparing Accuracies | Comparing Losses |
| --- | --- |
| ![]() | ![]() |
Requirements:

- keras==2.8.0
- matplotlib==3.5.2
- numpy==1.22.3
- tensorflow==2.8.0
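Assuming a standard pip environment, the pinned dependencies above can be installed with:

```bash
pip install keras==2.8.0 matplotlib==3.5.2 numpy==1.22.3 tensorflow==2.8.0
```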
Run the following command for a demo of flooding on the MNIST dataset:

```bash
python demo.py
```
References:

- Takashi Ishida, Ikko Yamane, Tomoya Sakai, Gang Niu, and Masashi Sugiyama. 2020. Do we need zero training loss after achieving zero training error? In Proceedings of the 37th International Conference on Machine Learning (ICML'20). JMLR.org, Article 428, 4604–4614.
- TensorFlow: https://www.tensorflow.org/