
mx-lsoftmax

mxnet version of Large-Margin Softmax Loss for Convolutional Neural Networks.

Derivatives

All the formulas I used to calculate the derivatives are given below. You can check them yourself; if you find a mistake, please tell me or open an issue.

The derivatives below don't include the lambda from the paper, but the code does. Instead of using lambda to weight the original f_i_yi = |w_yi||x_i|cos(t), the code uses the name beta, because lambda is a reserved keyword in Python; beta plays exactly the same role as lambda.

[figure: derivative formulas]
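For reference, a sketch of the blended target logit from the paper, written with beta where the paper uses lambda (the full derivative expressions are in the image above):

```latex
% Target logit for the true class y_i, blended with weight \beta
% (written \lambda in the paper):
f_{y_i} = \frac{\beta\,\|w_{y_i}\|\,\|x_i\|\cos(\theta_{y_i})
              + \|w_{y_i}\|\,\|x_i\|\,\psi(\theta_{y_i})}{1 + \beta}

% with the piecewise margin function, for margin m and
% k chosen such that \theta \in [\tfrac{k\pi}{m}, \tfrac{(k+1)\pi}{m}]:
\psi(\theta) = (-1)^k \cos(m\theta) - 2k
```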

Gradient Check

Gradient check can fail with data type float32 but passes with float64. So don't be afraid if you see the gradient check fail under float32.
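This is a property of finite-difference checks rather than a bug in the operator. A minimal, self-contained sketch (independent of MXNet and this repo) showing how the same central-difference check degrades in float32:

```python
import numpy as np

def numeric_grad(f, x, eps):
    """Central-difference approximation of df/dx."""
    g = np.zeros_like(x)
    for i in range(x.size):
        x_plus = x.copy();  x_plus.flat[i] += eps
        x_minus = x.copy(); x_minus.flat[i] -= eps
        g.flat[i] = (f(x_plus) - f(x_minus)) / (2 * eps)
    return g

# f(x) = sum(x^4) has the exact gradient 4 * x^3
for dtype, eps in [(np.float32, 1e-2), (np.float64, 1e-6)]:
    x = np.random.uniform(-1, 1, 16).astype(dtype)
    approx = numeric_grad(lambda v: (v ** 4).sum(), x, dtype(eps))
    exact = 4 * x ** 3
    rel_err = np.abs(approx - exact).max() / np.abs(exact).max()
    print(dtype.__name__, 'max relative error:', rel_err)
```

The float32 error is orders of magnitude larger than the float64 error, so a tolerance tight enough to catch real bugs in float64 will report spurious failures in float32.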

Operator Performance

I implemented the operator in both Python and C++ (CUDA). The performance below is for training LeNet on a single GTX 1070 with margin = 4 and beta = 1. Note that the C++ implementation can only run on a GPU context.

| Batch Size | traditional fully connected | lsoftmax in Python | lsoftmax in C++ (CUDA) |
|-----------:|----------------------------:|-------------------------:|-----------------------:|
| 128 | ~45000 samples / sec | 2800 ~ 3300 samples / sec | ~40000 samples / sec |
| 256 | ~54000 samples / sec | 3500 ~ 4200 samples / sec | ~47000 samples / sec |
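A rough sketch of how such throughput numbers could be measured with the old MXNet symbolic API. The op_type name 'LSoftmax' and the num_hidden / margin / beta parameter names are assumptions here; check lsoftmax.py for the actual registration:

```python
import time
import numpy as np
import mxnet as mx

batch_size, feat_dim, num_classes = 256, 512, 10

# Assumed custom-op registration; see lsoftmax.py for the real names.
data = mx.sym.Variable('data')
label = mx.sym.Variable('label')
fc = mx.sym.Custom(data=data, label=label, num_hidden=num_classes,
                   margin=4, beta=1, op_type='LSoftmax')
net = mx.sym.SoftmaxOutput(data=fc, label=label)

ctx = mx.gpu(0)
exe = net.simple_bind(ctx=ctx, data=(batch_size, feat_dim),
                      label=(batch_size,))
exe.arg_dict['data'][:] = np.random.randn(batch_size, feat_dim)
exe.arg_dict['label'][:] = np.random.randint(0, num_classes, batch_size)

# Warm up, then time forward + backward passes.
for _ in range(10):
    exe.forward(is_train=True)
    exe.backward()
mx.nd.waitall()

n = 100
start = time.time()
for _ in range(n):
    exe.forward(is_train=True)
    exe.backward()
mx.nd.waitall()
print('%.0f samples / sec' % (n * batch_size / (time.time() - start)))
```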

Visualization

original softmax (traditional fully connected)

[figure: lsoftmax-margin-1]

lsoftmax with margin = 2, beta = 100 and scale = 0.99

[figure: lsoftmax-margin-2]

lsoftmax with margin = 3, beta = 100 and scale = 0.99

[figure: lsoftmax-margin-3]

lsoftmax with margin = 4, beta = 100 and scale = 0.99

[figure: lsoftmax-margin-4]

lsoftmax with margin = 6, beta = 1000 and scale = 0.99

[figure: lsoftmax-margin-6]
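The figures above are scatter plots of 2-D features learned on MNIST. A minimal matplotlib sketch of how such a plot can be rendered, assuming the 2-D features and labels have already been extracted from the network (the random arrays below are placeholders for them):

```python
import numpy as np
import matplotlib.pyplot as plt

# features: (N, 2) embeddings from the 2-D feature layer,
# labels: (N,) digit classes -- placeholders stand in for real data.
features = np.random.randn(1000, 2)
labels = np.random.randint(0, 10, 1000)

# One color per digit class, as in the figures above.
for digit in range(10):
    pts = features[labels == digit]
    plt.scatter(pts[:, 0], pts[:, 1], s=2, label=str(digit))
plt.legend(markerscale=4)
plt.title('2-D features learned with L-Softmax')
plt.show()
```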

References

Weiyang Liu, Yandong Wen, Zhiding Yu, Meng Yang. Large-Margin Softmax Loss for Convolutional Neural Networks. ICML 2016. arXiv:1612.02295.
