
Commit

equations fixed
adambielski committed Mar 6, 2018
1 parent 4ed17ef commit 6f180ca
Showing 5 changed files with 7 additions and 5 deletions.
5 changes: 3 additions & 2 deletions Experiments_FashionMNIST.ipynb
@@ -18,6 +18,7 @@
]
},
"colab_type": "code",
"collapsed": true,
"executionInfo": {
"elapsed": 3528,
"status": "ok",
@@ -507,7 +508,7 @@
"# Siamese network\n",
"Now we'll train a siamese network that takes a pair of images and trains the embeddings so that the distance between them is minimized if their from the same class or greater than some margin value if they represent different classes.\n",
"We'll minimize a contrastive loss function*:\n",
"$$L_{contrastive}(x_0, x_1, y) = \\frac{1}{2} y \\lVert f(x_0)-f(x_1)\\rVert_2^2 + \\frac{1}{2}(1-y)\\{max(0, m-\\lVert f(x_0)-f(x_1)\\rVert_2\\}^2$$\n",
"$$L_{contrastive}(x_0, x_1, y) = \\frac{1}{2} y \\lVert f(x_0)-f(x_1)\\rVert_2^2 + \\frac{1}{2}(1-y)\\{max(0, m-\\lVert f(x_0)-f(x_1)\\rVert_2)\\}^2$$\n",
"\n",
"*Raia Hadsell, Sumit Chopra, Yann LeCun, [Dimensionality reduction by learning an invariant mapping](http://yann.lecun.com/exdb/publis/pdf/hadsell-chopra-lecun-06.pdf), CVPR 2006*"
]
@@ -840,7 +841,7 @@
"![alt text](images/anchor_negative_positive.png \"Source: FaceNet\")\n",
"Source: [2] *Schroff, Florian, Dmitry Kalenichenko, and James Philbin. [Facenet: A unified embedding for face recognition and clustering.](https://arxiv.org/abs/1503.03832) CVPR 2015.*\n",
"\n",
"**Triplet loss**: $L_{triplet}(x_a, x_p, n) = m + \\lVert f(x_a)-f(x_p)\\rVert_2^2 - \\lVert f(x_a)-f(x_n)\\rVert_2^2$"
"**Triplet loss**: $L_{triplet}(x_a, x_p, x_n) = m + \\lVert f(x_a)-f(x_p)\\rVert_2^2 - \\lVert f(x_a)-f(x_n)\\rVert_2^2$"
]
},
{
2 changes: 1 addition & 1 deletion Experiments_MNIST.ipynb
@@ -490,7 +490,7 @@
"# Siamese network\n",
"Now we'll train a siamese network that takes a pair of images and trains the embeddings so that the distance between them is minimized if their from the same class or greater than some margin value if they represent different classes.\n",
"We'll minimize a contrastive loss function*:\n",
"$$L_{contrastive}(x_0, x_1, y) = \\frac{1}{2} y \\lVert f(x_0)-f(x_1)\\rVert_2^2 + \\frac{1}{2}(1-y)\\{max(0, m-\\lVert f(x_0)-f(x_1)\\rVert_2\\}^2$$\n",
"$$L_{contrastive}(x_0, x_1, y) = \\frac{1}{2} y \\lVert f(x_0)-f(x_1)\\rVert_2^2 + \\frac{1}{2}(1-y)\\{max(0, m-\\lVert f(x_0)-f(x_1)\\rVert_2)\\}^2$$\n",
"\n",
"*Raia Hadsell, Sumit Chopra, Yann LeCun, [Dimensionality reduction by learning an invariant mapping](http://yann.lecun.com/exdb/publis/pdf/hadsell-chopra-lecun-06.pdf), CVPR 2006*"
]
5 changes: 3 additions & 2 deletions README.md
@@ -60,7 +60,8 @@ While the embeddings look separable (which is what we trained them for), they do

Now we'll train a siamese network that takes a pair of images and trains the embeddings so that the distance between them is minimized if they're from the same class and is greater than some margin value if they represent different classes.
We'll minimize a contrastive loss function [1]:
-$$L_{contrastive}(x_0, x_1, y) = \frac{1}{2} y \lVert f(x_0)-f(x_1)\rVert_2^2 + \frac{1}{2}(1-y)\{max(0, m-\lVert f(x_0)-f(x_1)\rVert_2\}^2$$
+
+![](images/contrastive_loss.png)
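
For reference, this loss translates to only a few lines of PyTorch. The sketch below is illustrative (the function and argument names are ours, not taken from the repository); `target` is 1 for same-class pairs and 0 otherwise:

```python
import torch
import torch.nn.functional as F

def contrastive_loss(output1, output2, target, margin=1.0):
    # Euclidean distance between the two embeddings, ||f(x_0) - f(x_1)||_2
    distance = F.pairwise_distance(output1, output2)
    # Same-class pairs (target == 1) are pulled together...
    positive_term = target * distance.pow(2)
    # ...and different-class pairs are pushed at least `margin` apart
    negative_term = (1 - target) * F.relu(margin - distance).pow(2)
    return 0.5 * (positive_term + negative_term).mean()
```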

*SiameseMNIST* class samples random positive and negative pairs that are then fed to Siamese Network.

@@ -81,7 +82,7 @@ We'll train a triplet network, that takes an anchor, a positive (of same class a
![alt text](images/anchor_negative_positive.png "Source: FaceNet")
Source: *Schroff, Florian, Dmitry Kalenichenko, and James Philbin. [Facenet: A unified embedding for face recognition and clustering.](https://arxiv.org/abs/1503.03832) CVPR 2015.*

-**Triplet loss**: $L_{triplet}(x_a, x_p, x_n) = m + \lVert f(x_a)-f(x_p)\rVert_2^2 - \lVert f(x_a)-f(x_n)\rVert_2^2$
+**Triplet loss**: ![](images/triplet_loss.png)
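
A comparable illustrative PyTorch sketch of the triplet loss (again, names are ours; the hinge at zero follows the FaceNet formulation, so triplets that already satisfy the margin contribute nothing):

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=1.0):
    # Squared Euclidean distances ||f(x_a)-f(x_p)||_2^2 and ||f(x_a)-f(x_n)||_2^2
    d_positive = (anchor - positive).pow(2).sum(dim=1)
    d_negative = (anchor - negative).pow(2).sum(dim=1)
    # Hinge at zero (as in FaceNet) so satisfied triplets give zero loss
    return F.relu(margin + d_positive - d_negative).mean()
```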

*TripletMNIST* class samples a positive and negative example for every possible anchor.

Binary file added images/contrastive_loss.png
Binary file added images/triplet_loss.png
