
Commit

Added missing log function on week 7. (#826)
* [ar] Added missing log function

* [es] Added missing log function

* [es] Added missing log

* [FR] Added missing log function.

* [IT] Added missing log function.

* [PT] Added missing log function.

* [KO] Added missing log function.

* [TR] Added missing log function.

* [ZH] Added missing log function.

* [EN] Added missing log function.

See the presentation in the lecture videos.
Jerry-Master authored Aug 9, 2022
1 parent 5ad417c commit c42d8e7
Showing 9 changed files with 10 additions and 10 deletions.
docs/ar/week07/07-2.md (1 addition, 1 deletion)
@@ -67,7 +67,7 @@ $$
Maximum likelihood tries to make the numerator big and the denominator small to increase the likelihood. This is equivalent to minimizing $-\log(P(Y \mid W))$ as shown below

$$
- L(Y, W) = E(Y,W) + \frac{1}{\beta}\int_{y}e^{-\beta E(y,W)}
+ L(Y, W) = E(Y,W) + \frac{1}{\beta}\log\int_{y}e^{-\beta E(y,W)}
$$

The gradient of the negative log likelihood loss for one sample Y is as follows:
docs/en/week07/07-2.md (1 addition, 1 deletion)
@@ -67,7 +67,7 @@ $$
Maximum likelihood tries to make the numerator big and the denominator small to maximize the likelihood. This is equivalent to minimizing $-\log(P(Y \mid W))$ which is given below

$$
- L(Y, W) = E(Y,W) + \frac{1}{\beta}\int_{y}e^{-\beta E(y,W)}
+ L(Y, W) = E(Y,W) + \frac{1}{\beta}\log\int_{y}e^{-\beta E(y,W)}
$$
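
The restored $\log$ is what makes this the free energy form of the negative log likelihood. A short check, assuming the Gibbs form $P(Y \mid W) = e^{-\beta E(Y,W)} / \int_{y}e^{-\beta E(y,W)}$ used in these notes; since $\beta > 0$, scaling by $1/\beta$ does not change what is minimized:

$$
-\frac{1}{\beta}\log P(Y \mid W) = E(Y,W) + \frac{1}{\beta}\log\int_{y}e^{-\beta E(y,W)} = L(Y, W)
$$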

Gradient of the negative log likelihood loss for one sample Y is as follows:
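In code, the restored $\log$ turns the partition term into a log-sum-exp. A minimal PyTorch sketch under stated assumptions: `energy`, `Y`, and `candidates` are hypothetical names, `energy` maps a sample to a scalar energy tensor, and the integral over $y$ is approximated by a sum over a finite candidate set:

```python
import torch

def ebm_nll(energy, Y, candidates, beta=1.0):
    """Negative log likelihood loss for an energy-based model:
    L(Y, W) = E(Y, W) + (1/beta) * log sum_y exp(-beta * E(y, W)),
    with the integral over y approximated by the finite set `candidates`.
    """
    e_y = energy(Y)  # E(Y, W): energy of the observed sample
    e_all = torch.stack([energy(y) for y in candidates])
    # The log restored by this commit makes the second term a log-sum-exp,
    # which torch.logsumexp evaluates in a numerically stable way; without
    # it the term would be a plain (and unstable) sum of exponentials.
    return e_y + torch.logsumexp(-beta * e_all, dim=0) / beta
```

For a discrete label set, with $\beta = 1$ and energies taken as negative logits, this sketch reduces to the familiar cross-entropy loss.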
docs/es/week07/07-2.md (2 additions, 2 deletions)
@@ -122,7 +122,7 @@ $$
Maximum likelihood tries to make the numerator big and the denominator small to maximize the likelihood. This is equivalent to minimizing $-\log(P(Y \mid W))$ which is given below
$$
- L(Y, W) = E(Y,W) + \frac{1}{\beta}\int_{y}e^{-\beta E(y,W)}
+ L(Y, W) = E(Y,W) + \frac{1}{\beta} \log \int_{y}e^{-\beta E(y,W)}
$$
Gradient of the negative log likelihood loss for one sample Y is as follows:
@@ -154,7 +154,7 @@ Maximum likelihood tries to make the numerator big and the denominator small to maximize the likelihood.


$$
- L(Y, W) = E(Y,W) + \frac{1}{\beta}\int_{y}e^{-\beta E(y,W)}
+ L(Y, W) = E(Y,W) + \frac{1}{\beta}\log\int_{y}e^{-\beta E(y,W)}
$$

The gradient of the negative log likelihood loss function for one sample Y is as follows:
docs/fr/week07/07-2.md (1 addition, 1 deletion)
@@ -145,7 +145,7 @@ $$
Maximum likelihood tries to make the numerator big and the denominator small to maximize the probability. This is equivalent to minimizing $-\log(P(Y \mid W))$, which is given below:

$$
- L(Y, W) = E(Y,W) + \frac{1}{\beta}\int_{y}e^{-\beta E(y,W)}
+ L(Y, W) = E(Y,W) + \frac{1}{\beta}\log\int_{y}e^{-\beta E(y,W)}
$$

The gradient of the negative log likelihood loss for one sample Y is as follows:
docs/it/week07/07-2.md (1 addition, 1 deletion)
@@ -145,7 +145,7 @@ Maximum likelihood tries to make the numerator big and the denominator small to maximize the likelihood.
-->

$$
- L(Y, W) = E(Y,W) + \frac{1}{\beta}\int_{y}e^{-\beta E(y,W)}
+ L(Y, W) = E(Y,W) + \frac{1}{\beta}\log\int_{y}e^{-\beta E(y,W)}
$$

Gradient of the negative log likelihood loss for one sample Y is as follows:
docs/ko/week07/07-2.md (1 addition, 1 deletion)
@@ -115,7 +115,7 @@ $$
Maximum likelihood tries to make the numerator big and the denominator small to maximize the likelihood. This is equivalent to minimizing $-\log(P(Y \mid W))$, as given below.

$$
- L(Y, W) = E(Y,W) + \frac{1}{\beta}\int_{y}e^{-\beta E(y,W)}
+ L(Y, W) = E(Y,W) + \frac{1}{\beta}\log\int_{y}e^{-\beta E(y,W)}
$$

<!-- Gradient of the negative log likelihood loss for one sample Y is as follows: -->
docs/pt/week07/07-2.md (1 addition, 1 deletion)
@@ -128,7 +128,7 @@ $$
Maximum likelihood tries to make the numerator big and the denominator small to maximize the likelihood. This is equivalent to minimizing $-\log(P(Y \mid W))$, which is given below

<!--$$
- L(Y, W) = E(Y,W) + \frac{1}{\beta}\int_{y}e^{-\beta E(y,W)}
+ L(Y, W) = E(Y,W) + \frac{1}{\beta}\log\int_{y}e^{-\beta E(y,W)}
$$
-->

docs/tr/week07/07-2.md (1 addition, 1 deletion)
@@ -122,7 +122,7 @@ Maximum likelihood makes the numerator big and the denominator small to maximize the likelihood.
<!--Maximum likelihood tries to make the numerator big and the denominator small to maximize the likelihood. This is equivalent to minimizing $-\log(P(Y \mid W))$ which is given below-->

$$
- L(Y, W) = E(Y,W) + \frac{1}{\beta}\int_{y}e^{-\beta E(y,W)}
+ L(Y, W) = E(Y,W) + \frac{1}{\beta}\log\int_{y}e^{-\beta E(y,W)}
$$

The gradient of the negative log likelihood loss for one sample Y is as follows:
docs/zh/week07/07-2.md (1 addition, 1 deletion)
@@ -70,7 +70,7 @@ $$
Maximum likelihood tries to make the numerator big and the denominator small to maximize the likelihood. This is the same as minimizing $-\log(P(Y \mid W))$, as shown below

$$
- L(Y, W) = E(Y,W) + \frac{1}{\beta}\int_{y}e^{-\beta E(y,W)}
+ L(Y, W) = E(Y,W) + \frac{1}{\beta}\log\int_{y}e^{-\beta E(y,W)}
$$

The gradient of the negative log likelihood loss for one sample Y is as follows: