Commit
Merge pull request lisa-lab#86 from abergeron/lstm_float16
Use bigger epsilon for float16 so that it does not become 0.
nouiz committed May 25, 2015
2 parents 777c75a + cfba0cd commit b0dd8f0
Showing 1 changed file with 5 additions and 1 deletion: code/lstm.py
@@ -333,7 +333,11 @@ def build_model(tparams, options):
     f_pred_prob = theano.function([x, mask], pred, name='f_pred_prob')
     f_pred = theano.function([x, mask], pred.argmax(axis=1), name='f_pred')

-    cost = -tensor.log(pred[tensor.arange(n_samples), y] + 1e-8).mean()
+    off = 1e-8
+    if pred.dtype == 'float16':
+        off = 1e-6
+
+    cost = -tensor.log(pred[tensor.arange(n_samples), y] + off).mean()

     return use_noise, x, mask, y, f_pred_prob, f_pred, cost
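Why the epsilon must grow: 1e-8 is smaller than half of float16's smallest positive subnormal (2^-24, about 5.96e-8), so rounding it to float16 yields exactly 0, leaving `log(pred + 0)` free to produce `-inf` when a predicted probability underflows. A quick NumPy check (assuming NumPy is available; not part of the commit itself) illustrates the difference between the two constants:

```python
import numpy as np

# 1e-8 is below float16's smallest representable positive value (2**-24),
# so it rounds to exactly zero in half precision.
old_off = np.float16(1e-8)

# 1e-6 is above that threshold and survives the cast as a nonzero subnormal.
new_off = np.float16(1e-6)

print(old_off)  # 0.0 -> log(pred + old_off) can still be -inf
print(new_off)  # ~1e-06 -> keeps the log argument strictly positive
```

This is why the patch branches on `pred.dtype`: float32 and float64 can represent 1e-8 fine, so only the float16 path needs the larger offset.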

