Fix learning rate in docs
nikitakit committed Jan 2, 2019
1 parent d5a7f30 commit 8238e79
Showing 2 changed files with 2 additions and 2 deletions.
2 changes: 1 addition & 1 deletion EXPERIMENTS.md
@@ -24,7 +24,7 @@ python src/main.py train \
--use-bert --predict-tags \
--model-path-base models/nk_base9_large --bert-model "bert-large-uncased" \
--train-path data/02-21.goldtags --dev-path data/22.goldtags \
- --learning-rate 0.0005 --num-layers 2 --batch-size 32 --eval-batch-size 16 --subbatch-max-tokens 500
+ --learning-rate 0.00005 --num-layers 2 --batch-size 32 --eval-batch-size 16 --subbatch-max-tokens 500
```

Note that the last model enables part-of-speech tag prediction, which requires using a version of the WSJ data that contains gold tags. This data format is not provided in our repository and must be obtained separately. Disabling part-of-speech tag prediction and training on the data provided in this repository should give comparable parsing accuracies (but it's potentially less helpful for downstream use).
2 changes: 1 addition & 1 deletion README.md
@@ -199,7 +199,7 @@ python src/main.py train --use-elmo --model-path-base models/en_elmo --num-layer
To train an English parser that uses BERT, the command is:

```
- python src/main.py train --use-bert --model-path-base models/en_bert --bert-model "bert-large-uncased" --num-layers 2 --learning-rate 0.0005 --batch-size 32 --eval-batch-size 16 --subbatch-max-tokens 500
+ python src/main.py train --use-bert --model-path-base models/en_bert --bert-model "bert-large-uncased" --num-layers 2 --learning-rate 0.00005 --batch-size 32 --eval-batch-size 16 --subbatch-max-tokens 500
```

### Evaluation Instructions
