forked from facebookresearch/ParlAI
Two inaugural FAQ questions (facebookresearch#3073)
* Two inaugural FAQ questions
* Third question
1 parent 746e829, commit a3f581c. Showing 1 changed file with 15 additions and 2 deletions.
# Frequently Asked Questions

This document contains a number of questions that are regularly asked on GitHub Issues.
**Why is my model not generating a response?**

For a generative model, check that `--skip-generation` is set to `False`.
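As an illustrative sketch only (the model file path and task name below are placeholders, not part of the original answer), the check above can be applied on the command line with ParlAI's `eval_model` entry point:

```shell
# Placeholder model file and task; substitute your own.
# Passing --skip-generation False ensures decoded responses are produced
# during evaluation rather than only computing losses.
parlai eval_model \
  --model-file /path/to/model \
  --task convai2 \
  --skip-generation False
```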
**Why can't I reproduce the results of an evaluation on a task with a pretrained model?**

One common culprit is that the flags for that task may not be correctly set. When loading a pretrained checkpoint, all of the parameters for the model itself are loaded from the model's `.opt` file, but all task-specific parameters need to be re-specified.
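A hedged sketch of what re-specifying task parameters can look like, assuming the `parlai` CLI; the task name and flags here are illustrative placeholders, not the exact flags any particular task requires:

```shell
# Model parameters (architecture, embedding size, etc.) come from the
# checkpoint's .opt file automatically; task-specific options such as
# the task name and datatype must be passed again explicitly.
parlai eval_model \
  --model-file /path/to/model \
  --task wizard_of_wikipedia \
  --datatype valid
```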
**Why is my generative model's perplexity so high (>1000) when evaluating?**

The first thing to check is whether there is a problem with your dictionary or token embeddings, because such high perplexity implies that the model is very bad at predicting the next token in a string of text.
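To see why a perplexity above 1000 is a red flag, recall that perplexity is the exponential of the average per-token negative log-likelihood, and a model that guesses uniformly at random over its vocabulary has perplexity equal to the vocabulary size. A minimal sketch (the vocabulary size here is an arbitrary illustrative value):

```python
import math

def perplexity(avg_nll: float) -> float:
    # Perplexity is exp of the average per-token negative log-likelihood.
    return math.exp(avg_nll)

# Illustrative vocabulary size (not a ParlAI default).
vocab_size = 50000

# A uniform-random predictor assigns probability 1/vocab_size to every
# token, so its average NLL is log(vocab_size) and its perplexity is
# exactly the vocabulary size.
uniform_ppl = perplexity(math.log(vocab_size))
print(round(uniform_ppl))  # → 50000
```

So a perplexity in the thousands means the model is doing little better than guessing among thousands of tokens, which usually points to a mismatch between the dictionary or embeddings and the checkpoint rather than a merely under-trained model.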