Dependency parsing is the task of extracting a dependency parse of a sentence that represents its grammatical structure and defines the relationships between "head" words and the words that modify those heads.
Example:

```
I prefer the morning flight through Denver

root(prefer)
nsubj(prefer, I)
dobj(prefer, flight)
det(flight, the)
nmod(flight, morning)
nmod(flight, Denver)
case(Denver, through)
```
Relations among the words are expressed as directed, labeled arcs from heads to dependents.
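In practice, a parse like this is commonly stored as one head index and one relation label per token (CoNLL-style). A minimal sketch, assuming 1-based head indices with 0 marking the root (variable names here are illustrative):

```python
# The example sentence encoded as one (head, label) pair per token.
# Assumption: CoNLL-style 1-based head indices, with head 0 marking the root.
tokens = ["I", "prefer", "the", "morning", "flight", "through", "Denver"]
heads  = [2, 0, 5, 5, 2, 7, 5]  # index of each token's head word
labels = ["nsubj", "root", "det", "nmod", "dobj", "case", "nmod"]

# Recover the labeled arcs, written label(head, dependent)
arcs = []
for tok, h, lab in zip(tokens, heads, labels):
    head_word = "ROOT" if h == 0 else tokens[h - 1]
    arcs.append(f"{lab}({head_word}, {tok})")

print("\n".join(arcs))
```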
Models are evaluated on the Stanford Dependencies conversion (v3.3.0) of the Penn Treebank with predicted POS tags. Punctuation symbols are excluded from the evaluation. The evaluation metrics are unlabeled attachment score (UAS), the percentage of words assigned the correct head, and labeled attachment score (LAS), the percentage of words assigned both the correct head and the correct dependency label. Predicted POS tagging accuracy is also reported.
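Given gold and predicted (head, label) pairs for each word, UAS and LAS reduce to simple counting. A minimal sketch (the `attachment_scores` helper and the erroneous `compound` label are illustrative, not taken from any particular toolkit):

```python
def attachment_scores(gold, pred):
    """Return (UAS, LAS) over aligned lists of (head, label) pairs.
    UAS counts correct heads; LAS additionally requires the correct label.
    Punctuation tokens are assumed to have been filtered out already."""
    assert len(gold) == len(pred)
    n = len(gold)
    uas = sum(gh == ph for (gh, _), (ph, _) in zip(gold, pred)) / n
    las = sum(g == p for g, p in zip(gold, pred)) / n
    return uas, las

# "I prefer the morning flight through Denver": 1-based heads, 0 = root
gold = [(2, "nsubj"), (0, "root"), (5, "det"), (5, "nmod"),
        (2, "dobj"), (7, "case"), (5, "nmod")]
# Hypothetical parser output: every head is right, one label is wrong
pred = [(2, "nsubj"), (0, "root"), (5, "det"), (5, "compound"),
        (2, "dobj"), (7, "case"), (5, "nmod")]

uas, las = attachment_scores(gold, pred)
print(f"UAS = {uas:.3f}, LAS = {las:.3f}")  # UAS = 1.000, LAS = 0.857
```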
The following results are for reference only:
Model | UAS | LAS | Paper / Source | Note |
---|---|---|---|---|
Stack-only RNNG (Kuncoro et al., 2017) | 95.8 | 94.6 | What Do Recurrent Neural Network Grammars Learn About Syntax? | Constituent parser |
Semi-supervised LSTM-LM (Choe and Charniak, 2016) | 95.9 | 94.1 | Parsing as Language Modeling | Constituent parser |
Deep Biaffine (Dozat and Manning, 2017) | 95.66 | 94.03 | Deep Biaffine Attention for Neural Dependency Parsing | Stanford conversion v3.5.0 |