Semantic parsing is the task of translating natural language into a formal meaning representation on which a machine can act. Representations may be an executable language such as SQL or a more abstract representation such as Abstract Meaning Representation (AMR).
Each AMR is a single rooted, directed graph. AMRs include PropBank semantic roles, within-sentence coreference, named entities and types, modality, negation, questions, quantities, and so on. See the AMR specification for more details.
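For example, the sentence "The boy wants to go" (a standard example from the AMR literature) is written in PENMAN notation as the graph below; reusing the variable `b` captures the within-sentence coreference between the one who wants and the one who goes:

```
(w / want-01
   :ARG0 (b / boy)
   :ARG1 (g / go-01
            :ARG0 b))
```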
LDC2014T12: 13,051 sentences
Models are evaluated on the newswire section and on the full dataset with Smatch. Systems marked with * are pipeline systems that require the output of other systems (e.g., a dependency parse or an SRL parse) as input.
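Smatch (Cai and Knight, 2013) decomposes each AMR into triples, searches for the one-to-one variable mapping between the two graphs that maximizes triple overlap (the exact search is NP-hard, so the metric uses hill-climbing), and reports the F1 of the overlapping triples. Below is a minimal sketch of the scoring step only, assuming the variable alignment has already been fixed; the triples are illustrative:

```python
# Sketch of the Smatch scoring step, assuming a fixed variable alignment.
# Real Smatch (Cai & Knight, 2013) additionally hill-climbs over one-to-one
# variable mappings, since finding the optimal alignment is NP-hard.

def smatch_f1(pred_triples: set, gold_triples: set) -> float:
    """F1 over matching (relation, source, target) triples of two AMRs."""
    matched = len(pred_triples & gold_triples)
    if matched == 0:
        return 0.0
    precision = matched / len(pred_triples)
    recall = matched / len(gold_triples)
    return 2 * precision * recall / (precision + recall)

# Toy graphs: the prediction gets both concepts right but mislabels the role.
gold = {("instance", "w", "want-01"), ("instance", "b", "boy"), ("ARG0", "w", "b")}
pred = {("instance", "w", "want-01"), ("instance", "b", "boy"), ("ARG1", "w", "b")}
print(f"{smatch_f1(pred, gold):.2f}")  # 0.67 -- 2 of 3 triples match
```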
Model | F1 Newswire | F1 Full | Paper / Source |
---|---|---|---|
Incremental joint model (Zhou et al., 2016)* | 0.71 | 0.66 | AMR Parsing with an Incremental Joint Model |
Transition-based transducer (Wang et al., 2015)* | 0.70 | 0.66 | Boosting Transition-based AMR Parsing with Refined Actions and Auxiliary Analyzers |
Imitation learning (Goodman et al., 2016)* | 0.70 | -- | Noise reduction and targeted exploration in imitation learning for Abstract Meaning Representation parsing |
MT-Based (Pust et al., 2015)* | -- | 0.66 | Parsing English into Abstract Meaning Representation Using Syntax-Based Machine Translation |
Transition-based parser-Stack-LSTM (Ballesteros and Al-Onaizan, 2017)* | 0.69 | 0.64 | AMR Parsing using Stack-LSTMs |
Transition-based parser-Stack-LSTM (Ballesteros and Al-Onaizan, 2017) | 0.68 | 0.63 | AMR Parsing using Stack-LSTMs |
LDC2015E86: 19,572 sentences
Models are evaluated with Smatch.
Model | Smatch | Paper / Source |
---|---|---|
Joint model (Lyu and Titov, 2018) | 73.7 | AMR Parsing as Graph Prediction with Latent Alignment |
Mul-BiLSTM (Foland and Martin, 2017) | 70.7 | Abstract Meaning Representation Parsing using LSTM Recurrent Neural Networks |
JAMR (Flanigan et al., 2016) | 67.0 | CMU at SemEval-2016 Task 8: Graph-based AMR Parsing with Infinite Ramp Loss |
CAMR (Wang et al., 2016) | 66.5 | CAMR at SemEval-2016 Task 8: An Extended Transition-based AMR Parser |
AMREager (Damonte et al., 2017) | 64.0 | An Incremental Parser for Abstract Meaning Representation |
SEQ2SEQ + 20M (Konstas et al., 2017) | 62.1 | Neural AMR: Sequence-to-Sequence Models for Parsing and Generation |
LDC2016E25: 39,260 sentences
Results are computed over 8 runs. Models are evaluated with Smatch.
Model | Smatch | Paper / Source |
---|---|---|
Joint model (Lyu and Titov, 2018) | 74.4 | AMR Parsing as Graph Prediction with Latent Alignment |
ChSeq + 100K (van Noord and Bos, 2017) | 71.0 | Neural Semantic Parsing by Character-based Translation: Experiments with Abstract Meaning Representations |
Neural-Pointer (Buys and Blunsom, 2017) | 61.9 | Oxford at SemEval-2017 Task 9: Neural AMR Parsing with Pointer-Augmented Attention |
The WikiSQL dataset consists of 87,673 examples of questions, SQL queries, and database tables built from 26,521 tables. Train/dev/test splits are provided so that each table appears in only one split. Models are evaluated on execution accuracy: a predicted query counts as correct if executing it returns the same result as executing the gold query.
Example:
Question | SQL query |
---|---|
How many engine types did Val Musetti use? | `SELECT COUNT Engine WHERE Driver = Val Musetti` |
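Execution accuracy can be checked by running both queries against the table and comparing results, as in this rough sketch (the table schema, the rows, and the rewriting of WikiSQL's canonical query form into valid SQLite are illustrative assumptions, not the official evaluation script):

```python
import sqlite3

def execution_match(conn: sqlite3.Connection, pred_sql: str, gold_sql: str) -> bool:
    """A prediction is correct iff it executes to the same result as the gold query."""
    try:
        pred_rows = conn.execute(pred_sql).fetchall()
    except sqlite3.Error:
        return False  # unexecutable predictions count as wrong
    return pred_rows == conn.execute(gold_sql).fetchall()

# Illustrative table standing in for the WikiSQL table behind the example above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE races (Driver TEXT, Engine TEXT)")
conn.executemany("INSERT INTO races VALUES (?, ?)",
                 [("Val Musetti", "Ford"), ("Val Musetti", "Climax"),
                  ("Jim Clark", "Climax")])

gold_sql = "SELECT COUNT(Engine) FROM races WHERE Driver = 'Val Musetti'"
pred_sql = "SELECT COUNT(*) FROM races WHERE Driver = 'Val Musetti'"
print(execution_match(conn, pred_sql, gold_sql))  # True: both count 2 rows
```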
Model | Execution accuracy | Paper / Source |
---|---|---|
TypeSQL+TC (Yu et al., 2018) | 82.6 | TypeSQL: Knowledge-based Type-Aware Neural Text-to-SQL Generation |
SQLNet (Xu et al., 2017) | 68.0 | SQLNet: Generating Structured Queries from Natural Language Without Reinforcement Learning |
Seq2SQL (Zhong et al., 2017) | 59.4 | Seq2SQL: Generating Structured Queries from Natural Language Using Reinforcement Learning |