Added a section for Multimodal Metaphor Recognition
Added a section for some of the research on Multimodal (specifically visual and text) Metaphor Recognition.
TomLisankie committed Jun 27, 2018
1 parent eb177bc commit a26e17b
Showing 1 changed file with 13 additions and 1 deletion.
14 changes: 13 additions & 1 deletion multimodal.md
@@ -31,4 +31,16 @@ The MOSI dataset ([Zadeh et al., 2016](https://arxiv.org/pdf/1606.06259.pdf)) is
| bc-LSTM (Poria et al., 2017) | 80.3% | [Context-Dependent Sentiment Analysis in User-Generated Videos](http://sentic.net/context-dependent-sentiment-analysis-in-user-generated-videos.pdf) |
| MARN (Zadeh et al., 2018) | 77.1% | [Multi-attention Recurrent Network for Human Communication Comprehension](https://arxiv.org/pdf/1802.00923.pdf) |

## Multimodal Metaphor Recognition

[Mohammad et al., 2016](http://www.aclweb.org/anthology/S16-2003) created a dataset of verb-noun pairs, built from WordNet verbs with multiple senses, and annotated each pair for metaphoricity (metaphor or not a metaphor).

[Tsvetkov et al., 2014](http://www.aclweb.org/anthology/P14-1024) created a dataset of adjective-noun pairs, which they then annotated for metaphoricity.

Both datasets are in English.

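As a rough sketch of what experiments on these pair-level annotations might consume, the snippet below loads word pairs with binary metaphoricity labels from a hypothetical tab-separated file (`word1<TAB>word2<TAB>label`); the file name and column layout are assumptions for illustration, not either paper's actual release format.

```python
import csv

def load_pairs(path):
    """Read (word_a, word_b, label) rows from a tab-separated file,
    where label is 1 for metaphorical and 0 for literal usage."""
    pairs = []
    with open(path, encoding="utf-8") as f:
        for word_a, word_b, label in csv.reader(f, delimiter="\t"):
            pairs.append((word_a, word_b, int(label)))
    return pairs

# Hypothetical usage:
# load_pairs("verb_noun_pairs.tsv")
# -> [("drown", "sorrows", 1), ("drown", "swimmer", 0), ...]
```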
| Model | Score | Paper / Source | Code |
| ------------------------------------------------------------ | :----------: | ------------------------------------------------------------ | ----------- |
| 5-layer convolutional network (Krizhevsky et al., 2012) + Word2Vec | 75.0%, 79.0% | [Shutova et al., 2016](http://www.aclweb.org/anthology/N16-1020) | Unavailable |

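The table entry above pairs textual embeddings (Word2Vec) with visual embeddings from a convolutional network. As a rough illustration of that kind of multimodal signal, the sketch below blends linguistic and visual cosine similarities for a word pair and thresholds the result, on the assumption that low cross-word similarity hints at metaphorical usage; the embeddings, `alpha` weight, and `threshold` are illustrative placeholders, not Shutova et al.'s actual setup.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def looks_metaphorical(ling_a, ling_b, vis_a, vis_b, alpha=0.5, threshold=0.2):
    """Blend linguistic and visual similarity of the two words in a pair.
    Low similarity between the paired words is taken as evidence that the
    combination is metaphorical. alpha and threshold are illustrative,
    not tuned values from the paper."""
    blended = alpha * cosine(ling_a, ling_b) + (1 - alpha) * cosine(vis_a, vis_b)
    return blended < threshold  # True -> flag the pair as a metaphor
```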
[Go back to the README](README.md)
