Named entity recognition (NER) is the task of tagging entities in text with their corresponding type. Approaches typically use BIO notation, which differentiates the beginning (B) and the inside (I) of entities. O is used for non-entity tokens.
Example:
Mark | Watney | visited | Mars |
---|---|---|---|
B-PER | I-PER | O | B-LOC |
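Before scoring, the BIO tags above are typically decoded into typed spans. Below is a minimal Python sketch of that decoding; the function name and the lenient handling of stray I- tags are illustrative choices, not part of any official scorer.

```python
# Minimal sketch: decoding a BIO tag sequence into typed spans.
def bio_to_spans(tags):
    """Return (type, start, end) spans from BIO tags; end is exclusive.
    A stray I- tag with no open entity of the same type starts a new span,
    a common lenient convention."""
    spans = []
    start, etype = None, None
    for i, tag in enumerate(tags):
        # Close the open span on O, on B-, or on an I- of a different type.
        if etype is not None and (
            tag == "O" or tag.startswith("B-") or tag[2:] != etype
        ):
            spans.append((etype, start, i))
            start, etype = None, None
        # Open a new span on B-, or on a stray I-.
        if tag != "O" and etype is None:
            start, etype = i, tag[2:]
    if etype is not None:
        spans.append((etype, start, len(tags)))
    return spans

print(bio_to_spans(["B-PER", "I-PER", "O", "B-LOC"]))
# -> [('PER', 0, 2), ('LOC', 3, 4)]
```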
The CoNLL 2003 NER task consists of newswire text from the Reuters RCV1 corpus tagged with four different entity types (PER, LOC, ORG, MISC). Models are evaluated based on span-based F1 on the test set.
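Span-based F1 counts a predicted entity as correct only when both its boundaries and its type exactly match a gold span. A minimal sketch follows, reusing the `bio_to_spans` helper above; the official `conlleval` scorer handles additional edge cases, so this is illustrative only.

```python
# Minimal sketch of span-based (exact-match) F1, reusing bio_to_spans above.
def span_f1(gold_sequences, pred_sequences):
    """Micro-averaged F1 over exact (type, start, end) span matches."""
    tp = fp = fn = 0
    for gold_tags, pred_tags in zip(gold_sequences, pred_sequences):
        gold = set(bio_to_spans(gold_tags))
        pred = set(bio_to_spans(pred_tags))
        tp += len(gold & pred)   # spans with matching type and boundaries
        fp += len(pred - gold)   # predicted spans with no gold match
        fn += len(gold - pred)   # gold spans the model missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)
```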
The WNUT 2017 Emerging Entities task operates over a wide range of English text and focuses on generalisation beyond memorisation in high-variance environments. Scores are given both over entity chunk instances and over unique entity surface forms, to normalise the biasing impact of frequently occurring entities.
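As an illustration of the surface-form variant, the sketch below counts each unique (type, surface string) pair once, so frequent entities cannot dominate the score. The task's own scoring script (linked below) is the reference implementation; this is only a hedged approximation of the idea.

```python
# Illustrative sketch of the surface-form measure: each unique
# (type, surface string) pair counts once, so one very frequent entity
# contributes no more to the score than an entity seen a single time.
def surface_form_f1(gold_pairs, pred_pairs):
    gold, pred = set(gold_pairs), set(pred_pairs)
    tp = len(gold & pred)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)

gold = [("person", "mark watney"), ("location", "mars"), ("location", "mars")]
pred = [("person", "mark watney"), ("location", "mars")]
print(surface_form_f1(gold, pred))  # 1.0: repeated mentions collapse to one form
```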
Feature | Train | Dev | Test |
---|---|---|---|
Posts | 3,395 | 1,009 | 1,287 |
Tokens | 62,729 | 15,733 | 23,394 |
NE tokens | 3,160 | 1,250 | 1,589 |
The data is annotated with six classes: person, location, group, creative work, product, and corporation.
Links: WNUT 2017 Emerging Entities task page (including direct download links for data and scoring script)
Model | F1 | F1 (surface form) | Paper / Source |
---|---|---|---|
Flair embeddings (Akbik et al., 2018) | 50.20 | | Contextual String Embeddings for Sequence Labeling / Flair framework
Aguilar et al. (2018) | 45.55 | | Modeling Noisiness to Recognize Named Entities using Multitask Neural Networks on Social Media
SpinningBytes | 40.78 | 39.33 | Transfer Learning and Sentence Level Features for Named Entity Recognition on Tweets |
The OntoNotes 5.0 corpus is a richly annotated corpus with several layers of annotation, including named entities, coreference, part of speech, word sense, propositions, and syntactic parse trees. These annotations cover a large number of tokens, a broad cross-section of domains, and three languages (English, Arabic, and Chinese). The NER annotation (of interest here) uses 18 tags, consisting of 11 entity types (PERSON, ORGANIZATION, etc.) and 7 value types (DATE, PERCENT, etc.), and covers 2 million tokens. The data split commonly used for NER is defined in Pradhan et al. (2013) and can be found here.
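For working with the split, a hedged sketch of a loader follows. The Pradhan et al. (2013) release is distributed in the multi-column CoNLL-2012 format; the sketch assumes the data has already been flattened to the simple token/tag layout that many NER toolkits consume, which is a common preprocessing step rather than the official format.

```python
# Hedged sketch: reading a two-column "token tag" file with blank lines
# separating sentences. Assumes pre-flattened data, not the raw
# multi-column CoNLL-2012 release.
def read_two_column_conll(path):
    sentences, tokens, tags = [], [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.split()
            if not parts:  # blank line closes the current sentence
                if tokens:
                    sentences.append((tokens, tags))
                    tokens, tags = [], []
            else:
                tokens.append(parts[0])
                tags.append(parts[-1])
    if tokens:  # flush a trailing sentence with no final blank line
        sentences.append((tokens, tags))
    return sentences
```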
Model | F1 | Paper / Source | Code |
---|---|---|---|
Flair embeddings (Akbik et al., 2018) | 89.71 | Contextual String Embeddings for Sequence Labeling | Official |
CVT + Multi-Task (Clark et al., 2018) | 88.81 | Semi-Supervised Sequence Modeling with Cross-View Training | Official |
Bi-LSTM-CRF + Lexical Features (Ghaddar and Langlais, 2018) | 87.95 | Robust Lexical Features for Improved Neural Network Named-Entity Recognition | |
BiLSTM-CRF (Strubell et al., 2017) | 86.99 | Fast and Accurate Entity Recognition with Iterated Dilated Convolutions | Official
Iterated Dilated CNN (Strubell et al., 2017) | 86.84 | Fast and Accurate Entity Recognition with Iterated Dilated Convolutions | Official
Joint Model (Durrett and Klein, 2014) | 84.04 | A Joint Model for Entity Analysis: Coreference, Typing, and Linking | |
Averaged Perceptron (Ratinov and Roth, 2009) | 83.45 | Design Challenges and Misconceptions in Named Entity Recognition (scores as reported in Durrett and Klein, 2014) | Official