# CITATION.cff
cff-version: 1.2.0
title: >-
  N-LTP: An Open-source Neural Language Technology Platform for Chinese
message: >-
  If you use this software, please cite using the
  following metadata.
type: software
authors:
  - given-names: Wanxiang
    family-names: Che
    affiliation: SCIR
    email: [email protected]
  - given-names: Yunlong
    family-names: Feng
    affiliation: SCIR
    email: [email protected]
  - given-names: Libo
    family-names: Qin
    affiliation: SCIR
    email: [email protected]
  - given-names: Ting
    family-names: Liu
    affiliation: SCIR
    email: [email protected]
doi: 10.18653/v1/2021.emnlp-demo.6
url: "https://github.com/HIT-SCIR/ltp"
repository-code: "https://github.com/HIT-SCIR/ltp"
abstract: >-
  We introduce N-LTP, an open-source neural language technology platform supporting six fundamental Chinese NLP tasks: lexical analysis (Chinese word segmentation, part-of-speech tagging, and named entity recognition), syntactic parsing (dependency parsing), and semantic parsing (semantic dependency parsing and semantic role labeling). Unlike existing state-of-the-art toolkits such as Stanza, which adopt an independent model for each task, N-LTP adopts a multi-task framework with a shared pre-trained model, which has the advantage of capturing shared knowledge across the related Chinese tasks. In addition, a knowledge distillation method (Clark et al., 2019), in which single-task models teach the multi-task model, is further introduced to encourage the multi-task model to surpass its single-task teachers. Finally, we provide a collection of easy-to-use APIs and a visualization tool that let users use and view the processing results more easily and directly. To the best of our knowledge, this is the first toolkit to support six fundamental Chinese NLP tasks. Source code, documentation, and pre-trained models are available at https://github.com/HIT-SCIR/ltp.
keywords:
  - neural network
  - natural language
  - Chinese
  - multi-task learning
  - knowledge distillation
version: "4.0"
date-released: 2020-06-14
identifiers:
  - type: url
    value: "https://github.com/HIT-SCIR/ltp"
    description: The GitHub repository URL
preferred-citation:
  type: conference-paper
  authors:
    - given-names: Wanxiang
      family-names: Che
      affiliation: SCIR
      email: [email protected]
    - given-names: Yunlong
      family-names: Feng
      affiliation: SCIR
      email: [email protected]
    - given-names: Libo
      family-names: Qin
      affiliation: SCIR
      email: [email protected]
    - given-names: Ting
      family-names: Liu
      affiliation: SCIR
      email: [email protected]
  collection-title: "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations"
  url: "https://aclanthology.org/2021.emnlp-demo.6"
  doi: "10.18653/v1/2021.emnlp-demo.6"
  publisher:
    name: "Association for Computational Linguistics"
  month: 11
  year: 2021
  address: "Online and Punta Cana, Dominican Republic"
  start: 42
  end: 49
  title: "N-LTP: An Open-source Neural Language Technology Platform for Chinese"
  abstract: >-
    We introduce N-LTP, an open-source neural language technology platform supporting six fundamental Chinese NLP tasks: lexical analysis (Chinese word segmentation, part-of-speech tagging, and named entity recognition), syntactic parsing (dependency parsing), and semantic parsing (semantic dependency parsing and semantic role labeling). Unlike existing state-of-the-art toolkits such as Stanza, which adopt an independent model for each task, N-LTP adopts a multi-task framework with a shared pre-trained model, which has the advantage of capturing shared knowledge across the related Chinese tasks. In addition, a knowledge distillation method (Clark et al., 2019), in which single-task models teach the multi-task model, is further introduced to encourage the multi-task model to surpass its single-task teachers. Finally, we provide a collection of easy-to-use APIs and a visualization tool that let users use and view the processing results more easily and directly. To the best of our knowledge, this is the first toolkit to support six fundamental Chinese NLP tasks. Source code, documentation, and pre-trained models are available at https://github.com/HIT-SCIR/ltp.