Stars
Start building LLM-empowered multi-agent applications in an easier way.
A large-scale 7B pretrained language model developed by BaiChuan-Inc.
Code for our NAACL-2022 paper DEGREE: A Data-Efficient Generation-Based Event Extraction Model.
This project is an information extraction tool comprising four parts: named entity recognition, event extraction, event causality extraction, and event factuality discrimination.
Making large AI models cheaper, faster and more accessible
A review of top solutions from NLP competitions — NLP competitions only, continuously updated!
Chinese and English sensitive-word lists, language detection, carrier/region lookup for Chinese and international phone numbers, gender inference from names, phone-number extraction, ID-card-number extraction, email extraction, Chinese and Japanese personal-name databases, Chinese abbreviation database, character-decomposition dictionary, word sentiment scores, stop words, reactionary-word list, terrorism-related word list, traditional/simplified Chinese conversion, English words imitating Chinese pronunciation, Wang Feng lyrics generator, occupation-name lexicon, synonym lexicon, antonym lexicon, negation-word lexicon, car-brand lexicon, car-part lexicon, concatenated-English segmentation, various Chinese word embeddings, company-name list, classical poetry database, IT lexicon, finance lexicon, idiom lexicon, place-name lexicon, …
A TensorFlow Implementation of the Transformer: Attention Is All You Need
Code of Directional Self-Attention Network (DiSAN)
A PyTorch implementation of the Transformer model in "Attention is All You Need".
Code and model files for the paper: "A Multilayer Convolutional Encoder-Decoder Neural Network for Grammatical Error Correction" (AAAI-18).
[2017 Zhihu Kanshan Cup, multi-label text classification] Team ye's solution (6th place)