Stars
🐙 Guides, papers, lectures, notebooks, and resources for prompt engineering
Official implementations for various pre-training models of ERNIE-family, covering topics of Language Understanding & Generation, Multimodal Understanding & Generation, and beyond.
Take neural networks as APIs for human-like AI.
Visualizer for neural network, deep learning and machine learning models
Officially maintained and supported by PaddlePaddle, covering CV, NLP, Speech, Rec, TS, big models, and so on.
PArallel Distributed Deep LEarning: Machine Learning Framework from Industrial Practice (the PaddlePaddle core framework: high-performance single-machine and distributed training and cross-platform deployment for deep learning and machine learning)
Stanford NLP Python library for tokenization, sentence segmentation, NER, and parsing of many human languages
CoreNLP: A Java suite of core NLP tools for tokenization, sentence segmentation, NER, parsing, coreference, sentiment analysis, etc.
Distributed training framework for TensorFlow, Keras, PyTorch, and Apache MXNet.
TensorFlow Neural Machine Translation Tutorial
TensorFlow code and pre-trained models for BERT
Phrase-Based & Neural Unsupervised Machine Translation
A framework for training and evaluating AI models on a variety of openly available dialogue datasets.
Repository to track the progress in Natural Language Processing (NLP), including the datasets and the current state-of-the-art for the most common NLP tasks.
All kinds of text classification models and more, built with deep learning
Pre-training of Deep Bidirectional Transformers for Language Understanding: pre-train TextCNN
Rasa Core is now part of the Rasa repo: an open-source machine learning framework to automate text- and voice-based conversations
Microsoft Cognitive Toolkit (CNTK), an open source deep-learning toolkit
A fast, distributed, high performance gradient boosting (GBT, GBDT, GBRT, GBM or MART) framework based on decision tree algorithms, used for ranking, classification and many other machine learning …
A complete and graceful API for WeChat: a personal WeChat account interface, WeChat bots, and a command-line WeChat client; build a custom personal-account bot in as little as thirty lines of code.
Deep Learning 500 Questions: a Q&A-style treatment of frequently asked topics in probability, linear algebra, machine learning, deep learning, computer vision, and other areas of current interest, written to help the author and interested readers. The book comprises 18 chapters and over 500,000 characters. Given the author's limited expertise, readers are kindly invited to point out any errors. To be continued............ For collaboration inquiries, contact [email protected]. All rights reserved; infringement will be prosecuted. Tan 2018.06
Library of deep learning models and datasets designed to make deep learning more accessible and accelerate ML research.