This nlp-models project is a tutorial for those who are studying NLP (Natural Language Processing) using TensorFlow, PyTorch, and Keras. Most of the models are implemented in less than 200 lines of code.
- 1-1. NNLM(Neural Network Language Model) - Predict Next Word
- 1-2. Word2Vec(Skip-gram) - Embedding Words and Show Graph
- 1-3. FastText(Application Level) - Sentence Classification
- 2-1. TextCNN - Binary Sentiment Classification
- 2-2. DCNN(Dynamic Convolutional Neural Network)
- 3-1. TextRNN - Predict Next Step
- Paper - Finding Structure in Time(1990)
- 3-2. TextLSTM - Autocomplete
- Paper - LONG SHORT-TERM MEMORY(1997)
- 3-3. Bi-LSTM - Predict Next Word in Long Sentence
- 4-1. Seq2Seq - Change Word
- 4-2. Seq2Seq with Attention - Translate
- 4-3. Bi-LSTM with Attention - Binary Sentiment Classification
- 5-1. The Transformer - Translate
- Paper - Attention Is All You Need(2017)
- 5-2. BERT - Classify Next Sentence & Predict Masked Tokens
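To give a feel for the "under 200 lines" style of the models above, here is a minimal sketch of 1-1 NNLM (predict the next word) in PyTorch. The toy sentences, layer sizes, and variable names are illustrative choices, not the repository's exact code.

```python
import torch
import torch.nn as nn
import torch.optim as optim

torch.manual_seed(0)

# Toy corpus: the first two words are the context, the last word is the target.
sentences = ["i like dog", "i love coffee", "i hate milk"]
word_list = sorted(set(" ".join(sentences).split()))
word2idx = {w: i for i, w in enumerate(word_list)}
idx2word = {i: w for w, i in word2idx.items()}
V = len(word_list)              # vocabulary size
n_step, n_hidden, m = 2, 8, 4   # context length, hidden units, embedding dim

def make_batch():
    inputs, targets = [], []
    for sen in sentences:
        words = sen.split()
        inputs.append([word2idx[w] for w in words[:-1]])
        targets.append(word2idx[words[-1]])
    return torch.LongTensor(inputs), torch.LongTensor(targets)

class NNLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.C = nn.Embedding(V, m)               # word embedding table
        self.H = nn.Linear(n_step * m, n_hidden)  # hidden projection
        self.U = nn.Linear(n_hidden, V)           # output layer over vocab

    def forward(self, x):
        x = self.C(x).view(-1, n_step * m)        # concatenate context embeddings
        return self.U(torch.tanh(self.H(x)))      # logits over the vocabulary

model = NNLM()
optimizer = optim.Adam(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()
inputs, targets = make_batch()

for epoch in range(500):
    optimizer.zero_grad()
    loss = criterion(model(inputs), targets)
    loss.backward()
    optimizer.step()

pred = model(inputs).argmax(dim=1)
print([idx2word[i.item()] for i in pred])
```

On this tiny corpus the model simply memorizes the three context-to-target pairs, which is enough to show the data flow: embed, concatenate, tanh hidden layer, softmax over the vocabulary.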
| Model | Example | Framework |
| --- | --- | --- |
| NNLM | Predict Next Word | PyTorch, TensorFlow |
| Word2Vec(Softmax) | Embedding Words and Show Graph | PyTorch, TensorFlow |
| TextCNN | Sentence Classification | PyTorch, TensorFlow |
| TextRNN | Predict Next Step | PyTorch, TensorFlow |
| TextLSTM | Autocomplete | PyTorch, TensorFlow |
| Bi-LSTM | Predict Next Word in Long Sentence | PyTorch, TensorFlow |
| Seq2Seq | Change Word | PyTorch, TensorFlow |
| Seq2Seq with Attention | Translate | PyTorch, TensorFlow |
| Bi-LSTM with Attention | Binary Sentiment Classification | PyTorch, TensorFlow |
| Transformer | Translate | PyTorch |
| Greedy Decoder Transformer | Translate | PyTorch |
| BERT | How to Train | PyTorch |
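The Word2Vec(Softmax) row follows the same compact pattern. Below is a hedged sketch of skip-gram with a full softmax in PyTorch; the corpus, window size, and dimensions are made up for illustration and are not taken from the repository.

```python
import torch
import torch.nn as nn
import torch.optim as optim

torch.manual_seed(0)

sentences = ["apple banana fruit", "banana orange fruit",
             "dog cat animal", "cat monkey animal"]
word_list = sorted(set(" ".join(sentences).split()))
word2idx = {w: i for i, w in enumerate(word_list)}
V, dim = len(word_list), 2   # 2-D embeddings so they can be plotted directly

# Build (center, context) skip-gram pairs with a window size of 1.
pairs = []
for sen in sentences:
    ws = sen.split()
    for i, w in enumerate(ws):
        for j in (i - 1, i + 1):
            if 0 <= j < len(ws):
                pairs.append((word2idx[w], word2idx[ws[j]]))

class SkipGram(nn.Module):
    def __init__(self):
        super().__init__()
        self.W = nn.Embedding(V, dim)            # center-word embeddings
        self.WT = nn.Linear(dim, V, bias=False)  # projection back to vocab

    def forward(self, x):
        return self.WT(self.W(x))                # logits over context words

model = SkipGram()
opt = optim.Adam(model.parameters(), lr=0.05)
loss_fn = nn.CrossEntropyLoss()
centers = torch.LongTensor([c for c, _ in pairs])
contexts = torch.LongTensor([c for _, c in pairs])

loss0 = loss_fn(model(centers), contexts).item()  # loss before training
for _ in range(300):
    opt.zero_grad()
    loss = loss_fn(model(centers), contexts)
    loss.backward()
    opt.step()
print(f"loss: {loss0:.3f} -> {loss.item():.3f}")
```

After training, `model.W.weight` holds one 2-D vector per word, which is what the "Show Graph" examples scatter-plot to visualize that co-occurring words cluster together.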
- Python 3.5+
- TensorFlow 1.12.0+
- PyTorch 0.4.1+
- Add new models
- Add Keras versions
- Add more English annotations
- Enrich training examples
- Add larger datasets to measure model effectiveness