If you are interested in further enhancements and investigations, watch the follow-up repository:
https://github.com/Samurais/Neural_Conversation_Models
This repository is aligned with Part 3: Bot Model.
Train and serve QA Model with TensorFlow
Tested with TensorFlow 0.11.0rc2 and Python 3.5.
Install the NVIDIA drivers, cuDNN, Python, and TensorFlow on Ubuntu 16.04.
Inspired by and inherited from DeepQA.
pip install -r requirements.txt
export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow-0.11.0rc2-cp35-cp35m-linux_x86_64.whl
pip install --upgrade $TF_BINARY_URL
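After installing, you can sanity-check that the GPU build of TensorFlow imports and reports the expected version. The short snippet below is only a check, not part of the repository:

# Sanity check: TensorFlow should import and, with CUDA/cuDNN configured,
# creating a session should log GPU device creation.
import tensorflow as tf
print(tf.__version__)                                   # expect 0.11.0rc2
sess = tf.Session()
print(sess.run(tf.constant('TensorFlow is working')))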
Process the data: build the vocabulary, word embeddings, conversation pairs, etc.
cp config.sample.ini config.ini
python deepqa2/dataset/preprocesser.py
Sample corpus: the Cornell Movie-Dialogs Corpus, http://www.cs.cornell.edu/~cristian/Cornell_Movie-Dialogs_Corpus.html
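Conceptually, pre-processing turns raw dialog lines into (question, answer) token-ID sequences over a shared vocabulary. The sketch below only illustrates that idea; the helper names build_vocab and encode are hypothetical, and the actual logic lives in deepqa2/dataset/preprocesser.py, driven by config.ini.

# Illustrative sketch only: simplified vocabulary building over tokenized Q/A pairs.
from collections import Counter

def build_vocab(pairs, max_size=20000):
    counts = Counter(tok for q, a in pairs for tok in q + a)
    words = ['<pad>', '<go>', '<eos>', '<unk>'] + [w for w, _ in counts.most_common(max_size)]
    return {w: i for i, w in enumerate(words)}

def encode(tokens, vocab):
    return [vocab.get(t, vocab['<unk>']) for t in tokens]

pairs = [(['hi', 'there'], ['hello'])]    # tokenized (question, answer) pairs
vocab = build_vocab(pairs)
dataset = [(encode(q, vocab), encode(a, vocab) + [vocab['<eos>']]) for q, a in pairs]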
Train the language model with Seq2seq.
cp config.sample.ini config.ini # modify keys
python deepqa2/train.py
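train.py builds the Seq2seq graph from the keys in config.ini. For orientation, here is a minimal sketch of an embedding Seq2seq graph with the pre-1.0 tf.nn.seq2seq API; the layer sizes and placeholder names are assumptions for illustration, not the repository's actual configuration.

# Minimal Seq2seq sketch using the TensorFlow 0.11-era API (tf.nn.seq2seq).
# All sizes are illustrative assumptions.
import tensorflow as tf

vocab_size, embedding_size, max_len = 20000, 64, 10
cell = tf.nn.rnn_cell.BasicLSTMCell(256)

# Encoder/decoder inputs are lists of [batch_size] int32 token tensors, one per time step.
encoder_inputs = [tf.placeholder(tf.int32, [None], name='enc%d' % i) for i in range(max_len)]
decoder_inputs = [tf.placeholder(tf.int32, [None], name='dec%d' % i) for i in range(max_len)]

outputs, _ = tf.nn.seq2seq.embedding_rnn_seq2seq(
    encoder_inputs, decoder_inputs, cell,
    num_encoder_symbols=vocab_size, num_decoder_symbols=vocab_size,
    embedding_size=embedding_size, feed_previous=False)
# outputs holds one [batch_size, vocab_size] logits tensor per decoder step;
# training feeds these into a sequence loss.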
Provide a REST API to access the language model.
cd DeepQA2/save/deeplearning.cobra.vulcan.20170127.175256/deepqa2/serve
cp db.sample.sqlite3 db.sqlite3
python manage.py runserver 0.0.0.0:8000
Access the service via the REST API
POST /api/v1/question HTTP/1.1
Host: 127.0.0.1:8000
Content-Type: application/json
Authorization: Basic YWRtaW46cGFzc3dvcmQxMjM=
Cache-Control: no-cache
{"message": "good to know"}
Response:
{
"rc": 0,
"msg": "hello"
}
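The same call can be made from Python with the requests library; the Authorization header above corresponds to the credentials admin / password123, assumed here for illustration.

# Small client example against the local dev server started above.
import requests

resp = requests.post(
    'http://127.0.0.1:8000/api/v1/question',
    json={'message': 'good to know'},
    auth=('admin', 'password123'))
print(resp.status_code)
print(resp.json())    # e.g. {"rc": 0, "msg": "hello"}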
docker pull samurais/deepqa2:latest
cd DeepQA2
./scripts/train_with_docker.sh