Chikka API integration built on a queue-based workflow. It runs on Python, Flask, Nginx, Gunicorn, RabbitMQ, and Celery, with MongoDB as the database (it would be Redis if RAM weren't so expensive =( ). The test server runs on Vagrant, and the workers (Gunicorn, Celery workers) are daemonized via Supervisor.
- create an EC2 instance
- in the security group, allow HTTP (port 80)
- copy the installation script (install.sh) to your server
- open the install.sh script
- change the environment variables to your corresponding Chikka credentials
- from the home directory, run: source install.sh
- re-login
- restart nginx
- restart supervisor
- chill...
- from the home directory, run source runsms.sh to export the environment variables and activate the virtualenv wrapper
- run the celery workers in the foreground or daemonize them
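The credentials exported by install.sh can then be picked up inside the app. A minimal sketch of that (the variable names CHIKKA_CLIENT_ID, CHIKKA_SECRET_KEY, and CHIKKA_SHORTCODE are illustrative assumptions; match them to whatever install.sh actually exports):

```python
import os

def load_chikka_config():
    # Read Chikka credentials from environment variables exported by
    # install.sh / runsms.sh. The variable names here are assumptions
    # for illustration, not necessarily the ones the scripts use.
    return {
        "client_id": os.environ["CHIKKA_CLIENT_ID"],
        "secret_key": os.environ["CHIKKA_SECRET_KEY"],
        "shortcode": os.environ["CHIKKA_SHORTCODE"],
    }
```

Reading credentials from the environment (rather than hard-coding them) is what lets the same code run on the Vagrant test box and on EC2 with only install.sh changing.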
text "bord on" to your shortcode/access number
workers:
celery -A routing.post.notification worker -Q notification --loglevel=info -n notification1.worker.%h
celery -A routing.post.message worker -Q message --loglevel=info -n message1.worker.%h
celery -A routing.post.message worker -Q message --loglevel=info -n message2.worker.%h
celery -A routing.post.inbound.keywords.bord.process worker -Q bord --loglevel=info -n bord1.worker.%h
celery -A slaves.outbound worker -Q outbound --loglevel=info -n outbound.worker.%h
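The -Q flags above have to line up with the routing declared in celeryconfig.py. A hedged sketch of that mapping as a plain dict, plus a helper that resolves a task name to its queue (the patterns mirror the worker commands above; the real celeryconfig.py may differ):

```python
# Illustrative celeryconfig-style routing: each task path is pinned to the
# queue its worker consumes with -Q. This is a sketch, not the repo's file.
task_routes = {
    "routing.post.notification.*": {"queue": "notification"},
    "routing.post.message.*": {"queue": "message"},
    "routing.post.inbound.keywords.bord.process.*": {"queue": "bord"},
    "slaves.outbound.*": {"queue": "outbound"},
}

def queue_for(task_name):
    # Resolve a task name by longest matching wildcard prefix.
    for pattern, route in sorted(task_routes.items(), key=lambda kv: -len(kv[0])):
        prefix = pattern.rstrip("*").rstrip(".")
        if task_name == prefix or task_name.startswith(prefix + "."):
            return route["queue"]
    return "celery"  # Celery's default queue for unrouted tasks
```

Note that the message queue gets two workers (message1, message2) in the commands above, since inbound message traffic is the heaviest path.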
.
sms/
+-- wsgi.py (gunicorn container)
+-- api.py (entrypoint once nginx relays the request; decides where it should be routed)
+-- celeryconfig.py (configurations as well as routings for workers)
+-- mongo.py (mongo connection handler)
+-- slaves/
| +-- message.py (resource handler for chikka's message inbound transaction)
| +-- notification.py (resource handler for chikka's delivery notification)
| +-- reply.py (worker for handling message replies for inbound messages)
| +-- broadcast.py (todo for message broadcasts)
| +-- keywords/
| | +-- project1
| | +-- project2
| | +-- projectN
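The routing decision api.py makes can be sketched as a pure function over the POSTed form. Chikka callbacks carry a message_type field, with "incoming" meaning an inbound message and "outgoing" a delivery notification; treat those values as assumptions about the Chikka API rather than something taken from this repo:

```python
def pick_queue(form):
    # Decide which worker queue a relayed Chikka callback belongs to.
    # "incoming"/"outgoing" are assumed Chikka message_type values.
    message_type = form.get("message_type", "")
    if message_type == "incoming":
        return "message"       # resource handler: slaves/message.py
    if message_type == "outgoing":
        return "notification"  # resource handler: slaves/notification.py
    return None                # unknown payload; api.py would reject it
```

Keeping the decision this small is what keeps api.py a thin entrypoint: everything slow (Mongo writes, replies via slaves/reply.py) happens in the workers behind RabbitMQ.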
This software is licensed under the GNU General Public License v3.