- Sensor data transmitted via MQTT
- Logstash consumes MQTT input (augments events with site/sensor data such as geolocation)
- Elasticsearch data lake
- [optional] Serverless API, S3-hosted web interface
- [optional] Kafka or Kinesis for forwarding alerts or further event processing
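For a concrete picture of step 1, a sensor event on the wire might look like
the following (the topic name and JSON fields are illustrative assumptions,
not the project's actual schema):

    # Publish a sample reading to the broker; topic and payload are hypothetical.
    mosquitto_pub -h localhost -p 1883 -t 'sensors/site1/temperature' \
      -m '{"sensor_id": "temp-01", "value": 21.4, "unit": "C"}'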
- Sensors communicate via WiFi or LPWAN
- On-site concentrator
- Site concentrator relays to MQTT (or sensors may publish directly)
- Mosquitto in Docker on an EC2 instance
- Logstash in Docker on an EC2 instance
- Elasticsearch hosted by cloud.elastic.co or on AWS
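As a rough sketch of the broker piece, Mosquitto can be run by hand with the
official image (in this repo the compose files below handle this for you):

    # Run Mosquitto in Docker, exposing the standard MQTT port 1883.
    docker run -d --name broker -p 1883:1883 eclipse-mosquitto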
The docker-compose-live.yml file sets up components 2 and 3 above.
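To bring that stack up and watch its logs:

    docker-compose -f docker-compose-live.yml up -d
    docker-compose -f docker-compose-live.yml logs -f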
- Dev sensors transmit directly to MQTT (skipping LPWAN)
- Mosquitto in Docker
- Logstash in Docker
- Elasticsearch in Docker
- Kibana in Docker
The docker-compose.yml file sets up components 2, 3, 4, and 5 above.
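If you want to start the dev stack by hand rather than via "make setup" below:

    docker-compose up -d
    docker-compose ps   # expect broker, logstash, elasticsearch, and kibana to be up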
Search for 'example.com' and replace all instances with your own host and domain names.
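One way to do the replacement in bulk (GNU sed shown; on macOS/BSD use
sed -i '' instead, and 'your-domain.example' is a placeholder):

    grep -rl 'example.com' --exclude-dir=.git . \
      | xargs sed -i 's/example\.com/your-domain.example/g'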
Just run "make setup".
The "make setup" target performs the following steps:
- Start the data components
docker-compose up -d elasticsearch kibana broker
docker-compose logs -f
- Wait until Elasticsearch is running
Look for "kibana | ...Status changed from yellow to green - Ready"
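If you would rather script the wait than watch the logs, polling the cluster
health endpoint works too (yellow is normal for a single dev node):

    # Block until Elasticsearch answers with green or yellow status.
    until curl -s http://localhost:9200/_cluster/health | grep -qE '"status":"(green|yellow)"' ; do
      sleep 5
    done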
- Import configuration data
for s in scripts/*.sh ; do $s localhost ; done
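Each script takes the Elasticsearch host as its first argument. As a purely
hypothetical sketch of the shape of such a script (the real scripts/*.sh may
differ), one might upload an index template like so:

    #!/bin/sh
    # Hypothetical import script: PUT an index template to the host in $1.
    ESHOST="${1:-localhost}"
    curl -s -XPUT "http://${ESHOST}:9200/_template/data" \
      -H 'Content-Type: application/json' \
      -d @templates/data-template.json   # file path is an assumption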
- Start the importer (which needs the above config)
docker-compose up -d logstash
- Navigate to localhost:5601 and configure Kibana for the index pattern "data-*"
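That last step can also be scripted: recent Kibana versions expose a
saved-objects API (version-dependent, and the @timestamp time field is an
assumption):

    curl -s -XPOST 'http://localhost:5601/api/saved_objects/index-pattern' \
      -H 'kbn-xsrf: true' -H 'Content-Type: application/json' \
      -d '{"attributes": {"title": "data-*", "timeFieldName": "@timestamp"}}'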
Once the configuration import is done in your Elasticsearch container,
you can start and stop the whole enchilada with "make start"
and "make stop".
(the password 'changeme' is a placeholder for your actual password)
('eshost' is a placeholder for the hostname of your ES server)
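A hypothetical config.inc might therefore look like the following (the
variable names are guesses at what the scripts expect, not the repo's
actual contents):

    # Hypothetical config.inc; adjust names and values to match your scripts.
    ESHOST=eshost        # hostname of the ES server
    ESPASS=changeme      # replace with your actual password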
. config.inc && for script in scripts/*.sh ; do $script ; done