ELK stands for Elasticsearch, Logstash and Kibana and is promoted by Elastic as a "devops" logging solution.
This implementation of an ELK stack is designed to run in an AWS EC2 VPC and is secured using Google OAuth 2.0. It consists of one or more instances behind an Elastic Load Balancer (ELB) running the following components:
- Kibana 4.x
- Elasticsearch 2.x
- Logstash 2.x indexer
- Node.js application proxy
Only the Logstash indexer and the application proxy ports are exposed on the ELB and all requests to the application proxy for Kibana or Elasticsearch are authenticated using Google OAuth.
Elasticsearch is configured to listen only on the local loopback address. Dynamic scripting has been disabled by default since Elasticsearch 1.4.3 to address security concerns around remote code execution.
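A minimal sketch of the relevant `elasticsearch.yml` settings (key names as per Elasticsearch 2.x; treat the exact keys as an assumption and check them against the config the templates actually install):

```yaml
# Bind only to the loopback interface so Elasticsearch is not
# reachable from outside the instance.
network.host: 127.0.0.1

# Keep dynamic scripting disabled to mitigate remote code
# execution risks (inline and indexed scripts).
script.inline: false
script.indexed: false
```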
The ELB requires a health check to verify that instances behind the load balancer are healthy. To support this, the Elasticsearch root URL is exposed, unauthenticated, at the path /__es.
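Once the stack is up, the health-check endpoint can also be verified manually (the ELB hostname below is a placeholder):

```shell
# Unauthenticated health check: should return the standard
# Elasticsearch root JSON with an HTTP 200 status.
curl -i http://INSERT-ELB-DNS-NAME-HERE/__es
```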
Shipping logs to the ELK stack is left as an exercise for the user; however, example configurations are included in the repo under the /examples directory.
A very simple configuration that reads from stdin, tails a log file, echoes to stdout, and forwards to the ELK stack is below:

```
$ logstash --debug -e '
input { stdin { } file { path => "/var/log/system.log" } }
output { stdout { } tcp { host => "INSERT-ELB-DNS-NAME-HERE" port => 6379 codec => json_lines } }'
```
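The `tcp` output uses the `json_lines` codec, i.e. one JSON object per newline-terminated line. Assuming the indexer's TCP input on port 6379 also expects `json_lines`, a single test event can be sent with netcat (placeholder hostname):

```shell
# Send one newline-terminated JSON event to the Logstash TCP input.
echo '{"message": "hello from nc", "host": "test"}' \
  | nc INSERT-ELB-DNS-NAME-HERE 6379
```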
This ELK stack assumes your AWS VPC is configured as per AWS guidelines, which is to have a public and a private subnet in each availability zone for the region. See the Your VPC and Subnets guide for more information.
The easiest way to ensure you have the required VPC setup would be to delete your existing VPC, if possible, and then use the Start VPC Wizard, which will create a correctly configured VPC for you.
- Go to the Google Developer Console and create a new client ID for a web application. You can leave the URLs as they are and update them once the ELK stack has been created. Take note of the Client ID and Client Secret as you will need them in the next step.
- Enable the "Google+ API" for your new client. This is the only Google API needed.
- Launch the ELK stack using the AWS console or the `aws` command-line tool and enter the required parameters. Note that some parameters, such as providing a Route 53 Hosted Zone Name to create a DNS alias for the public ELB, are optional.
- Once the ELK stack has launched, revisit the Google Developer Console and update the URLs, copying the output for `GoogleOAuthRedirectURL` to AUTHORIZED REDIRECT URIS and the same URL, without the path, to AUTHORIZED JAVASCRIPT ORIGINS.
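Launching from the command line might look like the following sketch; the template filename, stack name and parameter keys here are assumptions for illustration — use the parameter names from the actual template:

```shell
# Hypothetical create-stack invocation; parameter keys are illustrative.
aws cloudformation create-stack \
  --stack-name elk-stack \
  --template-body file://elk-stack.template \
  --capabilities CAPABILITY_IAM \
  --parameters \
    ParameterKey=GoogleOAuthClientId,ParameterValue=INSERT-CLIENT-ID \
    ParameterKey=GoogleOAuthClientSecret,ParameterValue=INSERT-CLIENT-SECRET
```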
The following elasticsearch plugins are installed:
- AWS Cloud plugin - uses AWS API for the unicast discovery mechanism
- elasticsearch-head - web frontend for elasticsearch cluster
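For reference, these plugins would be installed on an Elasticsearch 2.x node roughly as follows (the stack's templates install them automatically, so this is illustrative only):

```shell
# Elasticsearch 2.x plugin manager, run from the Elasticsearch home directory.
bin/plugin install cloud-aws                 # AWS Cloud plugin for EC2 discovery
bin/plugin install mobz/elasticsearch-head   # web frontend for the cluster
```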
The "head" plugin web page is available at proxied (i.e. authenticated) endpoints, depending on how the ELK stack is deployed:
- Head ->
http://<ELB>/__es/_plugin/head/
This ELK stack CloudFormation template takes many parameters; explanations for each are shown when launching the stack. Note that Route 53 DNS, EBS volumes and S3 snapshots are optional.
Logstash grok patterns can be tested online at https://grokdebug.herokuapp.com/
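For example, a grok filter along these lines (an illustrative pattern, not one shipped with this stack) extracts fields from a syslog-style line and can be checked in the debugger before deploying:

```
filter {
  grok {
    # Parse a syslog-style line into timestamp, host, program and message.
    match => { "message" => "%{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST:host} %{DATA:program}: %{GREEDYDATA:msg}" }
  }
}
```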
The Kibana dashboards are configured via the GUI.
Guardian ELK Stack Cloudformation Templates and Logcabin Proxy
Copyright 2014-2016 Guardian News & Media
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.