- You have a Kubernetes environment
- You have a Prometheus server deployed in that Kubernetes environment (if you do not have one, see the caveat below)
Caveat: If you do not have a Prometheus server, you can use the Prometheus Redis exporter that gets deployed with the sample application; it will get picked up by the Metricbeat autodiscover feature.
You can use Elastic Cloud (http://cloud.elastic.co) or a local deployment. Whichever you choose, https://elastic.co/start will get you started.
If this is your first experience with the Elastic Stack, I would recommend Elastic Cloud. Don't worry, you do not need a credit card.
Make sure that you take note of the CLOUD ID and Elastic Password if you use Elastic Cloud or Elastic Cloud Enterprise.
Create a cluster-level role binding so that you can manipulate the system-level namespace (this is where DaemonSets go):
kubectl create clusterrolebinding cluster-admin-binding \
--clusterrole=cluster-admin --user=<your email associated with the Cloud provider account>
Either clone the entire Elastic examples repo or use the wget commands in download.txt (wget is required; if you do not have it, just clone the repo):
mkdir scraping-prometheus-k8s-with-metricbeat
cd scraping-prometheus-k8s-with-metricbeat
wget https://raw.githubusercontent.com/elastic/examples/master/scraping-prometheus-k8s-with-metricbeat/download.txt
sh download.txt
OR
git clone https://github.com/elastic/examples.git
cd examples/scraping-prometheus-k8s-with-metricbeat
Set these files with the values from your http://cloud.elastic.co deployment.
Note: Follow the instructions in the files carefully; the k8s secret creation does not remove trailing whitespace, as secrets should be copied exactly as you provide them.
vi ELASTIC_PASSWORD
vi CLOUD_ID
and create a secret in the Kubernetes system-level namespace:
kubectl create secret generic dynamic-logging \
--from-file=./ELASTIC_PASSWORD --from-file=./CLOUD_ID \
--namespace=kube-system
Create the cluster role binding for Metricbeat:
kubectl create -f metricbeat-clusterrolebinding.yaml
Check whether kube-state-metrics is already running:
kubectl get pods --namespace=kube-system | grep kube-state
and create it if needed (by default it will not be there):
git clone https://github.com/kubernetes/kube-state-metrics.git kube-state-metrics
kubectl create -f kube-state-metrics/kubernetes
Verify that kube-state-metrics is now running:
kubectl get pods --namespace=kube-system | grep kube-state
Note: This is mostly the default Guestbook example from https://github.com/kubernetes/examples/blob/master/guestbook/all-in-one/guestbook-all-in-one.yaml
Changes:
- added annotations so that Prometheus and Metricbeat would autodiscover the Redis pods
- added an ingress that preserves source IPs
- added ConfigMaps for the Apache2 and Mod-Status configs to block the /server-status endpoint from outside the internal network
- added a redis.conf to set the slowlog time criteria
Deploy the Guestbook application:
kubectl create -f guestbook.yaml
Let's look at a couple of things in guestbook.yaml:
- Annotations on the Redis pods. These will be used by both Prometheus and Metricbeat to autodiscover the Redis pods.
- A Prometheus exporter for Redis running as a sidecar container. Both are shown in the sketch below:
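Here is a minimal sketch of what those pieces look like in the Redis deployment, assuming the commonly used oliver006/redis_exporter image on its default port 9121; names and labels are illustrative, and guestbook.yaml in this example has the full manifest:

```yaml
# Illustrative sketch only -- see guestbook.yaml in this example for the real manifest.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  selector:
    matchLabels:
      app: redis
      role: master
  template:
    metadata:
      labels:
        app: redis
        role: master
      annotations:
        prometheus.io/scrape: "true"   # both Prometheus and Metricbeat autodiscover key off this
        prometheus.io/port: "9121"     # the port the Redis exporter sidecar listens on
    spec:
      containers:
        - name: master
          image: redis
          ports:
            - containerPort: 6379
        - name: redis-exporter         # sidecar that exposes Redis stats as Prometheus metrics
          image: oliver006/redis_exporter
          ports:
            - containerPort: 9121
```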
Find the external IP address of the frontend service:
kubectl get service frontend -w
Once the external IP address is assigned, you can type CTRL-C to stop watching for changes and get the command prompt back (the -w means "watch for changes").
Normally deploying Metricbeat would be a single command, but the goal of this example is to show multiple ways of pulling metrics from Prometheus, so we will do things step by step.
In this example we will pull:
- self-monitoring metrics from the Prometheus server (using the /metrics endpoint)
- all of the metrics that Prometheus collects from the various systems being monitored (using the /federate endpoint)
- kube-state-metrics information including events and state of nodes, deployments, etc.
kubectl create -f metricbeat-kube-state-and-prometheus-server.yaml
Here is the YAML to connect Metricbeat to the Prometheus server /metrics endpoint (self-monitoring):
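A minimal sketch of that prometheus module config; the service name and namespace below are placeholders, and the exact values are in metricbeat-kube-state-and-prometheus-server.yaml:

```yaml
metricbeat.modules:
  # Self-monitoring: scrape the Prometheus server's own /metrics endpoint
  - module: prometheus
    period: 10s
    # placeholder service name/namespace -- point this at your Prometheus server
    hosts: ["prometheus-server.default.svc.cluster.local:9090"]
    metrics_path: /metrics
```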
Here is the YAML to connect Metricbeat to the Prometheus server /federate endpoint:
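And a sketch of the /federate version, assuming the same placeholder service name; the match[] query asks Prometheus for every series it has collected:

```yaml
metricbeat.modules:
  # Federation: pull everything Prometheus has scraped from the systems it monitors
  - module: prometheus
    period: 10s
    hosts: ["prometheus-server.default.svc.cluster.local:9090"]
    metrics_path: '/federate'
    query:
      'match[]': '{__name__!=""}'
```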
We will look specifically at the kubernetes event metricset when we build a visualization. The event metricset exposes information about scaling deployments (among other things) and the reason for the scaling.
While that deploys, look at the snippet below. You can see that Metricbeat will connect to port 8080 on the kube-state-metrics pod and collect events and state information about nodes, deployments, etc.
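A sketch of that kubernetes module section, assuming kube-state-metrics is reachable in-cluster as kube-state-metrics:8080; the deployed metricbeat-kube-state-and-prometheus-server.yaml has the full config:

```yaml
metricbeat.modules:
  # State and event information from kube-state-metrics on port 8080
  - module: kubernetes
    period: 10s
    hosts: ["kube-state-metrics:8080"]
    metricsets:
      - state_node
      - state_deployment
      - state_replicaset
      - state_pod
      - state_container
      - event
```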
Up above is a screenshot of the YAML to deploy a sidecar that exports Redis metrics to Prometheus. Metricbeat can also pull metrics from Prometheus exporters. Deploy a Metricbeat DaemonSet to autodiscover and collect these metrics.
Note: Normally the Metricbeat DaemonSet would autodiscover and collect all of the metrics about the k8s environment and the apps running there; this config is simplified to show just one example.
kubectl create -f metricbeat-prometheus-auto-discover.yaml
Let's look at how autodiscover is configured. Earlier we looked at guestbook.yaml and saw that annotations were added to the Redis pods. One of those annotations set prometheus.io/scrape to true, and the other set the port for the Redis metrics to 9121. In the Metricbeat DaemonSet config we are configuring autodiscover to look for pods with the scrape and port annotations, which is exactly what Prometheus does.
This autodiscover config is more abstract: rather than hard-coding port 9121, it substitutes the value from the annotation provided by the k8s API, so a single autodiscover config can discover all exporters, whether they are for Redis or another technology (the port numbers for exporters depend on the technology and are published in the Prometheus wiki).
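A sketch of that autodiscover section follows; treat the annotation key syntax as an approximation and use metricbeat-prometheus-auto-discover.yaml for the exact form:

```yaml
metricbeat.autodiscover:
  providers:
    - type: kubernetes
      include_annotations: ["prometheus.io.scrape"]
      templates:
        # Only pods annotated prometheus.io/scrape: "true" get a config rendered for them
        - condition:
            contains:
              kubernetes.annotations.prometheus.io/scrape: "true"
          config:
            - module: prometheus
              metricsets: ["collector"]
              # The port comes from the prometheus.io/port annotation, so this one
              # template covers Redis and any other exporter
              hosts: "${data.host}:${data.kubernetes.annotations.prometheus.io/port}"
```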
If you are not familiar with the Prometheus autodiscover configuration, here is part of an example. Notice that it uses the same annotations:
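This is abridged from the widely published Prometheus Kubernetes scrape config; the relabeling keeps only annotated pods and rewrites the scrape address to the port from the annotation:

```yaml
scrape_configs:
  - job_name: 'kubernetes-pods'
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # Keep only pods annotated prometheus.io/scrape: "true"
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      # Rewrite the scrape address to use the port from prometheus.io/port
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__
```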
Please either see the video from the blog "Elasticsearch Observability: Embracing Prometheus and OpenMetrics standards for metrics", or follow along below for step-by-step instructions to build a visualization with the Redis metrics collected through Prometheus and the kube-state-metrics collected directly by Metricbeat. Substitute your own metrics if you did not deploy the Guestbook app.
- Open Kibana
- Open Discover
- Start typing the name of the metric: instantaneous_ops
- When Kibana offers you the list, choose prometheus.metrics.redis_instantaneous_ops_per_sec
- Add columns to the Discover view for the metric name and the pod name. I always do this when I am going to create a visualization so that I have all of the Elasticsearch fields that I will use in the visualization handy.
- This is what the view will look like; the full records are available by expanding them with the >, and the columns make it easier to scan through the data visually. Copy the name of the metric (prometheus.metrics.redis_instantaneous_ops_per_sec) to paste into the visualization builder.
- Set the Aggregation to the Average of prometheus.metrics.redis_instantaneous_ops_per_sec (paste it in, or type instantaneous and select the metric).
- The above shows the average of the metric across all Redis pods; to show the individual pods, group by Terms on kubernetes.pod.name.
- Name the time series, and if you want to change the number format you can type a format in. There is a link to the format string details just under the format box. If there are pods in the list that you do not want, you can filter using the k8s metadata; in the screenshot we are filtering by the k8s label app.
- At this point the time series is done, but let's kick it up a notch. Why not add some event data as an annotation? This could be a specific log message that might be a clue to a performance change. In this example we will use a message that tells us the Redis deployment has scaled. You might choose to use a crash loop backoff, or a log message that indicates a config change. Scale a deployment to make sure you have some of the relevant events (kubectl scale --replicas=1 deployment/redis-slave).
- Switch browser tabs to the Visual Builder and click on the Annotations tab:
- Open Discover in a new tab, and filter on kubernetes.event.reason:
- Open a record and add the fields kubernetes.event.message, kubernetes.event.reason, and kubernetes.event.involved_object_name to the tabular view.
- If you do not have any records with a kubernetes.event.reason of ScalingReplicaSet, you can scale the redis-slave deployment (kubectl scale --replicas=1 deployment/redis-slave) and then click on refresh in Discover to see them.
- Switch browser tabs back to the Visual Builder and click on the Annotations tab:
- Add a data source for the annotation. This can be any index pattern in your system. In the example we will be using events from k8s, and we grab those from kube-state-metrics via Metricbeat.
- Set the annotation up as shown. All of the details come from the Discover window which is open in another tab. Numbers 3 - 6 below deserve a little detail: