Uses Kibana API to directly import dashboard
Using the Kibana API, we can directly import a dashboard and all of its
dependent objects, including index patterns and visualizations. This
saves the user several manual steps, in particular copying around the
UUID of the index pattern.
joshdevins committed Jun 23, 2020
1 parent 885d06b commit d3699c2
Showing 8 changed files with 765 additions and 63 deletions.
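For context, the new `bin/kibana` script in this commit drives Kibana's dashboard import API directly. The snippet below is a minimal sketch of that kind of request, not code taken from the commit; the Kibana URL and the `config/kibana/dashboard.json` path are illustrative placeholders.

```python
# Sketch only: the kind of call bin/kibana makes to import a dashboard.
# URL and payload path are placeholders, not values from this commit.
import json

import requests

with open("config/kibana/dashboard.json") as f:  # hypothetical pre-generated payload
    payload = json.load(f)

response = requests.post(
    "http://localhost:5601/api/kibana/dashboards/import",  # Kibana URL, not Elasticsearch
    json=payload,
    headers={"kbn-xsrf": "true"},  # Kibana requires this header on API writes
)
response.raise_for_status()
print("Dashboard and dependent objects imported.")
```

The only non-obvious requirement is the `kbn-xsrf` header, which Kibana expects on write requests made outside the browser.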
22 changes: 6 additions & 16 deletions Machine Learning/Online Search Relevance Metrics/README.md
@@ -6,11 +6,6 @@ For a high-level introduction, please see the accompanying blog post, [Exploring

![Kibana dashboard](https://user-images.githubusercontent.com/181622/85378369-c40ae380-b53a-11ea-9d0c-5a97d1c00d24.png)

**TODO**:

- Fix link to blog post once it's available
- Kibana import scripts to support Kibana API

## Contents

- [Online Search Relevance Metrics](#online-search-relevance-metrics)
@@ -84,23 +79,18 @@ Use the `-h` or `--help` arguments to explore more functionality and arguments.

### Kibana visualizations

**WIP**: This section is being renewed to support a single command to do all of this behind the scenes.

To recreate the visualisations in Kibana, you need to first make sure you have data in your Elasticsearch instance using the above `simulate` command.
To recreate the visualisations in Kibana, you need to first make sure you have data in your Elasticsearch instance using the above `simulate` command. Then you just need to run `kibana` to create the dashboard, index pattern and visualizations.

Once you have data in Kibana, [create an index pattern](https://www.elastic.co/guide/en/kibana/current/index-patterns.html) with the same name as the metrics index: `ecs-search-metrics_transform_queryid`. When creating the index pattern, use the `query_event.@timestamp` field as the timestamp field of the index pattern. Once the index pattern has been created, click on the link to the index pattern and find the index pattern ID in the URL of the page. It'll be the long UUID almost at the end of the URL, that looks something like this: `d84e0c50-8aec-11ea-aa75-e59eded2bd43`.
```bash
bin/kibana
```

With the Kibana saved objects template as input, a location for the saved object output file, and the index pattern ID, you can use the `bin/kibana` script to generate a valid set of Kibana visualizations linked to the correct index pattern. Here's an example invocation:
As with `simulate`, if you are running on Cloud, ensure that your credentials are set and the correct Kibana URL is used (don't use the Elasticsearch URL!). You can find your Kibana endpoint just below where you found the Elasticsearch endpoint, and your credentials should be the same as with Elasticsearch.

```bash
bin/kibana \
  --input config/kibana/saved_objects.template.ndjson \
  --output tmp/kibana.ndjson \
  d84e0c50-8aec-11ea-aa75-e59eded2bd43
bin/kibana --url https://elastic:<password>@<your-kibana-endpoint>:9243
```

Open up Kibana again and select the "Saved Objects" page from "Stack Management", and Import (top right corner). Drag the newly created `kibana.ndjson` file with the saved objects in it and drop it into the "Import saved objects" dialog, and hit "Import".

You're all set! Have a look at the Dashboard and Visualizations pages now and you should see a large set of ready-made visualizations. Make sure that your time range is set to the entire day of 15 Nov 2019 UTC.

## Implementation details
2 changes: 1 addition & 1 deletion Machine Learning/Online Search Relevance Metrics/bin/index
@@ -13,7 +13,7 @@ from elasticsearch import Elasticsearch, helpers

# project library
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
from metrics.resources import *
from metrics.resources import INDEX, Timer, load_json

DEFAULT_CHUNK_SIZE = 10000
DEFAULT_THREAD_COUNT = 4
33 changes: 21 additions & 12 deletions Machine Learning/Online Search Relevance Metrics/bin/kibana
@@ -1,28 +1,37 @@
#!venv/bin/python

"""
Creates a Kibana saved objects file from a template. This requires the user to
manually create an index pattern and fill in the saved object ID for the index
pattern. We expect the index and index pattern to be named
"ecs-search-metrics_transform_queryid".
Sets up a Kibana index pattern for metrics and imports a pre-generated dashboard
and dependent visualizations.
"""

import argparse
import os
import requests
import sys

from string import Template
# project library
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
from metrics.resources import load_config

DEFAULT_URL = 'http://localhost:5601'


def main():
    parser = argparse.ArgumentParser(prog='kibana')
    parser.add_argument('--input', required=True, help="the Kibana saved object input template file")
    parser.add_argument('--output', required=True, help="the Kibana saved object output file")
    parser.add_argument('id', help="ID of the metrics index pattern")
    parser.add_argument('--url', default=DEFAULT_URL,
                        help="A Kibana connection URL, e.g. http://user:secret@localhost:5601")
    args = parser.parse_args()

    with open(args.input, 'r') as fin:
        with open(args.output, 'w') as fout:
            for line in fin:
                fout.write(Template(line).substitute(index_pattern_id=args.id))
    with requests.Session() as s:
        payload = load_config('kibana', 'dashboard')
        s.headers['kbn-xsrf'] = 'true'
        r = s.post(f'{args.url}/api/kibana/dashboards/import', json=payload)
        if r.status_code == 200:
            print("Done. Go to Kibana and load the dashboard 'Search Metrics'.")
        else:
            print(f"Got {r.status_code} instead of a 200 response from Kibana. Here's the response:")
            print(r.text)


if __name__ == "__main__":
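The `payload` posted above comes from `load_config('kibana', 'dashboard')`, i.e. a pre-generated JSON document bundled with the project (not shown in this diff). As a rough, assumed illustration of what the dashboard import API accepts, the body lists the dashboard together with every dependent saved object under an `objects` key, along these lines:

```python
# Assumed shape of the pre-generated dashboard payload; IDs are illustrative only.
payload = {
    "objects": [
        {"id": "search-metrics-index-pattern", "type": "index-pattern",
         "attributes": {"title": "ecs-search-metrics_transform_queryid",
                        "timeFieldName": "query_event.@timestamp"}},
        {"id": "search-metrics-dashboard", "type": "dashboard",
         "attributes": {"title": "Search Metrics"}},
        # ...visualizations referenced by the dashboard...
    ],
}
```

Because the index pattern travels inside `objects`, the user no longer has to create it by hand or copy its UUID into a template.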
@@ -12,7 +12,7 @@ from elasticsearch import Elasticsearch

# project library
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
from metrics.resources import *
from metrics.resources import prepare

DEFAULT_URL = 'http://localhost:9200'

15 changes: 10 additions & 5 deletions Machine Learning/Online Search Relevance Metrics/bin/simulate
@@ -9,6 +9,7 @@ TODO: Interleave "business goal" events with click events. They need to be inter
"""

import argparse
import json
import os
import sys

@@ -17,7 +18,7 @@ from elasticsearch import Elasticsearch
# project library
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
from metrics import simulate
from metrics.resources import *
from metrics.resources import INDEX, TRANSFORM_NAMES, prepare, start_transforms

DEFAULT_NUM_DOCS = 10000
DEFAULT_NUM_USERS = 100
@@ -62,17 +63,21 @@ def command_elasticsearch(args):
def main():
    parser = argparse.ArgumentParser(prog='simulate')

    parser.add_argument('--num-documents', type=int, default=DEFAULT_NUM_DOCS, help="the number of documents in the corpus")
    parser.add_argument('--num-users', type=int, default=DEFAULT_NUM_USERS, help="the number of users to generate queries for")
    parser.add_argument('--max-queries', type=int, default=DEFAULT_MAX_QUERIES, help="the maximum number of queries per user")
    parser.add_argument('--num-documents', type=int, default=DEFAULT_NUM_DOCS,
                        help="the number of documents in the corpus")
    parser.add_argument('--num-users', type=int, default=DEFAULT_NUM_USERS,
                        help="the number of users to generate queries for")
    parser.add_argument('--max-queries', type=int, default=DEFAULT_MAX_QUERIES,
                        help="the maximum number of queries per user")

    subparsers = parser.add_subparsers()

    stdout_subparser = subparsers.add_parser('stdout', help="write events to stdout")
    stdout_subparser.set_defaults(func=command_stdout)

    es_subparser = subparsers.add_parser('elasticsearch', help="write events to an Elasticsearch instance")
    es_subparser.add_argument('--url', default=DEFAULT_URL, help="an Elasticsearch connection URL, e.g. http://user:secret@localhost:9200")
    es_subparser.add_argument('--url', default=DEFAULT_URL,
                              help="an Elasticsearch connection URL, e.g. http://user:secret@localhost:9200")
    es_subparser.set_defaults(func=command_elasticsearch)

    args = parser.parse_args()