Merge branch 'master' into reduce-config-option
daixiang0 authored May 31, 2019
2 parents d338732 + f7f09e2 commit 6060162
Showing 14 changed files with 161 additions and 80 deletions.
17 changes: 17 additions & 0 deletions cmd/loki/loki-local-config.yaml
@@ -31,6 +31,23 @@ storage_config:

limits_config:
enforce_metric_name: false
reject_old_samples: true
reject_old_samples_max_age: 168h

chunk_store_config:
max_look_back_period: 0

table_manager:
chunk_tables_provisioning:
inactive_read_throughput: 0
inactive_write_throughput: 0
provisioned_read_throughput: 0
provisioned_write_throughput: 0
index_tables_provisioning:
inactive_read_throughput: 0
inactive_write_throughput: 0
provisioned_read_throughput: 0
provisioned_write_throughput: 0
retention_deletes_enabled: false
retention_period: 0

35 changes: 33 additions & 2 deletions docs/operations.md
@@ -59,11 +59,14 @@ For more information about mixins, take a look at the [mixins project docs](http

## Retention/Deleting old data

A retention policy and API to delete ingested logs is still under development.
Retention in Loki is achieved by configuring the Table Manager. You need to set a retention period and enable deletes for retention, either in the yaml config as seen [here](https://github.com/grafana/loki/blob/39bbd733be4a0d430986d9513476a91334485e9f/production/ksonnet/loki/config.libsonnet#L128-L129) or with the `table-manager.retention-period` and `table-manager.retention-deletes-enabled` command line args. The retention period must be a duration string that can be parsed by [time.Duration](https://golang.org/pkg/time/#ParseDuration).
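The yaml equivalent of those two command line args can be sketched as follows (the `168h` value is only an example; any duration parseable by `time.Duration` works):

```yaml
table_manager:
  # Enable pruning of expired tables and set how long data is kept.
  retention_deletes_enabled: true
  retention_period: 168h  # e.g. one week
```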

For chunk retention when using S3 or GCS, you need to set the expiry policy on the bucket configured for storing chunks. For more details, see [this](https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html) for S3 and [this](https://cloud.google.com/storage/docs/managing-lifecycles) for GCS.

Currently we only support a global retention policy. A per-user retention policy and an API to delete ingested logs are still under development.
Feel free to add your use case to this [GitHub issue](https://github.com/grafana/loki/issues/162).

A design goal of Loki is that storing logs should be cheap, so a volume-based deletion API was deprioritized.
We realize, however, that time-based retention may be required for compliance.

Until this feature is released, if you must delete ingested logs you can delete old chunks in your object store.
Note that this only deletes the log content; the label index remains intact.
@@ -98,6 +98,33 @@ The chunks are stored under `/tmp/loki/chunks`.
Loki has support for Google Cloud storage.
Take a look at our [production setup](https://github.com/grafana/loki/blob/a422f394bb4660c98f7d692e16c3cc28747b7abd/production/ksonnet/loki/config.libsonnet#L55) for the relevant configuration fields.

### Cassandra

Loki can use Cassandra for index storage. Pull the **latest** Loki Docker image or build from the **latest** source code. Example config for using Cassandra:

```yaml
schema_config:
configs:
- from: 2018-04-15
store: cassandra
object_store: filesystem
schema: v9
index:
prefix: cassandra_table
period: 168h

storage_config:
cassandra:
username: cassandra
password: cassandra
addresses: 127.0.0.1
auth: true
keyspace: lokiindex

filesystem:
directory: /tmp/loki/chunks
```

### AWS S3 & DynamoDB

Example config for using S3 & DynamoDB:
@@ -158,3 +188,4 @@ create the table manually you cannot easily erase old data and your index just g
If you set your DynamoDB table manually, ensure you set the primary index key to `h`
(string) and use `r` (binary) as the sort key. Also set the "period" attribute in the yaml to zero.
Make sure to adjust your throughput based on your usage.

2 changes: 1 addition & 1 deletion docs/troubleshooting.md
@@ -83,5 +83,5 @@ We support [jaeger](https://www.jaegertracing.io/) to trace loki, just add env `
If you deploy with Helm, use the following command:

```bash
$ helm upgrade --install loki loki/loki --set "loki.jaegerAgentHost=YOUR_JAEGER_AGENT_HOST"
$ helm upgrade --install loki loki/loki --set "loki.tracing.jaegerAgentHost=YOUR_JAEGER_AGENT_HOST"
```
18 changes: 18 additions & 0 deletions pkg/distributor/distributor.go
@@ -5,6 +5,7 @@ import (
"flag"
"hash/fnv"
"sync/atomic"
"time"

cortex_client "github.com/cortexproject/cortex/pkg/ingester/client"
"github.com/cortexproject/cortex/pkg/ring"
@@ -21,6 +22,8 @@ import (
"github.com/grafana/loki/pkg/util"
)

const metricName = "logs"

var (
ingesterAppends = promauto.NewCounterVec(prometheus.CounterOpts{
Namespace: "loki",
@@ -130,6 +133,21 @@ func (d *Distributor) Push(ctx context.Context, req *logproto.PushRequest) (*log
continue
}

entries := make([]logproto.Entry, 0, len(stream.Entries))
for _, entry := range stream.Entries {
if err := d.overrides.ValidateSample(userID, metricName, cortex_client.Sample{
TimestampMs: entry.Timestamp.UnixNano() / int64(time.Millisecond),
}); err != nil {
validationErr = err
continue
}
entries = append(entries, entry)
}

if len(entries) == 0 {
continue
}
stream.Entries = entries
keys = append(keys, tokenFor(userID, stream.Labels))
streams = append(streams, streamTracker{
stream: stream,
13 changes: 5 additions & 8 deletions production/helm/README.md
@@ -105,20 +105,17 @@ tls:

## How to contribute

If you want to add any feature to helm chart, you can follow as below:
After adding your new feature to the appropriate chart, you can build and deploy it locally to test:

```bash
$ # do some changes to loki/promtail in the corresponding directory
$ make helm
$ helm upgrade --install loki ./loki-stack-*.tgz
```

After verify changes, need to bump chart version.
For example, if you update the loki chart, you need to bump the version as following:
After verifying your changes, you need to bump the chart version following [semantic versioning](https://semver.org) rules.
For example, if you update the loki chart, you need to bump the versions as follows:

```bash
$ # update version loki/Chart.yaml
$ # update version loki-stack/Chart.yaml
```
- Update version loki/Chart.yaml
- Update version loki-stack/Chart.yaml

You can use `make helm-debug` to test and print out all chart templates. To install Helm (Tiller) in your cluster, use `make helm-install`; to install the current build in your Kubernetes cluster, run `make helm-upgrade`.
2 changes: 1 addition & 1 deletion production/helm/loki-stack/Chart.yaml
@@ -1,5 +1,5 @@
name: loki-stack
version: 0.9.5
version: 0.10.2
appVersion: 0.0.1
kubeVersion: "^1.10.0-0"
description: "Loki: like Prometheus, but for logs."
2 changes: 1 addition & 1 deletion production/helm/loki/Chart.yaml
@@ -1,5 +1,5 @@
name: loki
version: 0.8.3
version: 0.9.1
appVersion: 0.0.1
kubeVersion: "^1.10.0-0"
description: "Loki: like Prometheus, but for logs."
22 changes: 0 additions & 22 deletions production/helm/loki/templates/pvc.yaml

This file was deleted.

19 changes: 19 additions & 0 deletions production/helm/loki/templates/service-headless.yaml
@@ -0,0 +1,19 @@
apiVersion: v1
kind: Service
metadata:
name: {{ template "loki.fullname" . }}-headless
labels:
app: {{ template "loki.name" . }}
chart: {{ template "loki.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
clusterIP: None
ports:
- port: {{ .Values.service.port }}
protocol: TCP
name: http-metrics
targetPort: http-metrics
selector:
app: {{ template "loki.name" . }}
release: {{ .Release.Name }}
@@ -1,5 +1,5 @@
apiVersion: apps/v1
kind: Deployment
kind: StatefulSet
metadata:
name: {{ template "loki.fullname" . }}
labels:
@@ -10,17 +10,15 @@ metadata:
annotations:
{{- toYaml .Values.annotations | nindent 4 }}
spec:
podManagementPolicy: {{ .Values.podManagementPolicy }}
replicas: {{ .Values.replicas }}
minReadySeconds: {{ .Values.minReadySeconds }}
selector:
matchLabels:
app: {{ template "loki.name" . }}
release: {{ .Release.Name }}
strategy:
type: {{ .Values.deploymentStrategy }}
{{- if ne .Values.deploymentStrategy "RollingUpdate" }}
rollingUpdate: null
{{- end }}
serviceName: {{ template "loki.fullname" . }}-headless
updateStrategy:
{{- toYaml .Values.updateStrategy | nindent 4 }}
template:
metadata:
labels:
@@ -29,7 +27,7 @@ spec:
release: {{ .Release.Name }}
{{- with .Values.podLabels }}
{{- toYaml . | nindent 8 }}
{{- end }}
{{- end }}
annotations:
checksum/config: {{ include (print $.Template.BasePath "/secret.yaml") . | sha256sum }}
{{- with .Values.podAnnotations }}
@@ -50,7 +48,7 @@ spec:
- "-config.file=/etc/loki/loki.yaml"
{{- range $key, $value := .Values.extraArgs }}
- "-{{ $key }}={{ $value }}"
{{- end }}
{{- end }}
volumeMounts:
- name: config
mountPath: /etc/loki
@@ -70,8 +68,10 @@ spec:
securityContext:
readOnlyRootFilesystem: true
env:
{{- if .Values.tracing.jaegerAgentHost }}
- name: JAEGER_AGENT_HOST
value: "{{ .Values.tracing.jaegerAgentHost }}"
{{- end }}
nodeSelector:
{{- toYaml .Values.nodeSelector | nindent 8 }}
affinity:
@@ -83,11 +83,25 @@ spec:
- name: config
secret:
secretName: {{ template "loki.fullname" . }}
{{- if not .Values.persistence.enabled }}
- name: storage
{{- if .Values.persistence.enabled }}
persistentVolumeClaim:
claimName: {{ .Values.persistence.existingClaim | default (include "loki.fullname" .) }}
{{- else }}
emptyDir: {}
{{- end }}
{{- else if .Values.persistence.existingClaim }}
- name: storage
persistentVolumeClaim:
claimName: {{ .Values.persistence.existingClaim }}
{{- else }}
volumeClaimTemplates:
- metadata:
name: storage
annotations:
{{- toYaml .Values.persistence.annotations | nindent 8 }}
spec:
accessModes:
{{- toYaml .Values.persistence.accessModes | nindent 8 }}
resources:
requests:
storage: {{ .Values.persistence.size | quote }}
storageClassName: {{ .Values.persistence.storageClassName }}
{{- end }}

17 changes: 11 additions & 6 deletions production/helm/loki/values.yaml
@@ -11,7 +11,7 @@ affinity: {}
# - loki
# topologyKey: "kubernetes.io/hostname"

## Deployment annotations
## StatefulSet annotations
annotations: {}

# enable tracing for debugging; requires Jaeger installed and the correct jaeger_agent_host specified
@@ -20,7 +20,6 @@ tracing:

config:
auth_enabled: false

ingester:
chunk_idle_period: 15m
chunk_block_size: 262144
@@ -41,6 +40,8 @@ config:
# consistentreads: true
limits_config:
enforce_metric_name: false
reject_old_samples: true
reject_old_samples_max_age: 168h
schema_config:
configs:
- from: 2018-04-15
@@ -59,8 +60,9 @@ config:
directory: /data/loki/chunks
chunk_store_config:
max_look_back_period: 0

deploymentStrategy: RollingUpdate
table_manager:
retention_deletes_enabled: false
retention_period: 0

image:
repository: grafana/loki
@@ -77,8 +79,6 @@ livenessProbe:
port: http-metrics
initialDelaySeconds: 45

minReadySeconds: 0

## Enable persistence using Persistent Volume Claims
networkPolicy:
enabled: false
@@ -108,6 +108,8 @@ podAnnotations:
prometheus.io/scrape: "true"
prometheus.io/port: "http-metrics"

podManagementPolicy: OrderedReady

## Assign a PriorityClassName to pods if set
# priorityClassName:

@@ -159,3 +161,6 @@ tolerations: []
podDisruptionBudget: {}
# minAvailable: 1
# maxUnavailable: 1

updateStrategy:
type: RollingUpdate
