Prepare for release 0.59.0 (signalfx#536)
* Prepare for release 0.59.0

* fix appVersion

* prefer using appVersion
atoulme authored Sep 19, 2022
1 parent e1767ec commit f008069
Showing 60 changed files with 224 additions and 201 deletions.
9 changes: 9 additions & 0 deletions CHANGELOG.md
Original file line number Diff line number Diff line change
@@ -4,10 +4,19 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).

## Unreleased

## [0.59.0] - 2022-09-17

### Added

- A way to provide a custom image for init container patching host log directories (#534, #535)

### Changed

- Upgrade splunk-otel-collector image to 0.59.1 (#536)
- [BREAKING CHANGE] The datatypes of `filelog.force_flush_period` and `filelog.poll_interval` were
  changed back from map to string due to upstream changes.
See [upgrade guidelines](https://github.com/signalfx/splunk-otel-collector-chart/blob/main/UPGRADING.md#0580-0590)

## [0.58.0] - 2022-08-24

### Changed
18 changes: 18 additions & 0 deletions UPGRADING.md
@@ -1,5 +1,23 @@
# Upgrade guidelines

## 0.58.0 to 0.59.0
[receiver/filelogreceiver] The datatypes of `force_flush_period` and `poll_interval` were changed from map to string.

If you are using a custom filelog receiver plugin, change its config from:
```yaml
filelog:
  poll_interval:
    duration: 200ms
  force_flush_period:
    duration: "0"
```
to:
```yaml
filelog:
  poll_interval: 200ms
  force_flush_period: "0"
```
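If you pass filelog settings through Helm values instead of a custom plugin, the same string form applies there. A hypothetical values override is sketched below; `agent.config` is assumed to be the chart's generic agent-config override hook, so verify the exact key path against your `values.yaml` before relying on it:

```yaml
# Sketch only: migrating filelog duration settings in a Helm values override.
# The `agent.config` path is an assumption; check it against the chart's values.yaml.
agent:
  config:
    receivers:
      filelog:
        poll_interval: 200ms      # plain duration string, no longer a map
        force_flush_period: "0"   # "0" disables force flush
```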

## 0.57.1 to 0.58.0
[receiver/filelogreceiver] The datatypes of `force_flush_period` and `poll_interval` were changed from
string to map. Because of that, the default values in the Helm chart caused problems [#519](https://github.com/signalfx/splunk-otel-collector-chart/issues/519)
4 changes: 2 additions & 2 deletions helm-charts/splunk-otel-collector/Chart.yaml
@@ -1,7 +1,7 @@
apiVersion: v2
name: splunk-otel-collector
version: 0.58.0
appVersion: 0.58.0
version: 0.59.0
appVersion: 0.59.1
description: Splunk OpenTelemetry Collector for Kubernetes
icon: https://github.com/signalfx/splunk-otel-collector-chart/tree/main/splunk.png
type: application
@@ -247,16 +247,14 @@ receivers:
start_at: beginning
include_file_path: true
include_file_name: false
poll_interval:
duration: 200ms
poll_interval: 200ms
max_concurrent_files: 1024
encoding: utf-8
fingerprint_size: 1kb
max_log_size: 1MiB
# Disable force flush until this issue is fixed:
# https://github.com/open-telemetry/opentelemetry-log-collection/issues/292
force_flush_period:
duration: "0"
force_flush_period: "0"
operators:
{{- if not .Values.logsCollection.containers.containerRuntime }}
- type: router
6 changes: 3 additions & 3 deletions rendered/manifests/agent-only/clusterRole.yaml
@@ -6,12 +6,12 @@ metadata:
name: default-splunk-otel-collector
labels:
app.kubernetes.io/name: splunk-otel-collector
helm.sh/chart: splunk-otel-collector-0.58.0
helm.sh/chart: splunk-otel-collector-0.59.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: default
app.kubernetes.io/version: "0.58.0"
app.kubernetes.io/version: "0.59.1"
app: splunk-otel-collector
chart: splunk-otel-collector-0.58.0
chart: splunk-otel-collector-0.59.0
release: default
heritage: Helm
rules:
6 changes: 3 additions & 3 deletions rendered/manifests/agent-only/clusterRoleBinding.yaml
@@ -6,12 +6,12 @@ metadata:
name: default-splunk-otel-collector
labels:
app.kubernetes.io/name: splunk-otel-collector
helm.sh/chart: splunk-otel-collector-0.58.0
helm.sh/chart: splunk-otel-collector-0.59.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: default
app.kubernetes.io/version: "0.58.0"
app.kubernetes.io/version: "0.59.1"
app: splunk-otel-collector
chart: splunk-otel-collector-0.58.0
chart: splunk-otel-collector-0.59.0
release: default
heritage: Helm
roleRef:
6 changes: 3 additions & 3 deletions rendered/manifests/agent-only/configmap-agent.yaml
@@ -6,12 +6,12 @@ metadata:
name: default-splunk-otel-collector-otel-agent
labels:
app.kubernetes.io/name: splunk-otel-collector
helm.sh/chart: splunk-otel-collector-0.58.0
helm.sh/chart: splunk-otel-collector-0.59.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: default
app.kubernetes.io/version: "0.58.0"
app.kubernetes.io/version: "0.59.1"
app: splunk-otel-collector
chart: splunk-otel-collector-0.58.0
chart: splunk-otel-collector-0.59.0
release: default
heritage: Helm
data:
6 changes: 3 additions & 3 deletions rendered/manifests/agent-only/configmap-cluster-receiver.yaml
@@ -6,12 +6,12 @@ metadata:
name: default-splunk-otel-collector-otel-k8s-cluster-receiver
labels:
app.kubernetes.io/name: splunk-otel-collector
helm.sh/chart: splunk-otel-collector-0.58.0
helm.sh/chart: splunk-otel-collector-0.59.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: default
app.kubernetes.io/version: "0.58.0"
app.kubernetes.io/version: "0.59.1"
app: splunk-otel-collector
chart: splunk-otel-collector-0.58.0
chart: splunk-otel-collector-0.59.0
release: default
heritage: Helm
data:
10 changes: 5 additions & 5 deletions rendered/manifests/agent-only/daemonset.yaml
@@ -6,12 +6,12 @@ metadata:
name: default-splunk-otel-collector-agent
labels:
app.kubernetes.io/name: splunk-otel-collector
helm.sh/chart: splunk-otel-collector-0.58.0
helm.sh/chart: splunk-otel-collector-0.59.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: default
app.kubernetes.io/version: "0.58.0"
app.kubernetes.io/version: "0.59.1"
app: splunk-otel-collector
chart: splunk-otel-collector-0.58.0
chart: splunk-otel-collector-0.59.0
release: default
heritage: Helm
spec:
Expand All @@ -29,7 +29,7 @@ spec:
app: splunk-otel-collector
release: default
annotations:
checksum/config: 1f5a2d3754ba7f6d1ae0d3a7bfe6b1d8073d680194e38f7e81aa1f63d3557117
checksum/config: 6127c8202ccd5423b9659494eb0e2f6c8c7997e325a534740db946b69b33a1ab
kubectl.kubernetes.io/default-container: otel-collector
spec:
hostNetwork: true
@@ -77,7 +77,7 @@ spec:
containerPort: 9411
hostPort: 9411
protocol: TCP
image: quay.io/signalfx/splunk-otel-collector:0.58.0
image: quay.io/signalfx/splunk-otel-collector:0.59.1
imagePullPolicy: IfNotPresent
env:
- name: SPLUNK_MEMORY_TOTAL_MIB
10 changes: 5 additions & 5 deletions rendered/manifests/agent-only/deployment-cluster-receiver.yaml
@@ -6,13 +6,13 @@ metadata:
name: default-splunk-otel-collector-k8s-cluster-receiver
labels:
app.kubernetes.io/name: splunk-otel-collector
helm.sh/chart: splunk-otel-collector-0.58.0
helm.sh/chart: splunk-otel-collector-0.59.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: default
app.kubernetes.io/version: "0.58.0"
app.kubernetes.io/version: "0.59.1"
app: splunk-otel-collector
component: otel-k8s-cluster-receiver
chart: splunk-otel-collector-0.58.0
chart: splunk-otel-collector-0.59.0
release: default
heritage: Helm
app.kubernetes.io/component: otel-k8s-cluster-receiver
Expand All @@ -30,7 +30,7 @@ spec:
component: otel-k8s-cluster-receiver
release: default
annotations:
checksum/config: 47e9ac19bd4554bd48dadf7d1912c62d347e1dd15b1fe601d6568acd51c9c931
checksum/config: 685d7edf7b6621d94561f94fc6913e9aec36b31a924dd51e97f87958c30be3bc
spec:
serviceAccountName: default-splunk-otel-collector
nodeSelector:
Expand All @@ -40,7 +40,7 @@ spec:
command:
- /otelcol
- --config=/conf/relay.yaml
image: quay.io/signalfx/splunk-otel-collector:0.58.0
image: quay.io/signalfx/splunk-otel-collector:0.59.1
imagePullPolicy: IfNotPresent
env:
- name: SPLUNK_MEMORY_TOTAL_MIB
6 changes: 3 additions & 3 deletions rendered/manifests/agent-only/secret-splunk.yaml
@@ -6,12 +6,12 @@ metadata:
name: splunk-otel-collector
labels:
app.kubernetes.io/name: splunk-otel-collector
helm.sh/chart: splunk-otel-collector-0.58.0
helm.sh/chart: splunk-otel-collector-0.59.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: default
app.kubernetes.io/version: "0.58.0"
app.kubernetes.io/version: "0.59.1"
app: splunk-otel-collector
chart: splunk-otel-collector-0.58.0
chart: splunk-otel-collector-0.59.0
release: default
heritage: Helm
type: Opaque
6 changes: 3 additions & 3 deletions rendered/manifests/agent-only/serviceAccount.yaml
@@ -6,11 +6,11 @@ metadata:
name: default-splunk-otel-collector
labels:
app.kubernetes.io/name: splunk-otel-collector
helm.sh/chart: splunk-otel-collector-0.58.0
helm.sh/chart: splunk-otel-collector-0.59.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: default
app.kubernetes.io/version: "0.58.0"
app.kubernetes.io/version: "0.59.1"
app: splunk-otel-collector
chart: splunk-otel-collector-0.58.0
chart: splunk-otel-collector-0.59.0
release: default
heritage: Helm
6 changes: 3 additions & 3 deletions rendered/manifests/eks-fargate/clusterRole.yaml
@@ -6,12 +6,12 @@ metadata:
name: default-splunk-otel-collector
labels:
app.kubernetes.io/name: splunk-otel-collector
helm.sh/chart: splunk-otel-collector-0.58.0
helm.sh/chart: splunk-otel-collector-0.59.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: default
app.kubernetes.io/version: "0.58.0"
app.kubernetes.io/version: "0.59.1"
app: splunk-otel-collector
chart: splunk-otel-collector-0.58.0
chart: splunk-otel-collector-0.59.0
release: default
heritage: Helm
rules:
6 changes: 3 additions & 3 deletions rendered/manifests/eks-fargate/clusterRoleBinding.yaml
@@ -6,12 +6,12 @@ metadata:
name: default-splunk-otel-collector
labels:
app.kubernetes.io/name: splunk-otel-collector
helm.sh/chart: splunk-otel-collector-0.58.0
helm.sh/chart: splunk-otel-collector-0.59.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: default
app.kubernetes.io/version: "0.58.0"
app.kubernetes.io/version: "0.59.1"
app: splunk-otel-collector
chart: splunk-otel-collector-0.58.0
chart: splunk-otel-collector-0.59.0
release: default
heritage: Helm
roleRef:
@@ -6,12 +6,12 @@ metadata:
name: default-splunk-otel-collector-cr-node-discoverer-script
labels:
app.kubernetes.io/name: splunk-otel-collector
helm.sh/chart: splunk-otel-collector-0.58.0
helm.sh/chart: splunk-otel-collector-0.59.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: default
app.kubernetes.io/version: "0.58.0"
app.kubernetes.io/version: "0.59.1"
app: splunk-otel-collector
chart: splunk-otel-collector-0.58.0
chart: splunk-otel-collector-0.59.0
release: default
heritage: Helm
data:
@@ -6,12 +6,12 @@ metadata:
name: default-splunk-otel-collector-otel-k8s-cluster-receiver
labels:
app.kubernetes.io/name: splunk-otel-collector
helm.sh/chart: splunk-otel-collector-0.58.0
helm.sh/chart: splunk-otel-collector-0.59.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: default
app.kubernetes.io/version: "0.58.0"
app.kubernetes.io/version: "0.59.1"
app: splunk-otel-collector
chart: splunk-otel-collector-0.58.0
chart: splunk-otel-collector-0.59.0
release: default
heritage: Helm
data:
6 changes: 3 additions & 3 deletions rendered/manifests/eks-fargate/configmap-gateway.yaml
@@ -6,12 +6,12 @@ metadata:
name: default-splunk-otel-collector-otel-collector
labels:
app.kubernetes.io/name: splunk-otel-collector
helm.sh/chart: splunk-otel-collector-0.58.0
helm.sh/chart: splunk-otel-collector-0.59.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: default
app.kubernetes.io/version: "0.58.0"
app.kubernetes.io/version: "0.59.1"
app: splunk-otel-collector
chart: splunk-otel-collector-0.58.0
chart: splunk-otel-collector-0.59.0
release: default
heritage: Helm
data:
10 changes: 5 additions & 5 deletions rendered/manifests/eks-fargate/deployment-cluster-receiver.yaml
@@ -6,13 +6,13 @@ metadata:
name: default-splunk-otel-collector-k8s-cluster-receiver
labels:
app.kubernetes.io/name: splunk-otel-collector
helm.sh/chart: splunk-otel-collector-0.58.0
helm.sh/chart: splunk-otel-collector-0.59.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: default
app.kubernetes.io/version: "0.58.0"
app.kubernetes.io/version: "0.59.1"
app: splunk-otel-collector
component: otel-k8s-cluster-receiver
chart: splunk-otel-collector-0.58.0
chart: splunk-otel-collector-0.59.0
release: default
heritage: Helm
app.kubernetes.io/component: otel-k8s-cluster-receiver
Expand All @@ -32,7 +32,7 @@ spec:
component: otel-k8s-cluster-receiver
release: default
annotations:
checksum/config: 83098903b5f1bdc68fea72b64cf54857242591653ae4fb3934c2bf4b3449fb48
checksum/config: cc187e80eac774a4a97ba13d88ed1f194396e659c24ce1af1699b29142af1e08
spec:
serviceAccountName: default-splunk-otel-collector
nodeSelector:
@@ -73,7 +73,7 @@ spec:
command:
- /otelcol
- --config=/splunk-messages/config.yaml
image: quay.io/signalfx/splunk-otel-collector:0.58.0
image: quay.io/signalfx/splunk-otel-collector:0.59.1
imagePullPolicy: IfNotPresent
env:
- name: SPLUNK_MEMORY_TOTAL_MIB
10 changes: 5 additions & 5 deletions rendered/manifests/eks-fargate/deployment-gateway.yaml
@@ -6,13 +6,13 @@ metadata:
name: default-splunk-otel-collector
labels:
app.kubernetes.io/name: splunk-otel-collector
helm.sh/chart: splunk-otel-collector-0.58.0
helm.sh/chart: splunk-otel-collector-0.59.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/instance: default
app.kubernetes.io/version: "0.58.0"
app.kubernetes.io/version: "0.59.1"
app: splunk-otel-collector
component: otel-collector
chart: splunk-otel-collector-0.58.0
chart: splunk-otel-collector-0.59.0
release: default
heritage: Helm
app.kubernetes.io/component: otel-collector
Expand All @@ -30,7 +30,7 @@ spec:
component: otel-collector
release: default
annotations:
checksum/config: 76f55e5257baf3c16fd519e4436ea2f42fab4391343258527b7e6fd1e1a335e6
checksum/config: 9c15ff266c83f17f5f5f50f5ef12879c5bb47d1fb46f91c0d217a973029f8682
spec:
serviceAccountName: default-splunk-otel-collector
nodeSelector:
Expand All @@ -40,7 +40,7 @@ spec:
command:
- /otelcol
- --config=/conf/relay.yaml
image: quay.io/signalfx/splunk-otel-collector:0.58.0
image: quay.io/signalfx/splunk-otel-collector:0.59.1
imagePullPolicy: IfNotPresent
env:
- name: SPLUNK_MEMORY_TOTAL_MIB