The manifests here now require Kubernetes 1.8. On earlier versions, use v2.1.0.
Transparent Kafka setup that you can grow with. Good for both experiments and production.
How to use:
- Good to know: you'll likely want to fork this repo. It prioritizes clarity over configurability, using plain manifests and .properties files; no client-side logic.
- Run a Kubernetes cluster, Minikube or a real one.
- Quickstart: use the `kubectl apply`s below (see the sketch after this list).
- Have a look at addons, or the official forks:
  - kubernetes-kafka-small for single-node clusters like Minikube.
  - StreamingMicroservicesPlatform: like Confluent's platform quickstart, but for Kubernetes.
- Join the discussion in issues and PRs.
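For the impatient, here is a minimal sketch of the `kubectl apply` sequence that the rest of this README walks through; the storage class files assume Minikube (see below for GKE):

```shell
# Quickstart sketch; assumes Minikube storage classes and that the kafka
# namespace is created by the manifests or by you beforehand.
kubectl apply -f configure/minikube-storageclass-zookeeper.yml
kubectl apply -f configure/minikube-storageclass-broker.yml
kubectl apply -f ./zookeeper/
kubectl apply -f ./kafka/
kubectl --namespace kafka get pods -w
```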
No readable readme can properly introduce both Kafka and Kubernetes, but we think the combination of the two is a great backbone for microservices. Back when we read Newman we were beginners with both. Now we've read Kleppmann, Confluent and SRE and enjoy this "Streaming Platform" lock-in 😄.
We also think the plain-yaml approach of this project is easier to understand and evolve than helm charts.
Keep an eye on `kubectl --namespace kafka get pods -w`.
The goal is to provide:
- Bootstrap servers: `kafka-0.broker.kafka.svc.cluster.local:9092,kafka-1.broker.kafka.svc.cluster.local:9092,kafka-2.broker.kafka.svc.cluster.local:9092`
- Zookeeper at `zookeeper.kafka.svc.cluster.local:2181`
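As an illustration only (no client-side logic is part of this repo), a throwaway pod can exercise those bootstrap servers; the image name and tool path below are assumptions, so substitute whatever Kafka client you prefer:

```shell
# Hypothetical smoke test; image and script path are assumptions, not part of the manifests.
kubectl -n kafka run kafka-client --rm -ti --restart=Never --image=solsson/kafka:1.0 -- \
  ./bin/kafka-console-producer.sh \
    --broker-list kafka-0.broker.kafka.svc.cluster.local:9092,kafka-1.broker.kafka.svc.cluster.local:9092,kafka-2.broker.kafka.svc.cluster.local:9092 \
    --topic test1
```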
For Minikube run `kubectl apply -f configure/minikube-storageclass-broker.yml; kubectl apply -f configure/minikube-storageclass-zookeeper.yml`.
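What those files contain is, in essence, a StorageClass backed by Minikube's hostpath provisioner; a sketch only, the names and details in `configure/` may differ:

```yaml
# Sketch only; see configure/minikube-storageclass-broker.yml for the real definition.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: kafka-broker
provisioner: k8s.io/minikube-hostpath
```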
There's a similar setup for GKE, `configure/gke-*`. You might want to tweak it before creating.
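On GKE the tweaks usually concern the persistent-disk type and zone; a hedged sketch of such a storage class, not necessarily identical to the `configure/gke-*` files:

```yaml
# Sketch only; check configure/gke-* for the real definitions before applying.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: kafka-broker
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard        # or pd-ssd
  zone: europe-west1-d     # pick the zone your nodes run in
```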
The Kafka book recommends that Kafka has its own Zookeeper cluster with at least 5 instances.
```shell
kubectl apply -f ./zookeeper/
```
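Once the zookeeper pods are Running you can poke the ensemble from inside one of them; the pod name and tool path below are assumptions, so adjust to what `get pods` shows:

```shell
# Optional check; pod name and path are guesses based on the manifests in ./zookeeper/.
kubectl -n kafka exec pzoo-0 -- ./bin/zookeeper-shell.sh localhost:2181 ls /
# A fresh ensemble should answer with something like: [zookeeper]
```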
To support automatic migration when an availability zone becomes unavailable, we mix persistent and ephemeral storage.
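Concretely, the mix means some replicas claim persistent volumes while others use node-local ephemeral storage; schematic StatefulSet fragments (not the actual manifests) to show the difference:

```yaml
# Schematic fragment only; see ./zookeeper/ for the real manifests.
# Persistent replicas: each pod claims a volume that survives rescheduling.
volumeClaimTemplates:
- metadata:
    name: data
  spec:
    accessModes: ["ReadWriteOnce"]
    resources:
      requests:
        storage: 1Gi
```

```yaml
# Ephemeral replicas: data lives only as long as the pod, so it can move freely between zones.
volumes:
- name: data
  emptyDir: {}
```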
```shell
kubectl apply -f ./kafka/
```
You might want to verify in logs that Kafka found its own DNS name(s) correctly. Look for records like:
```shell
kubectl -n kafka logs kafka-0 | grep "Registered broker"
# INFO Registered broker 0 at path /brokers/ids/0 with addresses: PLAINTEXT -> EndPoint(kafka-0.broker.kafka.svc.cluster.local,9092,PLAINTEXT)
```
That's it. Just add business value 😉.
For clusters that enforce RBAC there's a minimal set of policies in `rbac-namespace-default/`:

```shell
kubectl apply -f rbac-namespace-default/
```
For example, rack awareness can fail without this, with `kubectl -n kafka logs kafka-0 -c init-config` showing `Error from server (Forbidden): pods "kafka-0" is forbidden: User "system:serviceaccount:kafka:default" cannot get pods in the namespace "kafka": Unknown user "system:serviceaccount:kafka:default"`.
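The failing operation in that message is a plain `get pods`, so the policies likely amount to letting the namespace's default service account read pod metadata. A sketch of what such a Role and RoleBinding could look like (the names are made up; see `rbac-namespace-default/` for the actual manifests):

```yaml
# Sketch only; the real policies are in rbac-namespace-default/.
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-reader          # hypothetical name
  namespace: kafka
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: default-pod-reader  # hypothetical name
  namespace: kafka
subjects:
- kind: ServiceAccount
  name: default
  namespace: kafka
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```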
Tests are based on the kube-test concept. Like the rest of this repo they have `kubectl` as the only local dependency.
Running the self-tests is optional. They generate some load, but they indicate whether the platform is working.
- To include tests, replace `apply -f` with `apply -R -f` in your `kubectl`s above.
- Anything that isn't READY in `kubectl get pods -l test-type=readiness --namespace=test-kafka` is a failed test (a sketch of such a test pod follows below).
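To make that convention concrete: a test is a pod labeled `test-type=readiness`, presumably with a readiness probe that only succeeds once its check has passed. The sketch below is illustrative only; the pod name, image and test script are made up:

```yaml
# Illustrative only; the real tests live in this repo's test manifests.
apiVersion: v1
kind: Pod
metadata:
  name: example-readiness-test   # hypothetical name
  namespace: test-kafka
  labels:
    test-type: readiness
spec:
  containers:
  - name: test
    image: solsson/kafka:1.0     # image is an assumption
    # run-the-test.sh is a placeholder for an actual check, e.g. produce and consume a message
    command: ["/bin/bash", "-c", "run-the-test.sh && touch /tmp/ready && sleep 86400"]
    readinessProbe:
      exec:
        command: ["test", "-f", "/tmp/ready"]
```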