This repository contains the following parts:

- Docker files and scripts needed to create Docker containers with the Kerberos components in them (and some other stuff) - there are 2 containers created: a basic Kubernetes-supporting container, and the sidecar needed for client containers
- Kerberos server Helm chart - which deploys a pod with two containers: `kdc` and `kadmin`. This is based on the work in https://github.com/jeffgrunewald/kuberos
- Kerberos client Helm chart - this deploys a pod with two containers:
  - A basic container that does nothing (but has the needed Kerberos configuration in it)
  - A sidecar that is responsible for running `kinit` periodically and obtaining a Kerberos TGT for initial access. It stores the tickets in a memory-backed volume that is accessible by the first container, thus allowing it to transparently authenticate with Kerberos (see the sketch below). This is based on the work in https://github.com/edseymour/kinit-sidecar
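
For reference, the sidecar's job boils down to running `kinit` from a keytab into a ticket cache that lives on a volume shared with the client container. Below is a minimal sketch of such a loop - the principal, keytab path and environment-variable names are illustrative assumptions, not the actual sidecar script (the cache path and the 3600-second interval match the logs shown later):

```
#!/bin/bash
# Illustrative sketch only - the real sidecar script may differ.
PRINCIPAL="${PRINCIPAL:[email protected]}"        # assumption: principal to authenticate as
KEYTAB="${KEYTAB:-/krb5/krb5.keytab}"                     # assumption: where the keytab secret is mounted
export KRB5CCNAME="FILE:/tmp/ccache/krb5kdc_ccache"       # ticket cache on the volume shared with the client

while true; do
  kinit -kt "$KEYTAB" "$PRINCIPAL"   # obtain/refresh the TGT from the keytab
  klist                              # print current tickets to the container log
  sleep 3600                         # renew roughly once an hour
done
```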
This section assumes that you're deploying on an Iguazio system, but it should work the same for any k8s cluster that you may have lying around. The main point about deploying on Iguazio is that the Docker containers need to be created on the app cluster, but the Helm charts are installed from the data cluster. Other than that, there should be no issue.
The steps are as follows (assuming you are starting from the root directory, right here):
- Create the server Docker image. Run the following command on the app cluster:

  ```
  cd docker/server
  docker build -f Dockerfile -t kuberos:latest ./
  ```

  (You can use a different container tag, but then you need to modify the information in the `values.yaml` file accordingly)
- Go to the `kuberos` Helm chart directory and examine the values in `values.yaml`. By default you don't need to change anything. Some things you may want to change:
  - If you tagged the container differently, then you need to change the `containers.image` and `containers.tag` values to the tag you selected (see the example below)
  - If you're feeling brave, you can have the Kerberos DB stored in a PVC; to do that you need to modify the values in `kdc.persistence`. I haven't really checked it yet
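
  For example, if you built and tagged the image as `kuberos:v2` (an arbitrary tag used here only for illustration), you can either edit `values.yaml` or override those values at install time:

  ```
  helm -n default-tenant install kuberos ./ \
    --set containers.image=kuberos \
    --set containers.tag=v2
  ```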
- Once you're satisfied, deploy the Helm chart:

  ```
  cd kuberos
  helm -n default-tenant install kuberos ./
  ```

  Assuming this worked, you should get a nice message explaining a lot of stuff. At this point you can check that the pods were created:

  ```
  $ kubectl -n default-tenant get pods
  NAME                    READY   STATUS    RESTARTS   AGE
  kuberos-kuberos-kdc-0   2/2     Running   0          52s
  ```

  (Of course, other containers will be there...)
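
  The chart also creates the service that gives the KDC pod its stable DNS name (the name used in the host principals in the next step). You can verify it as well - the service name below is inferred from those DNS names, so adjust it if your deployment differs:

  ```
  kubectl -n default-tenant get svc | grep kuberos
  ```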
- Log in to the `kadmin` container to configure stuff:

  ```
  kubectl -n default-tenant exec -ti kuberos-kuberos-kdc-0 --container kadmin -- /bin/bash
  ```

  Now run `kadmin.local` and perform the following commands (it's an interactive shell):

  ```
  [root@kuberos-kuberos-kdc-0 bin]# kadmin.local
  Authenticating as principal root/[email protected] with password.
  kadmin.local:  addprinc -pw testpasswd iguazio
  WARNING: no policy specified for [email protected]; defaulting to no policy
  Principal "[email protected]" created.
  kadmin.local:  addprinc -pw testpasswd krbtest
  WARNING: no policy specified for [email protected]; defaulting to no policy
  Principal "[email protected]" created.
  kadmin.local:  addprinc -randkey host/kuberos-kuberos-kdc-0.kuberos-kuberos-kdc.kerberos.svc.cluster.local
  WARNING: no policy specified for host/kuberos-kuberos-kdc-0.kuberos-kuberos-kdc.kerberos.svc.cluster.local@GODEVELOPER.NET; defaulting to no policy
  Principal "host/kuberos-kuberos-kdc-0.kuberos-kuberos-kdc.kerberos.svc.cluster.local@GODEVELOPER.NET" created.
  kadmin.local:  ktadd iguazio host/kuberos-kuberos-kdc-0.kuberos-kuberos-kdc.kerberos.svc.cluster.local
  Entry for principal iguazio with kvno 2, encryption type aes256-cts-hmac-sha1-96 added to keytab FILE:/etc/krb5.keytab.
  Entry for principal iguazio with kvno 2, encryption type aes128-cts-hmac-sha1-96 added to keytab FILE:/etc/krb5.keytab.
  Entry for principal host/kuberos-kuberos-kdc-0.kuberos-kuberos-kdc.kerberos.svc.cluster.local with kvno 2, encryption type aes256-cts-hmac-sha1-96 added to keytab FILE:/etc/krb5.keytab.
  Entry for principal host/kuberos-kuberos-kdc-0.kuberos-kuberos-kdc.kerberos.svc.cluster.local with kvno 2, encryption type aes128-cts-hmac-sha1-96 added to keytab FILE:/etc/krb5.keytab.
  kadmin.local:  exit
  ```

  To list the principals:

  ```
  kadmin.local:  list_principals
  K/[email protected]
  host/kuberos-kuberos-kdc-0.kuberos-kuberos-kdc.kerberos.svc.cluster.local@GODEVELOPER.NET
  [email protected]
  kadmin/[email protected]
  kadmin/[email protected]
  kadmin/kuberos-kuberos-kdc-0.kuberos-kuberos-kdc.kerberos.svc.cluster.local@GODEVELOPER.NET
  kiprop/kuberos-kuberos-kdc-0.kuberos-kuberos-kdc.kerberos.svc.cluster.local@GODEVELOPER.NET
  [email protected]
  krbtgt/[email protected]
  kadmin.local:  exit
  ```
  What these commands do is create 3 principals - 2 users (`iguazio` and `krbtest`), and a single host principal which corresponds to the server pod (assuming you didn't change the pod's name by messing around with the Helm parameters). Then they use `ktadd` to save the keys for the `iguazio` user and the server host to the `/etc/krb5.keytab` file - this file will be used to authenticate without a password later.
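
  If you prefer to script this instead of typing into the interactive shell, the same setup can be driven non-interactively with `kadmin.local -q`. This is just a sketch of the session above - adjust the principal names and the pod's DNS name to your deployment:

  ```
  # Run inside the kadmin container - mirrors the interactive session above
  HOSTPRINC=host/kuberos-kuberos-kdc-0.kuberos-kuberos-kdc.kerberos.svc.cluster.local
  kadmin.local -q "addprinc -pw testpasswd iguazio"
  kadmin.local -q "addprinc -pw testpasswd krbtest"
  kadmin.local -q "addprinc -randkey $HOSTPRINC"
  kadmin.local -q "ktadd iguazio $HOSTPRINC"
  kadmin.local -q "list_principals"
  ```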
- Create another keytab called `/etc/hdfs.keytab` - for now you can just copy the `krb5.keytab` file (see below). It's needed because the `generate_keytab_secret` script expects it to also exist (it is used later for HDFS configuration)
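
  A simple way to do that, from inside the `kadmin` container:

  ```
  cp /etc/krb5.keytab /etc/hdfs.keytab
  ```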
- Save the `krb5.keytab` file to a k8s secret (it will be consumed by the client pod):

  ```
  source generate_keytab_secret
  ```

  This will save the keytab to a local file and generate a k8s secret from it, called `secret/krb5-keytab`
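
  If you're curious what that amounts to, the equivalent manual steps are roughly the following - a sketch only, assuming the pod and secret names used above; the actual script may differ (for example, it may also include `/etc/hdfs.keytab`):

  ```
  # Copy the keytab out of the kadmin container, then turn it into a secret
  kubectl -n default-tenant cp kuberos-kuberos-kdc-0:/etc/krb5.keytab ./krb5.keytab -c kadmin
  kubectl -n default-tenant create secret generic krb5-keytab --from-file=krb5.keytab
  ```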
Test Persistence

Check that the StatefulSet is up:

```
kubectl get statefulset -n kerberos
NAME                  READY   AGE
kuberos-kuberos-kdc   1/1     22m
```

Delete the KDC pod and watch it come back up:

```
kubectl get pods -n kerberos
NAME                    READY   STATUS    RESTARTS   AGE
kuberos-kuberos-kdc-0   2/2     Running   0          2m5s

kubectl delete pod kuberos-kuberos-kdc-0 -n kerberos
pod "kuberos-kuberos-kdc-0" deleted

kubectl get pods -n kerberos
NAME                    READY   STATUS            RESTARTS   AGE
kuberos-kuberos-kdc-0   0/2     PodInitializing   0          12s

kubectl get pods -n kerberos
NAME                    READY   STATUS    RESTARTS   AGE
kuberos-kuberos-kdc-0   2/2     Running   0          24s
```

Then verify that the principals survived the restart:

```
kubectl -n kerberos exec -ti kuberos-kuberos-kdc-0 --container kadmin -- /bin/bash
[root@kuberos-kuberos-kdc-0 /]# kadmin.local
Authenticating as principal root/[email protected] with password.
kadmin.local:  list_principals
K/[email protected]
host/kuberos-kuberos-kdc-0.kuberos-kuberos-kdc.kerberos.svc.cluster.local@GODEVELOPER.NET
[email protected]
kadmin/[email protected]
kadmin/[email protected]
kadmin/kuberos-kuberos-kdc-0.kuberos-kuberos-kdc.kerberos.svc.cluster.local@GODEVELOPER.NET
kiprop/kuberos-kuberos-kdc-0.kuberos-kuberos-kdc.kerberos.svc.cluster.local@GODEVELOPER.NET
[email protected]
krbtgt/[email protected]
kadmin.local:
```
Now that you have a server running, you need to deploy a client pod that will make proper use of it. The steps to do that are as follows:
- Create the sidecar Docker image (must be run on the `app-server`):

  ```
  cd docker/sidecar
  docker build -f Dockerfile -t krb_sidecar:latest ./
  ```
- Go to the `krb-client` directory and examine the values in the `values.yaml` file. Again, you shouldn't need to touch anything unless you modified things during the Docker image creation or elsewhere
- Deploy the `krb-client` Helm chart:

  ```
  helm -n default-tenant install krb-client ./
  ```

  This should generate a pod with 2 containers in it (client and sidecar).

  **Important:** you must perform this operation only after creating the k8s secret (the `generate_keytab_secret` step of the server deployment), otherwise the keytab will not be mounted to the client and Kerberos authentication will fail.
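
  To confirm that the secret from the server deployment is in place before installing (the secret name is the one created by `generate_keytab_secret`):

  ```
  kubectl -n default-tenant get secret krb5-keytab
  ```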
- To verify that the sidecar is able to initiate work with Kerberos, look at its logs:

  ```
  $ kubectl -n default-tenant logs -f krb-client-krb-client-client -c sidecar
  *** kinit at +2020-12-02
  Using default cache: /tmp/ccache/krb5kdc_ccache
  Using principal: [email protected]
  Authenticated to Kerberos v5
  Ticket cache: FILE:/tmp/ccache/krb5kdc_ccache
  Default principal: [email protected]

  Valid starting     Expires            Service principal
  12/02/20 13:29:57  12/03/20 13:29:56  krbtgt/[email protected]
  *** Waiting for 3600 seconds
  ```

  If a ticket is displayed (for `krbtgt/[email protected]`), then authentication was successful and you're good to go
Now the last step is to verify that Kerberos works. We verify it by doing `ssh` to the `kadmin` container as the `iguazio` user, whose key we saved to the `keytab` file.

The steps are:
- Log in to the `krbclient` container in the client pod:

  ```
  kubectl -n default-tenant exec -ti krb-client-krb-client-client -c krbclient -- /bin/bash
  ```
- Verify that the sidecar was indeed able to share the ticket cache with this container:

  ```
  $ klist
  Ticket cache: FILE:/tmp/ccache/krb5kdc_ccache
  Default principal: [email protected]

  Valid starting     Expires            Service principal
  12/02/20 13:29:57  12/03/20 13:29:56  krbtgt/[email protected]
  ```
- Perform `ssh` to the server:

  ```
  ssh iguazio@kuberos-kuberos-kdc-0.kuberos-kuberos-kdc.default-tenant.svc.cluster.local
  ```

  This should work without you specifying any password - if you are asked for one, then something went totally wrong. To compare, try to log in as the `krbtest` user, which is a Kerberos principal but one whose key is not in the `keytab`:

  ```
  $ ssh krbtest@kuberos-kuberos-kdc-0.kuberos-kuberos-kdc.default-tenant.svc.cluster.local
  krbtest@kuberos-kuberos-kdc-0.kuberos-kuberos-kdc.default-tenant.svc.cluster.local's password:
  ```

  It will ask for a password, which is expected.
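
  If the `iguazio` login unexpectedly prompts for a password, a common culprit is GSSAPI authentication being disabled on the ssh client side. You can force it explicitly to narrow things down (these are standard OpenSSH options; whether the server accepts them depends on the `sshd_config` baked into the image):

  ```
  ssh -o GSSAPIAuthentication=yes -o GSSAPIDelegateCredentials=yes \
      iguazio@kuberos-kuberos-kdc-0.kuberos-kuberos-kdc.default-tenant.svc.cluster.local
  ```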
- If you now exit the `ssh` connection and look at the Kerberos tickets on the client side, you'll see a new ticket for the server host besides the TGT that was there earlier:

  ```
  $ klist
  Ticket cache: FILE:/tmp/ccache/krb5kdc_ccache
  Default principal: [email protected]

  Valid starting     Expires            Service principal
  12/02/20 13:29:57  12/03/20 13:29:56  krbtgt/[email protected]
  12/02/20 13:35:16  12/03/20 13:29:56  host/kuberos-kuberos-kdc-0.kuberos-kuberos-kdc.default-tenant.svc.cluster.local@EXAMPLE.COM
  ```
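
  You can also request that host service ticket directly, without going through `ssh`, by using `kvno` with the same host principal name that appears in the `klist` output above:

  ```
  kvno host/kuberos-kuberos-kdc-0.kuberos-kuberos-kdc.default-tenant.svc.cluster.local
  ```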
That's it! We have Kerberos working now. Much rejoice.