Modern programming often requires us to make strong guarantees about how our applications terminate. One way to do this is through the use of structured concurrency, which allows us to reason about parallel processes and how they relate to each other.
In this session, we will learn how to use Arrow’s Resource Domain-Specific Language (DSL) to reason about resource safety and avoid leaking resources, in the same way structured concurrency lets us reason about concurrent processes. We will also see how this can be combined with KotlinX Coroutines to build complex use cases in a simple and elegant way using Kotlin DSLs.
Libraries used:
- Ktor (Kotlin/Native server)
- SuspendApp and its Ktor integration
- Arrow (Resource, Either/Raise)
- KotlinX Coroutines
The project can only be built on Linux, because of the Postgres C API. (If you figure out how to build it on macOS (Arm64), please let me know.)
apt-get install libpq-dev
./gradlew build
docker build -t ktor-native-server:tag .
To run the deployment tasks, you need a local Kubernetes environment. The demo uses Docker Desktop (non-commercial license).
kubectl apply -f deployment/network.yaml
kubectl apply -f deployment/postgres.yaml
Be sure to update deployment/deploy.yaml with the current IP address of postgres in your cluster. Then you can apply the deploy.yaml file to your cluster.
kubectl apply -f deployment/deploy.yaml
When the ktor-native-server pods are running, you can run the post and get tests in apitest.http.
Your kubectl get all output should look similar to this:
NAME READY STATUS RESTARTS AGE
pod/ktornative-5b8c6bcdb5-54chv 1/1 Running 0 28m
pod/ktornative-5b8c6bcdb5-bnx62 1/1 Running 0 28m
pod/ktornative-5b8c6bcdb5-lc26b 1/1 Running 0 28m
pod/ktornative-5b8c6bcdb5-tqcdf 1/1 Running 0 28m
pod/postgres-78d65bf67-txg5b 1/1 Running 0 33m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 120m
service/loadbalancer LoadBalancer 10.105.172.85 localhost 8080:30821/TCP 115m
service/postgres ClusterIP 10.107.64.239 <none> 5432/TCP 33m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/ktornative 4/4 4 4 28m
deployment.apps/postgres 1/1 1 1 33m
NAME DESIRED CURRENT READY AGE
replicaset.apps/ktornative-5b8c6bcdb5 4 4 4 28m
replicaset.apps/postgres-78d65bf67 1 1 1 33m
See the webinar for more details.
We start with a simple Ktor server, which we will deploy to Kubernetes.
We use a variation of the Hello World example for Ktor with Kotlin/Native, and perform a rolling update between two versions of the same code. The code used in this step can be found in the step-1 branch.
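For reference, a minimal Kotlin/Native Hello World server with Ktor's CIO engine looks roughly like this; the route and port are illustrative, not necessarily the exact code in the branch:

```kotlin
import io.ktor.server.application.*
import io.ktor.server.cio.*
import io.ktor.server.engine.*
import io.ktor.server.response.*
import io.ktor.server.routing.*

fun main() {
    // Plain embeddedServer: the process blocks here until it is killed,
    // so a SIGTERM from Kubernetes stops it without draining in-flight requests.
    embeddedServer(CIO, port = 8080) {
        routing {
            get("/") { call.respondText("Hello, world!") }
        }
    }.start(wait = true)
}
```

The results below come from running a load test against this version during the rolling update.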
Transactions: 2521 hits
Availability: 98.55 %
Elapsed time: 12.04 secs
We update the original code to use the SuspendApp library, and Resource from Arrow, to close the embedded server gracefully.
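A sketch of what that can look like, assuming the SuspendApp 0.4.x package layout; the route, port, and shutdown timeouts are illustrative:

```kotlin
import arrow.continuations.SuspendApp
import arrow.fx.coroutines.resourceScope
import io.ktor.server.application.*
import io.ktor.server.cio.*
import io.ktor.server.engine.*
import io.ktor.server.response.*
import io.ktor.server.routing.*
import kotlinx.coroutines.awaitCancellation

fun main() = SuspendApp {
    resourceScope {
        // install ties the server's lifecycle to the resource scope: on SIGTERM/SIGINT
        // the release action runs and the server is stopped gracefully.
        install({
            embeddedServer(CIO, port = 8080) {
                routing { get("/") { call.respondText("Hello, world!") } }
            }.also { it.start(wait = false) }
        }) { server, _ -> server.stop(gracePeriodMillis = 500, timeoutMillis = 1_000) }

        awaitCancellation() // keep the app alive until it is cancelled
    }
}
```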
Then we perform the same test again.
Transactions: 1620 hits
Availability: 99.39 %
Elapsed time: 22.93 secs
We update the code to use the SuspendApp Ktor integration's server instead of embeddedServer; this introduces a small delay that gives the network (LoadBalancer/Ingress) time to remove our pods from the IP tables. This code can be found in the step-3 branch.
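A sketch of that integration, again assuming the SuspendApp 0.4.x package layout; the route and port are illustrative:

```kotlin
import arrow.continuations.SuspendApp
import arrow.continuations.ktor.server
import arrow.fx.coroutines.resourceScope
import io.ktor.server.application.*
import io.ktor.server.cio.*
import io.ktor.server.response.*
import io.ktor.server.routing.*
import kotlinx.coroutines.awaitCancellation

fun main() = SuspendApp {
    resourceScope {
        // server() is a Resource-aware wrapper around embeddedServer: on shutdown it first
        // waits for a pre-wait period so the LoadBalancer/Ingress can drop the pod from its
        // endpoints, and only then stops accepting connections and drains in-flight requests.
        server(CIO, port = 8080) {
            routing { get("/") { call.respondText("Hello, world!") } }
        }
        awaitCancellation()
    }
}
```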
Additional references on why this delay is needed:
- AWS Load Balancer
- Kubernetes' dirty endpoint secret and Ingress
- Graceful shutdown and zero downtime deployments in Kubernetes
- The Gotchas of Zero-Downtime Traffic /w Kubernetes - Leigh Capili, Weaveworks
- Spring Boot - Investigate shutdown delay option
Transactions: 21618 hits
Availability: 100.00 %
Elapsed time: 38.04 secs
- Image on DockerHub: vergauwensimon/ktor-native-server:20230215-222243
We add Http and Postgres configuration using Kotlin/Native's getenv and Arrow's Either/Raise to accumulate configuration errors.
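A rough sketch of that configuration loading; the Env shape, variable names, and error messages are assumptions, not the project's exact code:

```kotlin
import arrow.core.NonEmptyList
import arrow.core.raise.Raise
import arrow.core.raise.either
import arrow.core.raise.zipOrAccumulate
import kotlinx.cinterop.toKString
import platform.posix.getenv

data class Env(val host: String, val port: Int, val postgresUrl: String)

// Read an environment variable, or raise a descriptive error for it.
// Newer Kotlin/Native versions may require opting in to ExperimentalForeignApi for getenv/toKString.
fun Raise<String>.env(name: String): String =
    getenv(name)?.toKString() ?: raise("Missing environment variable: $name")

// Accumulate all missing or invalid variables instead of failing on the first one.
fun config() = either<NonEmptyList<String>, Env> {
    zipOrAccumulate(
        { env("HOST") },
        { env("PORT").toIntOrNull() ?: raise("PORT is not a valid Int") },
        { env("POSTGRES_URL") },
        ::Env
    )
}
```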
If you run deploy.yaml with the env section commented out, you will see that the ktor-native-server fails to start and reports all the missing environment variables.
We uncomment the env variables, and set up NativePostgres while capturing any exception and turning it into a typed error; otherwise an uncaught exception makes Kotlin/Native hang in Docker.
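A sketch of that pattern; NativePostgres below is only a placeholder for the repo's actual class, and the error type is hypothetical:

```kotlin
import arrow.core.raise.Raise
import arrow.core.raise.catch

// Placeholder standing in for the repo's actual NativePostgres; its real constructor
// opens a connection through the Postgres C API and can throw.
class NativePostgres(url: String)

// Hypothetical typed error for a failed Postgres setup.
data class PostgresError(val reason: String)

fun Raise<PostgresError>.nativePostgres(url: String): NativePostgres =
    catch({ NativePostgres(url) }) { e: Throwable ->
        // An uncaught exception here would make the Kotlin/Native binary hang in Docker,
        // so we convert it into a typed error the caller can report and act on.
        raise(PostgresError(e.message ?: "Failed to connect to Postgres"))
    }
```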
Using NativePostgres, we can now wire our routes to the database, and our final result is a working server. This code can be found on the main branch.
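As a rough illustration of that wiring; the persistence interface, route, and function names are hypothetical, not the repo's actual API:

```kotlin
import io.ktor.server.application.*
import io.ktor.server.response.*
import io.ktor.server.routing.*

// Hypothetical persistence abstraction standing in for the repo's NativePostgres-backed code.
interface EventPersistence {
    suspend fun allEvents(): List<String>
}

fun Application.eventRoutes(persistence: EventPersistence) {
    routing {
        get("/events") {
            // The handler only talks to the abstraction; the NativePostgres connection behind it
            // is acquired and released by the surrounding Resource scope.
            call.respondText(persistence.allEvents().joinToString("\n"))
        }
    }
}
```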