Rust client for Kubernetes in the style of a more generic client-go. It makes certain assumptions about the Kubernetes API to allow writing generic abstractions, and as such contains Rust reinterpretations of Reflector and Informer to allow writing Kubernetes controllers/watchers/operators more easily.
NB: This library is currently undergoing a lot of changes with async/await stabilizing. Please check the CHANGELOG when upgrading.
Select a version of kube along with the generated k8s API types that corresponds to your cluster version:
```toml
[dependencies]
kube = "0.27.0"
k8s-openapi = { version = "0.7.1", default-features = false, features = ["v1_15"] }
```
Note that turning off default-features for k8s-openapi is recommended to speed up your compilation (and we provide an API anyway).
See the examples directory for how to watch over resources in a simplistic way.
See version-rs for a super lightweight (~100 line) actix, prometheus, and deployment API setup.
See controller-rs for a full actix example, with CircleCI and kube yaml.
NB: the actix examples with futures are due for a rewrite for the current version.
The direct Api type takes a client, and is constructed with either the ::all or ::namespaced functions:
```rust
use k8s_openapi::api::core::v1::Pod;
use kube::api::{Api, DeleteParams, PatchParams};
use serde_json::json;

let pods: Api<Pod> = Api::namespaced(client, "default");
let p = pods.get("blog").await?;
println!("Got blog pod with containers: {:?}", p.spec.unwrap().containers);

let patch = json!({"spec": {
    "activeDeadlineSeconds": 5
}});
let pp = PatchParams::default();
let patched = pods.patch("blog", &pp, serde_json::to_vec(&patch)?).await?;
assert_eq!(patched.spec.unwrap().active_deadline_seconds, Some(5));

pods.delete("blog", &DeleteParams::default()).await?;
```
See the examples ending in _api for more detail.
The optional kube::runtime module contains sets of higher level abstractions on top of the Api and Resource types so that you don't have to do all the watch book-keeping yourself.
A basic event watcher that presents a stream of api events on a resource with a given set of ListParams. Events are received as a raw WatchEvent type.

An Informer updates the last received resourceVersion internally on every event, before shipping the event to the app. If your controller restarts, you will receive one event for every active object at startup, before entering a normal watch.
```rust
let r = Resource::all::<Pod>();
let inf = Informer::new(client, r);
```
The main feature of Informer<K> is being able to subscribe to events while having a streaming .poll() open:
```rust
use futures::{StreamExt, TryStreamExt};

let mut pods = inf.poll().await?.boxed(); // starts a watch and returns a stream
while let Some(event) = pods.try_next().await? { // await the next event
    handle(event).await?; // pass the WatchEvent to a handler
}
```
How you handle them is up to you: you could build your own state, use the Api, or just print events. In this example you get complete Pod objects:
```rust
use kube::api::{Meta, WatchEvent};

async fn handle(event: WatchEvent<Pod>) -> anyhow::Result<()> {
    match event {
        WatchEvent::Added(o) => {
            let containers = o.spec.unwrap().containers.into_iter().map(|c| c.name).collect::<Vec<_>>();
            println!("Added Pod: {} (containers={:?})", Meta::name(&o), containers);
        },
        WatchEvent::Modified(o) => {
            let phase = o.status.unwrap().phase.unwrap();
            println!("Modified Pod: {} (phase={})", Meta::name(&o), phase);
        },
        WatchEvent::Deleted(o) => {
            println!("Deleted Pod: {}", Meta::name(&o));
        },
        WatchEvent::Error(e) => {
            println!("Error event: {:?}", e);
        }
    }
    Ok(())
}
```
The node_informer example has an example of using api calls from within event handlers.
A cache for K that keeps itself up to date. It does not expose events, but you can inspect the state map at any time.
```rust
let r = Resource::namespaced::<Node>(&namespace);
let lp = ListParams::default()
    .labels("beta.kubernetes.io/instance-type=m4.2xlarge");
let rf = Reflector::new(client, lp, r);
```
Then you should poll() the reflector, and use state() to get the current cached state:
```rust
rf.poll().await?; // watches + updates state

// Clone state and do something with it
rf.state().await.into_iter().for_each(|node| {
    println!("Found Node {:?}", node);
});
```
Note that poll holds the future for 290s by default, but you can (and should) get .state() from another async context (see the reflector examples for how to spawn an async task to do this). See also the self-driving issue.
If you need the details of just a single object, you can use the more efficient Reflector::get and Reflector::get_within.
Examples that show a few common flows. These all have logging of this library set up to debug, and where possible pick up on the NAMESPACE env var.
```sh
# watch pod events
cargo run --example pod_informer
# watch event events
cargo run --example event_informer
# watch for broken nodes
cargo run --example node_informer
```
or for the reflectors:
```sh
cargo run --example pod_reflector
cargo run --example node_reflector
cargo run --example deployment_reflector
cargo run --example secret_reflector
cargo run --example configmap_reflector
```
For one based on a CRD, you need to create the CRD first:

```sh
kubectl apply -f examples/foo.yaml
cargo run --example crd_reflector
```
then you can kubectl apply -f crd-baz.yaml -n default, kubectl delete -f crd-baz.yaml -n default, or kubectl edit foos baz -n default to verify that the events are being picked up.
For straight API use examples, try:
```sh
cargo run --example crd_api
cargo run --example job_api
cargo run --example log_stream
cargo run --example pod_api
NAMESPACE=dev cargo run --example log_stream -- kafka-manager-7d4f4bd8dc-f6c44
```
Kube has basic support for rustls as a replacement for the openssl dependency. To use this, turn off default features and enable rustls-tls:
```sh
cargo run --example pod_informer --no-default-features --features=rustls-tls
```
or in Cargo.toml:
```toml
[dependencies]
kube = { version = "0.27.0", default-features = false, features = ["rustls-tls"] }
k8s-openapi = { version = "0.7.1", default-features = false, features = ["v1_15"] }
```
This will pull in the variant of reqwest that also uses its rustls-tls feature.
Apache 2.0 licensed. See LICENSE for details.