This section contains advanced information describing the different ways you can run and manage RKE2.
By default, certificates in RKE2 expire in 12 months.
If the certificates are expired or have fewer than 90 days remaining before they expire, the certificates are rotated when RKE2 is restarted.
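For example, restarting the service is enough to trigger rotation of certificates that are expired or within the 90-day window (shown here for a server node; use rke2-agent on agent nodes):
systemctl restart rke2-server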
Any file found in /var/lib/rancher/rke2/server/manifests will automatically be deployed to Kubernetes in a manner similar to kubectl apply.
For information about deploying Helm charts using the manifests directory, refer to the section about Helm.
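As an illustration (the file name and namespace below are arbitrary), dropping a manifest into this directory is all that is needed for it to be applied:
cat <<EOF > /var/lib/rancher/rke2/server/manifests/example-namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: example
EOF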
RKE2 will generate the config.toml for containerd in /var/lib/rancher/rke2/agent/etc/containerd/config.toml.
For advanced customization of this file, you can create another file called config.toml.tmpl in the same directory, and it will be used instead.
The config.toml.tmpl will be treated as a Go template file, and the config.Node structure is passed to the template. See this template for an example of how to use the structure to customize the configuration file.
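If you only need small tweaks, a common starting point (an assumption about your workflow, not a requirement) is to copy the generated file and edit the copy as the template:
cp /var/lib/rancher/rke2/agent/etc/containerd/config.toml \
   /var/lib/rancher/rke2/agent/etc/containerd/config.toml.tmpl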
If you are running RKE2 in an environment that only has external connectivity through an HTTP proxy, you can configure your proxy settings on the RKE2 systemd service. These proxy settings will then be used in RKE2 and passed down to the embedded containerd and kubelet.
Add the necessary HTTP_PROXY, HTTPS_PROXY, and NO_PROXY variables to the environment file of your systemd service, usually:
/etc/default/rke2-server
/etc/default/rke2-agent
The NO_PROXY variable must include your internal networks, as well as the cluster pod and service IP ranges.
HTTP_PROXY=http://your-proxy.example.com:8888
HTTPS_PROXY=http://your-proxy.example.com:8888
NO_PROXY=127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,.svc,.cluster.local
If you want to configure the proxy settings for containerd without affecting RKE2 and the Kubelet, you can prefix the variables with CONTAINERD_:
CONTAINERD_HTTP_PROXY=http://your-proxy.example.com:8888
CONTAINERD_HTTPS_PROXY=http://your-proxy.example.com:8888
CONTAINERD_NO_PROXY=127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,.svc,.cluster.local
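Because systemd only reads the environment file when the service starts, restart the service after editing it for the new proxy settings to take effect, for example:
systemctl restart rke2-server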
RKE2 supports encrypting Secrets at rest, and will do the following automatically:
- Generate an AES-CBC key
- Generate an encryption config file with the generated key:
{
  "kind": "EncryptionConfiguration",
  "apiVersion": "apiserver.config.k8s.io/v1",
  "resources": [
    {
      "resources": [
        "secrets"
      ],
      "providers": [
        {
          "aescbc": {
            "keys": [
              {
                "name": "aescbckey",
                "secret": "xxxxxxxxxxxxxxxxxxx"
              }
            ]
          }
        },
        {
          "identity": {}
        }
      ]
    }
  ]
}
- Pass the config to the kube-apiserver as the encryption-provider-config argument
Once enabled, any created secret will be encrypted with this key. Note that if you disable encryption, any encrypted secrets will not be readable until you enable encryption again using the same key.
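On recent RKE2 releases, a rke2 secrets-encrypt subcommand is available for inspecting and managing this configuration; whether it is present depends on your RKE2 version. A quick status check looks like:
rke2 secrets-encrypt status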
RKE2 agents can be configured with the options node-label and node-taint, which add a label and taint to the kubelet. The two options only add labels and/or taints at registration time; they can only be added once and cannot be removed after that through rke2 commands.
If you want to change node labels and taints after node registration, use kubectl. Refer to the official Kubernetes documentation for details on how to add taints and node labels.
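For example, labels and taints can be supplied at registration time through the config file; the key/value pairs below are purely illustrative:
# /etc/rancher/rke2/config.yaml
node-label:
  - "node-role=worker"
node-taint:
  - "dedicated=ingress:NoSchedule"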
Agent nodes are registered via a websocket connection initiated by the rke2 agent process, and the connection is maintained by a client-side load balancer running as part of the agent process.
Agents register with the server using the cluster secret portion of the join token, along with a randomly generated node-specific password, which is stored on the agent at /etc/rancher/node/password. The server will store the passwords for individual nodes as Kubernetes secrets, and any subsequent attempts must use the same password. Node password secrets are stored in the kube-system namespace with names using the template <host>.node-password.rke2. These secrets are deleted when the corresponding Kubernetes node is deleted.
Note: Prior to RKE2 v1.20.2, servers stored passwords on disk at /var/lib/rancher/rke2/server/cred/node-passwd.
If the /etc/rancher/node directory of an agent is removed, the password file should be recreated for the agent prior to startup, or the entry removed from the server or Kubernetes cluster (depending on the RKE2 version).
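On versions that store node passwords as Kubernetes secrets, removing a stale entry for a node (replace <host> with the node's hostname) might look like:
kubectl delete secret -n kube-system <host>.node-password.rke2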
A unique node ID can be appended to the hostname by launching RKE2 servers or agents using the --with-node-id flag.
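Assuming your RKE2 version supports setting this flag through the config file (boolean flags map to true/false entries), it could look like:
# /etc/rancher/rke2/config.yaml
with-node-id: true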
The installation script provides units for systemd, but does not enable or start the service by default.
When running with systemd, logs will be sent to /var/log/syslog and can be viewed using journalctl -u rke2-server or journalctl -u rke2-agent.
An example of installing with the install script:
curl -sfL https://get.rke2.io | sh -
systemctl enable rke2-server
systemctl start rke2-server
The server charts bundled with rke2, which are deployed during cluster bootstrapping, can be disabled and replaced with alternatives. A common use case is replacing the bundled rke2-ingress-nginx chart with an alternative.
To disable any of the bundled system charts, set the disable parameter in the config file before bootstrapping. The full list of system charts to disable is below:
rke2-canal
rke2-coredns
rke2-ingress-nginx
rke2-kube-proxy
rke2-metrics-server
Note that it is the cluster operator's responsibility to ensure that components are disabled or replaced with care, as the server charts play important roles in cluster operability. Refer to the architecture overview for more information on the individual system charts' roles within the cluster.
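For example, to bootstrap without the bundled ingress controller so it can be replaced with an alternative, the config file entry could look like:
# /etc/rancher/rke2/config.yaml
disable:
  - rke2-ingress-nginx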
In public AWS regions, installing RKE2 with --cloud-provider-name=aws will ensure RKE2 is cloud-enabled and capable of auto-provisioning certain cloud resources.
When installing RKE2 in classified regions (such as SC2S or C2S), there are a few additional prerequisites to be aware of to ensure RKE2 knows how and where to securely communicate with the appropriate AWS endpoints:
- Ensure all the common AWS cloud-provider prerequisites are met. These are independent of regions and are always required.
- Ensure RKE2 knows where to send API requests for the ec2 and elasticloadbalancing services by creating a cloud.conf file. Below is an example for the us-iso-east-1 (C2S) region:
# /etc/rancher/rke2/cloud.conf
[Global]
[ServiceOverride "ec2"]
Service=ec2
Region=us-iso-east-1
URL=https://ec2.us-iso-east-1.c2s.ic.gov
SigningRegion=us-iso-east-1
[ServiceOverride "elasticloadbalancing"]
Service=elasticloadbalancing
Region=us-iso-east-1
URL=https://elasticloadbalancing.us-iso-east-1.c2s.ic.gov
SigningRegion=us-iso-east-1
Alternatively, if you are using private AWS endpoints, ensure the appropriate URL is used for each of the private endpoints.
- Ensure the appropriate AWS CA bundle is loaded into the system's root CA trust store. This may already be done for you depending on the AMI you are using.
# on CentOS/RHEL 7/8
cp <ca.pem> /etc/pki/ca-trust/source/anchors/
update-ca-trust
- Configure RKE2 to use the aws cloud-provider with the custom cloud.conf created above:
# /etc/rancher/rke2/config.yaml
...
cloud-provider-name: aws
cloud-provider-config: "/etc/rancher/rke2/cloud.conf"
...
- Install RKE2 normally (most likely in an airgapped capacity).
- Validate successful installation by confirming the existence of AWS metadata on cluster node labels with:
kubectl get nodes --show-labels
The following options are available under the server sub-command for RKE2. The options allow for specifying CPU and memory requests and limits for the control plane components within RKE2.
--control-plane-resource-requests value (components) Control Plane resource requests [$RKE2_CONTROL_PLANE_RESOURCE_REQUESTS]
--control-plane-resource-limits value (components) Control Plane resource limits [$RKE2_CONTROL_PLANE_RESOURCE_LIMITS]
Values are a comma-delimited list of [controlplane-component]-(cpu|memory)=[desired-value]. The possible values for controlplane-component are:
kube-apiserver
kube-scheduler
kube-controller-manager
kube-proxy
etcd
cloud-controller-manager
Thus, an example --control-plane-resource-requests or --control-plane-resource-limits value may look like:
kube-apiserver-cpu=500m,kube-apiserver-memory=512M,kube-scheduler-cpu=250m,kube-scheduler-memory=512M,etcd-cpu=1000m
The unit values for CPU/memory are identical to Kubernetes resource units (See: Resource Limits in Kubernetes)
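The same values can also be supplied through the config file instead of the CLI flag; a sketch, assuming the usual flag-to-config-key mapping, with illustrative numbers:
# /etc/rancher/rke2/config.yaml
control-plane-resource-requests:
  - kube-apiserver-cpu=500m
  - kube-apiserver-memory=512M
  - kube-scheduler-cpu=250m
  - etcd-cpu=1000m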
The following options are available under the server sub-command for RKE2. These options specify host-path mounting of directories from the node filesystem into the static pod component that corresponds to the prefixed name.
--kube-apiserver-extra-mount value (components) kube-apiserver extra volume mounts [$RKE2_KUBE_APISERVER_EXTRA_MOUNT]
--kube-scheduler-extra-mount value (components) kube-scheduler extra volume mounts [$RKE2_KUBE_SCHEDULER_EXTRA_MOUNT]
--kube-controller-manager-extra-mount value (components) kube-controller-manager extra volume mounts [$RKE2_KUBE_CONTROLLER_MANAGER_EXTRA_MOUNT]
--kube-proxy-extra-mount value (components) kube-proxy extra volume mounts [$RKE2_KUBE_PROXY_EXTRA_MOUNT]
--etcd-extra-mount value (components) etcd extra volume mounts [$RKE2_ETCD_EXTRA_MOUNT]
--cloud-controller-manager-extra-mount value (components) cloud-controller-manager extra volume mounts [$RKE2_CLOUD_CONTROLLER_MANAGER_EXTRA_MOUNT]
/source/volume/path/on/host:/destination/volume/path/in/staticpod
In order to mount a volume as read only, append :ro to the end of the volume mount:
/source/volume/path/on/host:/destination/volume/path/in/staticpod:ro
Multiple volume mounts can be specified for the same component by passing the flag values as an array in the config file.
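A sketch of such a config file entry, with illustrative host paths:
# /etc/rancher/rke2/config.yaml
kube-apiserver-extra-mount:
  - "/tmp/foo:/root/foo"
  - "/tmp/bar:/root/bar:ro"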
The following options are available under the server sub-command for RKE2. These options specify additional environment variables in standard format, i.e. KEY=VALUE, for the static pod component that corresponds to the prefixed name.
--kube-apiserver-extra-env value (components) kube-apiserver extra environment variables [$RKE2_KUBE_APISERVER_EXTRA_ENV]
--kube-scheduler-extra-env value (components) kube-scheduler extra environment variables [$RKE2_KUBE_SCHEDULER_EXTRA_ENV]
--kube-controller-manager-extra-env value (components) kube-controller-manager extra environment variables [$RKE2_KUBE_CONTROLLER_MANAGER_EXTRA_ENV]
--kube-proxy-extra-env value (components) kube-proxy extra environment variables [$RKE2_KUBE_PROXY_EXTRA_ENV]
--etcd-extra-env value (components) etcd extra environment variables [$RKE2_ETCD_EXTRA_ENV]
--cloud-controller-manager-extra-env value (components) cloud-controller-manager extra environment variables [$RKE2_CLOUD_CONTROLLER_MANAGER_EXTRA_ENV]
Multiple environment variables can be specified for the same component by passing the flag values as an array in the config file.
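As with the volume mounts, a sketch of a config file entry with illustrative variable names and values:
# /etc/rancher/rke2/config.yaml
kube-apiserver-extra-env:
  - "FOO=BAR"
  - "BAZ=QUX"
kube-scheduler-extra-env: "TZ=America/Los_Angeles"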