JFrog Pipelines on Kubernetes Helm Chart

Prerequisites Details

  • Kubernetes 1.12+

Chart Details

This chart will do the following:

  • Deploy PostgreSQL (or connect to an external PostgreSQL instance)
  • Deploy RabbitMQ (optionally as an HA cluster)
  • Deploy Redis (optionally as an HA cluster)
  • Deploy Vault (or connect to an external Vault instance)
  • Deploy JFrog Pipelines

Requirements

  • A running Kubernetes cluster
    • Dynamic storage provisioning enabled
    • A default StorageClass set, so that services requesting persistent storage can use it
  • A running Artifactory 7.11.x with an Enterprise+ license
  • Kubectl installed and set up to use the cluster
  • Helm v2 or v3 installed
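To verify the storage requirement, you can check that a default StorageClass is set (this assumes kubectl is already pointed at the target cluster):

```shell
# The default StorageClass is shown with "(default)" next to its name;
# persistent services in this chart rely on it for dynamic provisioning.
kubectl get storageclass

# Confirm the current context points at the intended cluster.
kubectl config current-context
```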

Install JFrog Pipelines

Add JFrog Helm repository

Before installing JFrog Helm charts, add the JFrog Helm repository to your Helm client:

helm repo add jfrog https://charts.jfrog.io
helm repo update

Artifactory Connection Details

Pipelines connects to your Artifactory installation using a join key, so it is MANDATORY to provide a joinKey, jfrogUrl and jfrogUrlUI to your Pipelines installation. Here's how:

Retrieve the connection details of your Artifactory installation, from the UI - https://www.jfrog.com/confluence/display/JFROG/General+Security+Settings#GeneralSecuritySettings-ViewingtheJoinKey.

pipelines:
  ## Artifactory URL - Mandatory
  ## If Artifactory and Pipelines are in the same namespace, jfrogUrl is the Artifactory service name; otherwise it is the external URL of Artifactory
  jfrogUrl: ""
  ## Artifactory UI URL - Mandatory
  ## This must be the external URL of Artifactory, for example: https://artifactory.example.com
  jfrogUrlUI: ""

  ## Join Key to connect to Artifactory
  ## IMPORTANT: You should NOT use the example joinKey for a production deployment!
  joinKey: EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE
  ## Alternatively, you can use a pre-existing secret with a key called join-key by specifying joinKeySecretName
  ## Note: This feature is available on pipelines app version 1.9.x and later
  # joinKeySecretName:

  ## Pipelines requires a unique master key
  ## You can generate one with the command: "openssl rand -hex 32"
  ## IMPORTANT: You should NOT use the example masterKey for a production deployment!
  masterKey: FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
  ## Alternatively, you can use a pre-existing secret with a key called master-key by specifying masterKeySecretName
  ## Note: This feature is available on pipelines app version 1.9.x and later
  # masterKeySecretName:

Install Pipelines Chart with Ingress

Pre-requisites

Before deploying Pipelines, make sure the requirements listed in the Requirements section above are met.

Prepare configurations

Fetch the JFrog Pipelines helm chart to get the needed configuration files

helm fetch jfrog/pipelines --untar

Edit local copies of values-ingress.yaml, values-ingress-passwords.yaml and values-ingress-external-secret.yaml with the needed configuration values

  • URLs in values-ingress.yaml
    • Artifactory URL
    • Ingress hosts
    • Ingress tls secrets
  • In values-ingress-passwords.yaml: the passwords uiUserPassword, postgresqlPassword and auth.password must be set, as well as masterKey and joinKey
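As a sketch, the master key and the random passwords can be generated locally with openssl (the variable names below are illustrative; the values go into values-ingress-passwords.yaml under the keys listed above):

```shell
# Generate a unique master key and random service passwords.
# NOTE: the joinKey is NOT generated here -- it must be copied from your
# Artifactory installation (see "Artifactory Connection Details" above).
MASTER_KEY=$(openssl rand -hex 32)          # 64 hex characters
UI_USER_PASSWORD=$(openssl rand -hex 12)
POSTGRESQL_PASSWORD=$(openssl rand -hex 12)
RABBITMQ_PASSWORD=$(openssl rand -hex 12)
echo "masterKey: ${MASTER_KEY}"
```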

Install JFrog Pipelines

kubectl create ns pipelines
helm upgrade --install pipelines --namespace pipelines jfrog/pipelines -f pipelines/values-ingress.yaml -f pipelines/values-ingress-passwords.yaml

Special Upgrade Notes

When upgrading from Pipelines chart version 1.x to 2.x and above, there are breaking changes in the rabbitmq subchart (6.x to 7.x chart version, when rabbitmq.enabled=true) and the postgresql subchart (8.x to 9.x chart version, when postgresql.enabled=true). Run the manual commands below; downtime is required.

Note: Make sure all existing Pipelines build runs are completed (RabbitMQ queues are empty) before you start the upgrade.

Important: This is a breaking change from 6.x to 7.x (chart versions) of the RabbitMQ chart - please refer here.

The RabbitMQ password configuration in values.yaml has changed from rabbitmq.rabbitmq.password to rabbitmq.auth.password.
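One way to confirm the RabbitMQ queues are drained before starting (a sketch, assuming the bundled RabbitMQ subchart with its default pod naming and the pipelines build vhost):

```shell
# Every queue should report 0 messages before the upgrade begins.
kubectl --namespace <namespace> exec <release_name>-rabbitmq-0 -- \
    rabbitmqctl list_queues -p pipelines name messages
```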

kubectl --namespace <namespace> delete statefulsets <release_name>-pipelines-services
kubectl --namespace <namespace> delete statefulsets <release_name>-pipelines-vault
kubectl --namespace <namespace> delete statefulsets <release_name>-postgresql
kubectl --namespace <namespace> delete statefulsets <release_name>-rabbitmq
kubectl --namespace <namespace> delete pvc data-<release_name>-rabbitmq-0
helm upgrade --install pipelines --namespace <namespace> jfrog/pipelines

Use external secret

Note: Best practice is to use external secrets instead of storing passwords in values.yaml files.

Don't forget to update URLs in values-ingress-external-secret.yaml file.

Fill in all required passwords, masterKey and joinKey in values-ingress-passwords.yaml and then create and install the external secret.

Note: The Helm release name used for secrets generation and for helm install must be the same; in this case it is pipelines.

With Helm v2:

## Generate pipelines-system-yaml secret
helm template --name-template pipelines pipelines/ -x templates/pipelines-system-yaml.yaml \
    -f pipelines/values-ingress-external-secret.yaml -f pipelines/values-ingress-passwords.yaml | kubectl apply --namespace pipelines -f -

## Generate pipelines-database secret
helm template --name-template pipelines pipelines/ -x templates/database-secret.yaml \
    -f pipelines/values-ingress-passwords.yaml | kubectl apply --namespace pipelines -f -

## Generate pipelines-rabbitmq-secret secret
helm template --name-template pipelines pipelines/ -x templates/rabbitmq-secret.yaml \
    -f pipelines/values-ingress-passwords.yaml | kubectl apply --namespace pipelines -f -

With Helm v3:

## Generate pipelines-system-yaml secret
helm template --name-template pipelines pipelines/ -s templates/pipelines-system-yaml.yaml \
    -f pipelines/values-ingress-external-secret.yaml -f pipelines/values-ingress-passwords.yaml | kubectl apply --namespace pipelines -f -

## Generate pipelines-database secret
helm template --name-template pipelines pipelines/ -s templates/database-secret.yaml \
    -f pipelines/values-ingress-passwords.yaml | kubectl apply --namespace pipelines -f -

## Generate pipelines-rabbitmq-secret secret
helm template --name-template pipelines pipelines/ -s templates/rabbitmq-secret.yaml \
    -f pipelines/values-ingress-passwords.yaml | kubectl apply --namespace pipelines -f -

Install JFrog Pipelines:

helm upgrade --install pipelines --namespace pipelines jfrog/pipelines -f values-ingress-external-secret.yaml

Using external RabbitMQ

If you want to use an external RabbitMQ, set rabbitmq.enabled=false and create values-external-rabbitmq.yaml with the YAML configuration below.

rabbitmq:
  enabled: false
  internal_ip: "{{ .Release.Name }}-rabbitmq"
  msg_hostname: "{{ .Release.Name }}-rabbitmq"
  port: 5672
  manager_port: 15672
  ms_username: admin
  ms_password: password
  cp_username: admin
  cp_password: password
  build_username: admin
  build_password: password    
  root_vhost_exchange_name: rootvhost
  erlang_cookie: secretcookie
  build_vhost_name: pipelines
  root_vhost_name: pipelinesRoot
  protocol: amqp

Then install with:

helm upgrade --install pipelines --namespace pipelines jfrog/pipelines -f values-external-rabbitmq.yaml

Using external PostgreSQL

If you want to use an external PostgreSQL, set postgresql.enabled=false and create values-external-postgresql.yaml with the YAML configuration below.

global:
  # Internal Postgres must be set to false
  postgresql:
    user: db_username
    password: db_user_password
    host: db_host
    port: 5432
    database: db_name
    ssl: false  # or true
postgresql:
  enabled: false

Make sure the user db_username and the database db_name exist before running helm install / upgrade.
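A sketch of creating the role and database on the external instance with psql (db_host, db_username, db_user_password and db_name are the placeholders from the configuration above; replace them with your own values):

```shell
# Create the Pipelines database user and database on the external
# PostgreSQL server before installing the chart.
psql -h db_host -U postgres <<'SQL'
CREATE USER db_username WITH PASSWORD 'db_user_password';
CREATE DATABASE db_name WITH OWNER db_username ENCODING 'UTF8';
GRANT ALL PRIVILEGES ON DATABASE db_name TO db_username;
SQL
```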

helm upgrade --install pipelines --namespace pipelines jfrog/pipelines -f values-external-postgresql.yaml

Using external Vault

If you want to use an external Vault, set vault.enabled=false and create values-external-vault.yaml with the YAML configuration below.

vault:
  enabled: false

global:
  vault:
    ## Vault url examples
    # external one: https://vault.example.com
    # internal one running in the same Kubernetes cluster: http://vault-active:8200
    url: vault_url
    token: vault_token
    ## Set Vault token using existing secret
    # existingSecret: vault-secret

If you store the external Vault token in a pre-existing Kubernetes Secret, you can reference it via existingSecret.

To create a secret containing the Vault token:

kubectl create secret generic vault-secret --from-literal=token=${VAULT_TOKEN}

Then install with:

helm upgrade --install pipelines --namespace pipelines jfrog/pipelines -f values-external-vault.yaml

Using an external systemYaml with existingSecret

This is for advanced use cases where users want to provide their own systemYaml for configuring Pipelines. Refer to https://www.jfrog.com/confluence/display/JFROG/Pipelines+System+YAML.

Note: This will override the existing systemYaml in values.yaml.

systemYamlOverride:
  ## You can use a pre-existing secret by specifying existingSecret
  existingSecret:
  ## The dataKey should be the name of the secret data key created.
  dataKey:
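A sketch of creating such a secret from a local system.yaml file (the secret name systemyaml and the data key system.yaml are examples; they must match the existingSecret and dataKey values set above):

```shell
# Create the secret holding the full systemYaml in the release namespace.
kubectl --namespace pipelines create secret generic systemyaml \
    --from-file=system.yaml=./system.yaml
```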

Note: From chart version 2.2.0 and above, .Values.existingSecret is changed to .Values.systemYaml.existingSecret and .Values.systemYaml.dataKey.

Note: From chart version 2.3.7 and above, .Values.systemYaml is changed to .Values.systemYamlOverride.

helm upgrade --install pipelines --namespace pipelines jfrog/pipelines -f values-external-systemyaml.yaml

Using Vault in Production environments

To use Vault securely, you must set the disablemlock setting in values.yaml to false, per the HashiCorp Vault recommendations here:

https://www.vaultproject.io/docs/configuration#disable_mlock

For non-production environments it is acceptable to leave this value set to true.

Note, however, that this opens a potential security issue: encrypted credentials could be swapped onto an unencrypted disk.

For this reason we recommend you always set this value to false to ensure mlock is enabled.

Non-Prod environments:

vault:
  disablemlock: true

Production environments:

vault:
  disablemlock: false

Status

See the status of deployed helm release:

With Helm v2:

helm status pipelines

With Helm v3:

helm status pipelines --namespace pipelines

Pipelines Version

  • By default, the Pipelines images use the appVersion value in Chart.yaml. This can be overridden by adding version to the pipelines section of values.yaml

Build Plane

Build Plane with static and dynamic node-pool VMs

To start using Pipelines, you need to set up a Build Plane:

Establishing TLS and Adding certificates

Create trust between the nodes by copying the ca.crt from the Artifactory server under $JFROG_HOME/artifactory/var/etc/access/keys to each of the nodes you would like to set trust with, under $JFROG_HOME//var/etc/security/keys/trusted. For more details, please refer here.

More than one certificate can be present in the trusted directory. For example, the Pipelines API URL may be configured behind a load balancer that is set up with custom certificates; those certificates are needed in the trusted folder as well, since build nodes talk to the Pipelines API over the load balancer endpoint.

The NODE_EXTRA_CA_CERTS env is added when the customer is using custom certificates. Pipelines looks through all the certificates present in the trusted folder and concatenates them into a single file called pipeline_custom_certs.crt, which is then passed as an env.

TLS certificates can be added using a Kubernetes secret. The secret should be created outside of this chart and provided via .Values.pipelines.customCertificates.certificateSecretName. Please refer to the example below.

kubectl create secret generic ca-cert --from-file=ca.crt=ca.crt

Then pass it to the Helm installation:

pipelines:
  customCertificates:
    enabled: true
    certificateSecretName: ca-cert
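Assuming the values above are saved to a local file (the file name values-custom-certs.yaml is an example), the install would then look like:

```shell
helm upgrade --install pipelines --namespace pipelines jfrog/pipelines \
    -f values-custom-certs.yaml
```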

Useful links