
JFrog Xray HA on Kubernetes Helm Chart

Prerequisites Details

  • Kubernetes 1.8+

Chart Details

This chart will do the following:

  • Optionally deploy PostgreSQL and MongoDB
  • Deploy RabbitMQ (optionally as an HA cluster)
  • Deploy JFrog Xray micro-services

Requirements

  • A running Kubernetes cluster
    • Dynamic storage provisioning enabled
    • A default StorageClass set so that services can use it for persistent storage (see the check after this list)
  • A running Artifactory
  • Kubectl installed and set up to use the cluster
  • Helm installed and set up to use the cluster (helm init)
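
You can quickly verify these requirements from your workstation. A minimal check, assuming kubectl and helm are already pointing at the target cluster:

# Look for "(default)" next to one of the StorageClasses in the output
kubectl get storageclass

# Confirm both the helm client and the in-cluster tiller respond
helm version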

Install JFrog Xray

Add JFrog Helm repository

Before installing JFrog helm charts, you need to add the JFrog helm repository to your helm client

helm repo add jfrog https://charts.jfrog.io
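
If the JFrog repository was already added in the past, refresh your local index to pick up the latest chart versions

helm repo update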

Install Chart

Install JFrog Xray

helm install -n xray jfrog/xray

Status

See the status of your deployed helm releases

helm status xray

Upgrade

To upgrade an existing Xray release, use helm upgrade

# Update the existing release named xray to version 2.1.2
helm upgrade xray jfrog/xray --set common.xrayVersion=2.1.2

If Xray was installed without providing a value to postgresql.postgresPassword (a password was autogenerated), follow these instructions:

  1. Get the current password by running:
POSTGRES_PASSWORD=$(kubectl get secret -n <namespace> <myrelease>-postgresql -o jsonpath="{.data.postgres-password}" | base64 --decode)
  2. Upgrade the release by passing the previously auto-generated secret:
helm upgrade <myrelease> jfrog/xray --set postgresql.postgresPassword=${POSTGRES_PASSWORD}

If Xray was installed without providing a value to rabbitmq.rabbitmqPassword (or rabbitmq-ha.rabbitmqPassword when using the rabbitmq-ha chart), meaning a password was autogenerated, follow these instructions:

  1. Get the current password by running:
RABBITMQ_PASSWORD=$(kubectl get secret -n <namespace> <myrelease>-rabbitmq -o jsonpath="{.data.rabbitmq-password}" | base64 --decode)
  2. Upgrade the release by passing the previously auto-generated secret (use rabbitmq-ha.rabbitmqPassword instead when running the rabbitmq-ha chart):
helm upgrade <myrelease> jfrog/xray --set rabbitmq.rabbitmqPassword=${RABBITMQ_PASSWORD}

If Xray was installed without providing a value to mongodb.mongodbPassword and mongodb.mongodbRootPassword (a password was autogenerated), follow these instructions:

  1. Get the current password by running:
MONGODB_ROOT_PASSWORD=$(kubectl get secret -n <namespace> <myrelease>-mongodb -o jsonpath="{.data.mongodb-root-password}" | base64 --decode)
MONGODB_PASSWORD=$(kubectl get secret -n <namespace> <myrelease>-mongodb -o jsonpath="{.data.mongodb-password}" | base64 --decode)
  2. Upgrade the release by passing the previously auto-generated secret:
helm upgrade <myrelease> jfrog/xray --set mongodb.mongodbRootPassword=${MONGODB_ROOT_PASSWORD} --set mongodb.mongodbPassword=${MONGODB_PASSWORD}

If Xray was installed with all of the default values (i.e. no user-provided values for mongodb/rabbitmq/postgresql), follow these steps:

  1. Retrieve all current passwords (rabbitmq/postgresql/mongodb) as explained in the sections above.
  2. Upgrade the release by passing the previously auto-generated secrets:
helm upgrade xray jfrog/xray --set mongodb.mongodbRootPassword=<mongo-root-password> --set mongodb.mongodbPassword=<mongo-password> --set rabbitmq-ha.rabbitmqPassword=<rabbit-password> --set postgresql.postgresPassword=<postgres-password>

Remove

Removing a helm release is done with

# Remove the Xray services and data tools
helm delete --purge xray

# Remove the data disks
kubectl delete pvc -l release=xray
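
You can confirm the claims are gone using the same label selector

# Should return no resources once cleanup is complete
kubectl get pvc -l release=xray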

Create a unique Master Key

JFrog Xray requires a unique master key to be used by all micro-services in the same cluster. By default the chart has one set in values.yaml (common.masterKey).

This key is for demo purposes and should not be used in a production environment!

You should generate a unique one and pass it to the template at install/upgrade time.

# Create a key
export MASTER_KEY=$(openssl rand -hex 32)
echo ${MASTER_KEY}

# Pass the created master key to helm
helm install --set common.masterKey=${MASTER_KEY} -n xray jfrog/xray

NOTE: Make sure to pass the same master key with --set common.masterKey=${MASTER_KEY} on all future calls to helm install and helm upgrade!
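
One simple way to do that is to keep the generated key in a local file and read it back on every helm call. A minimal sketch; the file path is only an example:

# Save the key once (example path)
echo ${MASTER_KEY} > ~/.xray-master-key

# Reuse it on later installs/upgrades
helm upgrade xray jfrog/xray --set common.masterKey=$(cat ~/.xray-master-key)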

Special deployments

This is a list of special use cases for non-standard deployments

High Availability

For high availability of Xray, set the replica count of each service to 2 or higher. The recommended value is 3.

It is highly recommended to also set RabbitMQ to run as an HA cluster.

# Start Xray with 3 replicas per service and 3 replicas for RabbitMQ
helm install -n xray --set analysis.replicaCount=3,server.replicaCount=3,indexer.replicaCount=3,persist.replicaCount=3,rabbitmq-ha.replicaCount=3 jfrog/xray
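
The same settings can be kept in a values file, which is easier to maintain than a long --set string. An equivalent sketch; the file name ha-values.yaml is only an example:

# ha-values.yaml
analysis:
  replicaCount: 3
server:
  replicaCount: 3
indexer:
  replicaCount: 3
persist:
  replicaCount: 3
rabbitmq-ha:
  replicaCount: 3

# Install using the values file
helm install -n xray -f ha-values.yaml jfrog/xray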

External Databases

There is an option to use external database services (MongoDB or PostgreSQL) for your Xray.

MongoDB

To use an external MongoDB, you need to disable the bundled MongoDB and set a custom MongoDB connection URL.

For this, pass the parameters: mongodb.enabled=false and global.mongoUrl=${XRAY_MONGODB_CONN_URL}.

IMPORTANT: Make sure the DB is already created before deploying Xray services

# Passing a custom MongoDB to Xray

# Example
# MongoDB host: custom-mongodb.local
# MongoDB port: 27017
# MongoDB user: xray
# MongoDB password: password1_X

# Set the connection details from the example above.
# The database name xray is this chart's default (mongodb.mongodbDatabase).
export MONGODB_USER=xray
export MONGODB_PASSWORD=password1_X
export MONGODB_DATABASE=xray

# Use double quotes so the shell expands the variables
export XRAY_MONGODB_CONN_URL="mongodb://${MONGODB_USER}:${MONGODB_PASSWORD}@custom-mongodb.local:27017/?authSource=${MONGODB_DATABASE}&authMechanism=SCRAM-SHA-1"
helm install -n xray \
    --set mongodb.enabled=false \
    --set global.mongoUrl="${XRAY_MONGODB_CONN_URL}" \
    jfrog/xray
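
Before deploying, you can test the connection URL from inside the cluster with a throwaway pod running the mongo shell. A hedged sketch; the pod name and image tag are only examples:

kubectl run mongodb-check --rm -it --restart=Never --image=mongo:3.6 -- \
    mongo "${XRAY_MONGODB_CONN_URL}" --eval 'db.runCommand({ping: 1})'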

PostgreSQL

To use an external PostgreSQL, you need to disable the use of the bundled PostgreSQL and set a custom PostgreSQL connection URL.

For this, pass the parameters: postgresql.enabled=false and global.postgresqlUrl=${XRAY_POSTGRESQL_CONN_URL}.

IMPORTANT: Make sure the DB is already created before deploying Xray services

# Passing a custom PostgreSQL to Xray

# Example
# PostgreSQL host: custom-postgresql.local
# PostgreSQL port: 5432
# PostgreSQL user: xray
# PostgreSQL password: password2_X

# Set the connection details from the example above.
# The database name xraydb is this chart's default (postgresql.postgresDatabase).
export POSTGRESQL_USER=xray
export POSTGRESQL_PASSWORD=password2_X
export POSTGRESQL_DATABASE=xraydb

# Use double quotes so the shell expands the variables
export XRAY_POSTGRESQL_CONN_URL="postgres://${POSTGRESQL_USER}:${POSTGRESQL_PASSWORD}@custom-postgresql.local:5432/${POSTGRESQL_DATABASE}?sslmode=disable"
helm install -n xray \
    --set postgresql.enabled=false \
    --set global.postgresqlUrl="${XRAY_POSTGRESQL_CONN_URL}" \
    jfrog/xray
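
As with MongoDB, you can verify the URL from inside the cluster before deploying. A hedged sketch; the pod name and image tag are only examples:

kubectl run postgresql-check --rm -it --restart=Never --image=postgres:9.6 -- \
    psql "${XRAY_POSTGRESQL_CONN_URL}" -c 'SELECT 1;'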

Logger sidecars

This chart provides the option to add sidecars to tail various logs from Xray containers. See the available values in values.yaml

Get list of containers in the pod

kubectl get pods -n <NAMESPACE> <POD_NAME> -o jsonpath='{.spec.containers[*].name}' | tr ' ' '\n'

View specific log

kubectl logs -n <NAMESPACE> <POD_NAME> -c <LOG_CONTAINER_NAME>
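
To stream a log continuously instead of printing a one-time snapshot, add the -f (follow) flag

kubectl logs -f -n <NAMESPACE> <POD_NAME> -c <LOG_CONTAINER_NAME>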

Custom init containers

There are cases where a special, unsupported init process is needed, like checking something on the file system or testing something before spinning up the main container.

For this, there is a section for writing a custom init container in values.yaml. By default it's commented out

common:
  ## Add custom init containers
  customInitContainers: |
    ## Init containers template goes here ##
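
As an illustration only (not part of the chart's defaults), a filled-in block could look like the following. It assumes the template is rendered into a standard Kubernetes initContainers list; the name, image, and command are hypothetical:

common:
  ## Hypothetical example: run a simple check before the main container starts
  customInitContainers: |
    - name: "custom-check"
      image: "alpine:3.6"
      command: ["sh", "-c", "echo 'running custom init check'"]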

Configuration

The following table lists the configurable parameters of the xray chart and their default values.

| Parameter | Description | Default |
|-----------|-------------|---------|
| imagePullSecrets | Docker registry pull secret | |
| imagePullPolicy | Container pull policy | IfNotPresent |
| initContainerImage | Init container image | alpine:3.6 |
| serviceAccount.create | Specifies whether a ServiceAccount should be created | true |
| serviceAccount.name | The name of the ServiceAccount to create | Generated using the fullname template |
| rbac.create | Specifies whether RBAC resources should be created | true |
| rbac.role.rules | Rules to create | [] |
| ingress.enabled | If true, Xray Ingress will be created | false |
| ingress.annotations | Xray Ingress annotations | {} |
| ingress.hosts | Xray Ingress hostnames | [] |
| ingress.tls | Xray Ingress TLS configuration (YAML) | [] |
| ingress.defaultBackend.enabled | If true, the default backend will be added using serviceName and servicePort | true |
| ingress.labels | Xray Ingress labels | {} |
| postgresql.enabled | Use the bundled PostgreSQL as database | true |
| postgresql.postgresDatabase | PostgreSQL database name | xraydb |
| postgresql.postgresUser | PostgreSQL database user | xray |
| postgresql.postgresPassword | PostgreSQL database password | |
| postgresql.postgresConfig.maxConnections | PostgreSQL max_connections parameter | 500 |
| postgresql.persistence.enabled | PostgreSQL use persistent storage | true |
| postgresql.persistence.size | PostgreSQL persistent storage size | 50Gi |
| postgresql.persistence.existingClaim | PostgreSQL use existing persistent storage | |
| postgresql.service.port | PostgreSQL database port | 5432 |
| postgresql.resources.requests.memory | PostgreSQL initial memory request | |
| postgresql.resources.requests.cpu | PostgreSQL initial cpu request | |
| postgresql.resources.limits.memory | PostgreSQL memory limit | |
| postgresql.resources.limits.cpu | PostgreSQL cpu limit | |
| postgresql.nodeSelector | PostgreSQL node selector | {} |
| postgresql.affinity | PostgreSQL node affinity | {} |
| postgresql.tolerations | PostgreSQL node tolerations | [] |
| mongodb.enabled | Enable MongoDB | true |
| mongodb.image.tag | MongoDB docker image tag | 3.6.3 |
| mongodb.image.pullPolicy | MongoDB container pull policy | IfNotPresent |
| mongodb.persistence.enabled | MongoDB persistence volume enabled | true |
| mongodb.persistence.existingClaim | Use an existing PVC to persist data | nil |
| mongodb.persistence.storageClass | Storage class of backing PVC | generic |
| mongodb.persistence.size | MongoDB persistence volume size | 50Gi |
| mongodb.livenessProbe.initialDelaySeconds | MongoDB delay before liveness probe is initiated | |
| mongodb.readinessProbe.initialDelaySeconds | MongoDB delay before readiness probe is initiated | |
| mongodb.mongodbExtraFlags | MongoDB additional command line flags | ["--wiredTigerCacheSizeGB=1"] |
| mongodb.mongodbDatabase | MongoDB database for Xray | xray |
| mongodb.mongodbRootPassword | MongoDB database password for root user | |
| mongodb.mongodbUsername | MongoDB database Xray user | admin |
| mongodb.mongodbPassword | MongoDB database password for Xray user | |
| mongodb.nodeSelector | MongoDB node selector | {} |
| mongodb.affinity | MongoDB node affinity | {} |
| mongodb.tolerations | MongoDB node tolerations | [] |
| rabbitmq.enabled | If true, the standalone rabbitmq chart is used | false |
| rabbitmq.rabbitmqErlangCookie | RabbitMQ Erlang cookie | XRAYRABBITMQCLUSTER |
| rabbitmq.rabbitmqMemoryHighWatermark | RabbitMQ memory high watermark | 500MB |
| rabbitmq.rabbitmqUsername | RabbitMQ application username | user |
| rabbitmq.rabbitmqNodePort | RabbitMQ node port | 5672 |
| rabbitmq.persistentVolume.enabled | If true, persistent volume claims are created | true |
| rabbitmq.persistentVolume.size | RabbitMQ persistent volume size | 20Gi |
| rabbitmq.rbac.create | If true, create & use RBAC resources | true |
| rabbitmq-ha.enabled | If true, the rabbitmq-ha chart is used | true |
| rabbitmq-ha.replicaCount | Number of RabbitMQ replicas | 1 |
| rabbitmq-ha.rabbitmqUsername | RabbitMQ application username | guest |
| rabbitmq-ha.rabbitmqPassword | RabbitMQ application password | |
| rabbitmq-ha.rabbitmqErlangCookie | RabbitMQ Erlang cookie | XRAYRABBITMQCLUSTER |
| rabbitmq-ha.rabbitmqMemoryHighWatermark | RabbitMQ memory high watermark | 500MB |
| rabbitmq-ha.persistentVolume.enabled | If true, persistent volume claims are created | true |
| rabbitmq-ha.persistentVolume.size | RabbitMQ persistent volume size | 20Gi |
| rabbitmq-ha.rbac.create | If true, create & use RBAC resources | true |
| rabbitmq-ha.nodeSelector | RabbitMQ node selector | {} |
| rabbitmq-ha.tolerations | RabbitMQ node tolerations | [] |
| logger.image.repository | Repository for logger image | busybox |
| logger.image.tag | Tag for logger image | 1.30 |
| common.xrayVersion | Xray image tag | .Chart.AppVersion |
| common.xrayConfigPath | Xray config path | /var/opt/jfrog/xray/data |
| common.xrayUserId | Xray user id | 1035 |
| common.xrayGroupId | Xray group id | 1035 |
| common.masterKey | Xray master key (can be generated with openssl rand -hex 32) | FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF |
| common.customInitContainers | Custom init containers | |
| common.xrayConfig | Additional xray yaml configuration to be written to the xray_config.yaml file | `` |
| global.mongoUrl | Xray external MongoDB URL | |
| global.postgresqlUrl | Xray external PostgreSQL URL | |
| analysis.name | Xray Analysis name | xray-analysis |
| analysis.image | Xray Analysis container image | docker.bintray.io/jfrog/xray-analysis |
| analysis.replicaCount | Xray Analysis replica count | 1 |
| analysis.updateStrategy | Xray Analysis update strategy | RollingUpdate |
| analysis.podManagementPolicy | Xray Analysis pod management policy | Parallel |
| analysis.internalPort | Xray Analysis internal port | 7000 |
| analysis.externalPort | Xray Analysis external port | 7000 |
| analysis.service.type | Xray Analysis service type | ClusterIP |
| analysis.persistence.size | Xray Analysis storage size limit | 10Gi |
| analysis.resources | Xray Analysis resources | {} |
| analysis.loggers | Xray Analysis loggers (see values.yaml for possible values) | |
| analysis.nodeSelector | Xray Analysis node selector | {} |
| analysis.affinity | Xray Analysis node affinity | {} |
| analysis.tolerations | Xray Analysis node tolerations | [] |
| indexer.name | Xray Indexer name | xray-indexer |
| indexer.image | Xray Indexer container image | docker.bintray.io/jfrog/xray-indexer |
| indexer.replicaCount | Xray Indexer replica count | 1 |
| indexer.updateStrategy | Xray Indexer update strategy | RollingUpdate |
| indexer.podManagementPolicy | Xray Indexer pod management policy | Parallel |
| indexer.internalPort | Xray Indexer internal port | 7002 |
| indexer.externalPort | Xray Indexer external port | 7002 |
| indexer.service.type | Xray Indexer service type | ClusterIP |
| indexer.persistence.existingClaim | Provide an existing PersistentVolumeClaim | nil |
| indexer.persistence.storageClass | Storage class of backing PVC | nil (uses default storage class annotation) |
| indexer.persistence.enabled | Xray Indexer persistence volume enabled | false |
| indexer.persistence.accessMode | Xray Indexer persistence volume access mode | ReadWriteOnce |
| indexer.persistence.size | Xray Indexer persistence volume size | 50Gi |
| indexer.resources | Xray Indexer resources | {} |
| indexer.loggers | Xray Indexer loggers (see values.yaml for possible values) | |
| indexer.nodeSelector | Xray Indexer node selector | {} |
| indexer.affinity | Xray Indexer node affinity | {} |
| indexer.tolerations | Xray Indexer node tolerations | [] |
| persist.name | Xray Persist name | xray-persist |
| persist.image | Xray Persist container image | docker.bintray.io/jfrog/xray-persist |
| persist.replicaCount | Xray Persist replica count | 1 |
| persist.updateStrategy | Xray Persist update strategy | RollingUpdate |
| persist.podManagementPolicy | Xray Persist pod management policy | Parallel |
| persist.internalPort | Xray Persist internal port | 7003 |
| persist.externalPort | Xray Persist external port | 7003 |
| persist.service.type | Xray Persist service type | ClusterIP |
| persist.persistence.size | Xray Persist storage size limit | 10Gi |
| persist.loggers | Xray Persist loggers (see values.yaml for possible values) | |
| persist.resources | Xray Persist resources | {} |
| persist.nodeSelector | Xray Persist node selector | {} |
| persist.affinity | Xray Persist node affinity | {} |
| persist.tolerations | Xray Persist node tolerations | [] |
| server.name | Xray server name | xray-server |
| server.image | Xray server container image | docker.bintray.io/jfrog/xray-server |
| server.replicaCount | Xray server replica count | 1 |
| server.updateStrategy | Xray server update strategy | RollingUpdate |
| server.podManagementPolicy | Xray server pod management policy | Parallel |
| server.internalPort | Xray server internal port | 8000 |
| server.externalPort | Xray server external port | 80 |
| server.service.name | Xray server service name | xray |
| server.service.type | Xray server service type | LoadBalancer |
| server.service.annotations | Xray server service annotations | {} |
| server.persistence.existingClaim | Provide an existing PersistentVolumeClaim | nil |
| server.persistence.storageClass | Storage class of backing PVC | nil (uses default storage class annotation) |
| server.persistence.enabled | Xray server persistence volume enabled | false |
| server.persistence.accessMode | Xray server persistence volume access mode | ReadWriteOnce |
| server.persistence.size | Xray server persistence volume size | 50Gi |
| server.loggers | Xray server loggers (see values.yaml for possible values) | |
| server.resources | Xray server resources | {} |
| server.nodeSelector | Xray server node selector | {} |
| server.affinity | Xray server node affinity | {} |
| server.tolerations | Xray server node tolerations | [] |

Specify each parameter using the --set key=value[,key=value] argument to helm install.
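
Alternatively, provide a YAML file that overrides any of the above values via helm's -f flag. Assuming your overrides live in a file named my-values.yaml:

helm install -n xray -f my-values.yaml jfrog/xray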

Ingress and TLS

To get Helm to create an ingress object with a hostname, add these options to your Helm command:

helm install --name xray \
  --set ingress.enabled=true \
  --set ingress.hosts[0]="xray.company.com" \
  --set server.service.type=NodePort \
  jfrog/xray

If your cluster allows automatic creation/retrieval of TLS certificates (e.g. cert-manager), please refer to the documentation for that mechanism.

To manually configure TLS, first create/retrieve a key & certificate pair for the address(es) you wish to protect. Then create a TLS secret in the namespace:

kubectl create secret tls xray-tls --cert=path/to/tls.cert --key=path/to/tls.key

Include the secret's name, along with the desired hostnames, in the Xray Ingress TLS section of your custom values.yaml file:

  ingress:
    ## If true, Xray Ingress will be created
    ##
    enabled: true

    ## Xray Ingress hostnames
    ## Must be provided if Ingress is enabled
    ##
    hosts:
      - xray.domain.com
    annotations:
      kubernetes.io/tls-acme: "true"
    ## Xray Ingress TLS configuration
    ## Secrets must be manually created in the namespace
    ##
    tls:
      - secretName: xray-tls
        hosts:
          - xray.domain.com
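
Then pass your custom values file at install time (the file name custom-values.yaml here stands for your own file)

helm install --name xray -f custom-values.yaml jfrog/xray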

Useful links