This chart does the following:
- Deploy PostgreSQL database.
- Deploy Redis.
- Deploy distributor.
- Deploy distribution.

Requirements:
- Kubernetes 1.12+
- A running Kubernetes cluster
- Dynamic storage provisioning enabled
- Default StorageClass set to allow services using the default StorageClass for persistent storage
- A running Artifactory Enterprise Plus
- Kubectl installed and set up to use the cluster
- Helm v2 or v3 installed
Before installing JFrog helm charts, you need to add the ChartCenter helm repository to your helm client.
```bash
helm repo add center https://repo.chartcenter.io
helm repo update
```
In order to connect Distribution to your Artifactory installation, you have to use a join key, hence it is MANDATORY to provide a join key and JFrog URL to your Distribution installation. Here's how you do that:
Retrieve the connection details of your Artifactory installation from the UI - https://www.jfrog.com/confluence/display/JFROG/General+Security+Settings#GeneralSecuritySettings-ViewingtheJoinKey.
Provide the join key and JFrog URL as parameters to the Distribution chart installation:

```bash
helm upgrade --install distribution --set distribution.joinKey=<YOUR_PREVIOUSLY_RETRIEVED_JOIN_KEY> \
  --set distribution.jfrogUrl=<YOUR_PREVIOUSLY_RETRIEVED_BASE_URL> --namespace distribution center/jfrog/distribution
```
Alternatively, you can create a secret containing the join key manually and pass it to the template at install/upgrade time.
```bash
# Create a secret containing the key. The key in the secret must be named join-key
kubectl create secret generic my-secret --from-literal=join-key=<YOUR_PREVIOUSLY_RETRIEVED_JOIN_KEY>

# Pass the created secret to helm
helm upgrade --install distribution --set distribution.joinKeySecretName=my-secret --namespace distribution center/jfrog/distribution
```
NOTE: In either case, make sure to pass the same join key on all future calls to helm install and helm upgrade! In the first case, this means always passing --set distribution.joinKey=<YOUR_PREVIOUSLY_RETRIEVED_JOIN_KEY>; in the second, always passing --set distribution.joinKeySecretName=my-secret and ensuring the contents of the secret remain unchanged.
Upgrading Distribution from 1.x to 2.x (App Version) is not directly supported. For a manual upgrade, please refer here. If this is an upgrade over an existing Distribution 2.x (App Version), explicitly pass --set unifiedUpgradeAllowed=true to upgrade.
Distribution uses a common system configuration file - system.yaml. See the official documentation on its usage.
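If you want to inspect the system.yaml the chart rendered, a minimal sketch is shown below. It assumes the default persistence mountPath (/var/opt/jfrog/distribution) and a main container named distribution; verify both against your values.yaml and the pod spec before relying on them.

```bash
# A minimal sketch - the path and container name are assumptions based on the chart defaults;
# adjust them if your values.yaml overrides distribution.persistence.mountPath.
kubectl exec -n <NAMESPACE> <POD_NAME> -c distribution -- \
  cat /var/opt/jfrog/distribution/etc/system.yaml
```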
In the chart directory, we have added three values files, one for each installation type - small/medium/large. These values files are recommendations for setting resource requests and limits for your installation. The values are derived from the following documentation. You can find them in the corresponding chart directory - values-small.yaml, values-medium.yaml and values-large.yaml.
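If you work from a local copy of the chart, you can pass one of these files with -f, for example:

```bash
# A minimal sketch - assumes the chart sources (and therefore values-medium.yaml)
# are available locally; adjust the path to wherever your copy of the chart lives.
helm upgrade --install distribution -f values-medium.yaml --namespace distribution center/jfrog/distribution
```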
NOTE: It might take a few minutes for Distribution's public IP to become available, and the nodes to complete initial setup. Follow the instructions outputted by the install command to get the Distribution IP and URL to access it.
Once you have a new chart version, you can update your deployment with
```bash
helm upgrade distribution center/jfrog/distribution
```
If Distribution was installed without providing a value to redis.password (a password was auto-generated), follow these instructions:
- Get the current password by running:

  ```bash
  REDIS_PASSWORD=$(kubectl get secret -n <namespace> <myrelease>-redis-secret -o jsonpath="{.data.redis-password}" | base64 --decode)
  ```

- Upgrade the release by passing the previously auto-generated password:

  ```bash
  helm upgrade <myrelease> center/jfrog/distribution --set redis.password=${REDIS_PASSWORD}
  ```
Alternatively, you can create a secret (which must contain a key named redis-password) and pass it to the template at install/upgrade time.
```bash
# Create a secret containing the redis password. The key in the secret must be named redis-password
kubectl create secret generic my-secret --from-literal=redis-password=${REDIS_PASSWORD}

# Pass the created secret to helm
helm upgrade --install distribution --set redis.existingSecret=my-secret --namespace distribution center/jfrog/distribution
```
If Distribution was installed without providing a value to postgresql.postgresqlPassword (a password was auto-generated), follow these instructions:
- Get the current password by running:

  ```bash
  POSTGRES_PASSWORD=$(kubectl get secret -n <namespace> <myrelease>-postgresql -o jsonpath="{.data.password}" | base64 --decode)
  ```

- Upgrade the release by passing the previously auto-generated password:

  ```bash
  helm upgrade <myrelease> center/jfrog/distribution --set postgresql.postgresqlPassword=${POSTGRES_PASSWORD}
  ```
JFrog Distribution requires a unique master key to be used by all micro-services in the same cluster. By default the chart has one set in values.yaml (distribution.masterKey). This key is for demo purposes and should not be used in a production environment! You should generate a unique one and pass it to the template at install/upgrade time.
```bash
# Create a key
export MASTER_KEY=$(openssl rand -hex 32)
echo ${MASTER_KEY}

# Pass the created master key to helm
helm upgrade --install distribution --set distribution.masterKey=${MASTER_KEY} --namespace distribution center/jfrog/distribution
```
Alternatively, you can create a secret containing the master key manually and pass it to the template at install/upgrade time.
```bash
# Create a secret containing the key. The key in the secret must be named master-key
kubectl create secret generic my-secret --from-literal=master-key=${MASTER_KEY}
# Pass the created secret to helm
helm upgrade --install distribution --set distribution.masterKeySecretName=my-secret --namespace distribution center/jfrog/distribution
```
NOTE: In either case, make sure to pass the same master key on all future calls to helm install and helm upgrade! In the first case, this means always passing --set distribution.masterKey=${MASTER_KEY}; in the second, always passing --set distribution.masterKeySecretName=my-secret and ensuring the contents of the secret remain unchanged.
### High Availability
JFrog Distribution can run in High Availability mode by having multiple replicas of the Distribution service.
To enable this, pass the replica count to the `helm install` and `helm upgrade` commands.
```bash
# Run 3 replicas of the Distribution service
helm upgrade --install distribution --set replicaCount=3 --namespace distribution center/jfrog/distribution
```
For production grade installations, it is recommended to use an external PostgreSQL with a static password. There is an option to use an external PostgreSQL database for your Distribution. To use an external PostgreSQL, you need to set the Distribution PostgreSQL connection details:
```bash
export POSTGRES_URL=
export POSTGRES_USERNAME=
export POSTGRES_PASSWORD=

helm upgrade --install distribution \
  --set database.url=${POSTGRES_URL} \
  --set database.user=${POSTGRES_USERNAME} \
  --set database.password=${POSTGRES_PASSWORD} \
  --namespace distribution center/jfrog/distribution
```
NOTE: The database password is saved as a Kubernetes secret.
You can use already existing secrets for managing the database connection details. Pass them to the install command like this:
```bash
export POSTGRES_USERNAME_SECRET_NAME=
export POSTGRES_USERNAME_SECRET_KEY=
export POSTGRES_PASSWORD_SECRET_NAME=
export POSTGRES_PASSWORD_SECRET_KEY=

helm upgrade --install distribution \
  --set database.secrets.user.name=${POSTGRES_USERNAME_SECRET_NAME} \
  --set database.secrets.user.key=${POSTGRES_USERNAME_SECRET_KEY} \
  --set database.secrets.password.name=${POSTGRES_PASSWORD_SECRET_NAME} \
  --set database.secrets.password.key=${POSTGRES_PASSWORD_SECRET_KEY} \
  --namespace distribution center/jfrog/distribution
```
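If you don't already have such secrets, a minimal sketch for creating one is shown below. The secret name postgres-creds and the keys username and password are just examples; any names work as long as they match the values you export above.

```bash
# Hypothetical secret and key names - substitute your own and reference them via
# database.secrets.user.* and database.secrets.password.* as shown above.
kubectl create secret generic postgres-creds -n distribution \
  --from-literal=username=<YOUR_DB_USER> \
  --from-literal=password=<YOUR_DB_PASSWORD>
```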
Upgrading Distribution is a simple helm command:

```bash
helm upgrade distribution center/jfrog/distribution
```
NOTE: Check for any version specific upgrade notes in CHANGELOG.md
In cases where a new version is not compatible with the existing deployed version (look in CHANGELOG.md) you should:
- Deploy the new version alongside the old version (set a new release name)
- Copy configurations and data from the old deployment to the new one (/var/opt/jfrog), as shown in the sketch after this list
- Update DNS to point to the new Distribution service
- Remove the old release
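A minimal sketch of the copy step, assuming the data lives under /var/opt/jfrog in a container named distribution (pod names, namespace, and container name are placeholders; kubectl cp also requires tar inside the container):

```bash
# Copy the data out of the old release's pod to a local directory, then into the new pod.
# All names here are assumptions - adjust them to your actual releases.
kubectl cp -c distribution <namespace>/<old-pod>:/var/opt/jfrog ./distribution-data
kubectl cp -c distribution ./distribution-data <namespace>/<new-pod>:/var/opt/jfrog
```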
This chart provides the option to add sidecars to tail various logs from Distribution containers. See the available values in values.yaml.

Get the list of containers in the pod:

```bash
kubectl get pods -n <NAMESPACE> <POD_NAME> -o jsonpath='{.spec.containers[*].name}' | tr ' ' '\n'
```

View a specific log:

```bash
kubectl logs -n <NAMESPACE> <POD_NAME> -c <LOG_CONTAINER_NAME>
```
Create trust between the nodes by copying the ca.crt from the Artifactory server under $JFROG_HOME/artifactory/var/etc/access/keys to the nodes you would like to set trust with, under $JFROG_HOME/<product>/var/etc/security/keys/trusted. For more details, please refer here.
To add this certificate to Distribution, create a configmaps.yaml file with the following content:
```yaml
common:
  configMaps: |
    ca.crt: |
      -----BEGIN CERTIFICATE-----
      <certificate content>
      -----END CERTIFICATE-----
  customVolumeMounts: |
    - name: distribution-configmaps
      mountPath: /tmp/ca.crt
      subPath: ca.crt
distribution:
  preStartCommand: "mkdir -p {{ .Values.distribution.persistence.mountPath }}/etc/security/keys/trusted && cp -fv /tmp/ca.crt {{ .Values.distribution.persistence.mountPath }}/etc/security/keys/trusted/ca.crt"
router:
  tlsEnabled: true
```
and use it with your helm install/upgrade:

```bash
helm upgrade --install distribution -f configmaps.yaml --namespace distribution center/jfrog/distribution
```
This will, in turn:
- Create a configMap with the files you specified above
- Create a volume pointing to the configMap with the name distribution-configmaps
- Mount said configMap onto /tmp using a customVolumeMounts entry
- Using preStartCommand, copy the ca.crt file to the Distribution trusted keys folder /etc/security/keys/trusted/ca.crt
- Set router.tlsEnabled to true to add the HTTPS scheme in the liveness and readiness probes
There are cases where a special, unsupported init process is needed, like checking something on the file system or testing something before spinning up the main container.
For this, there is a section for writing a custom init container in values.yaml. By default it's commented out:
```yaml
distribution:
  ## Add custom init containers
  customInitContainers: |
    ## Init containers template goes here ##
```
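As an illustration, a minimal sketch of a custom init container that blocks startup until an external database answers on its port. The image, service name, and port are hypothetical; replace them with whatever your check actually needs.

```yaml
distribution:
  customInitContainers: |
    # Hypothetical example: wait for an external PostgreSQL before the main container starts
    - name: wait-for-db
      image: busybox:1.35
      command:
        - 'sh'
        - '-c'
        - 'until nc -z my-external-postgres 5432; do echo "waiting for database"; sleep 2; done'
```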
There are cases where you'd like custom files mounted onto your container's file system.
For this, there is a section for defining custom volumes in values.yaml. By default they are left empty. You can mount custom volumes onto both distribution and distributor pods like so:
```yaml
common:
  ## Add custom volumes
  customVolumes: |
  #  - name: custom-script
  #    configMap:
  #      name: custom-script
distribution:
  ## Add custom volumeMounts
  customVolumeMounts: |
  #  - name: custom-script
  #    mountPath: "/scripts/script.sh"
  #    subPath: script.sh
distributor:
  ## Add custom volumeMounts
  customVolumeMounts: |
  #  - name: custom-script
  #    mountPath: "/scripts/script.sh"
  #    subPath: script.sh
```
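For example, uncommenting those stanzas to mount a script from a configMap into the distribution pod could look like the sketch below. The configMap name custom-script and the file script.sh are illustrative only; create the configMap yourself beforehand, e.g. with `kubectl create configmap custom-script --from-file=script.sh -n distribution`.

```yaml
# Illustrative only - assumes a configMap named custom-script containing script.sh
common:
  customVolumes: |
    - name: custom-script
      configMap:
        name: custom-script
distribution:
  customVolumeMounts: |
    - name: custom-script
      mountPath: "/scripts/script.sh"
      subPath: script.sh
```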