consume the ceph images from the ceph/ceph repo
Signed-off-by: travisn <[email protected]>
travisn committed Nov 1, 2018
1 parent f5b09a5 commit 906add0
Showing 9 changed files with 42 additions and 26 deletions.
19 changes: 12 additions & 7 deletions Documentation/ceph-cluster-crd.md
@@ -18,7 +18,8 @@ metadata:
   namespace: rook-ceph
 spec:
   cephVersion:
-    image: ceph/ceph:13.2.2-20181011
+    # see the "Cluster Settings" section below for more details on which image of ceph to run
+    image: ceph/ceph:v13.2.2-20181023
   dataDirHostPath: /var/lib/rook
   serviceAccount: rook-ceph-cluster
   storage:
@@ -39,7 +40,11 @@ Settings can be specified at the global level to apply to the cluster as a whole
 ### Cluster Settings
 
 - `cephVersion`: The version information for launching the ceph daemons.
-  - `image`: The image used for running the ceph daemons. For example, `ceph/ceph:v12.2.7` or `ceph/ceph:v13.2.2`.
+  - `image`: The image used for running the ceph daemons. For example, `ceph/ceph:v12.2.9-20181026` or `ceph/ceph:v13.2.2-20181023`.
+    For the latest ceph images, see the [Ceph DockerHub](https://hub.docker.com/r/ceph/ceph/tags/).
+    To ensure a consistent version of the image is running across all nodes in the cluster, it is recommended to use a very specific image version.
+    Tags also exist that would give the latest version, but they are only recommended for test environments. For example, the tag `v13` will be updated each time a new mimic build is released.
+    Using the `v13` or similar tag is not recommended in production because it may lead to inconsistent versions of the image running across different nodes in the cluster.
 - `allowUnsupported`: If `true`, allow an unsupported major version of the Ceph release. Currently only `luminous` and `mimic` are supported, so `nautilus` would require this to be set to `true`. Should be set to `false` in production.
 - `dataDirHostPath`: The path on the host ([hostPath](https://kubernetes.io/docs/concepts/storage/volumes/#hostpath)) where config and data should be stored for each of the services. If the directory does not exist, it will be created. Because this directory persists on the host, it will remain after pods are deleted.
   - On **Minikube** environments, use `/data/rook`. Minikube boots into a tmpfs but it provides some [directories](https://github.com/kubernetes/minikube/blob/master/docs/persistent_volumes.md) where files can be persisted across reboots. Using one of these directories will ensure that Rook's data and configuration files are persisted and that enough storage space is available.
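
Since each node pulls the image independently, a floating tag can resolve to different builds on different nodes. Ceph itself (Luminous and later) can report what is actually running via `ceph versions`; a minimal sketch, assuming the `rook-ceph-tools` toolbox pod from the Rook examples is deployed:

```console
# Ask the cluster which exact build each daemon type is running.
# The toolbox pod name is an assumption based on the Rook toolbox example.
kubectl -n rook-ceph exec -it rook-ceph-tools -- ceph versions
```

If every daemon reports the same version string, the image is consistent across the cluster; a mixed result usually means nodes resolved a floating tag to different builds.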
@@ -161,7 +166,7 @@ metadata:
   namespace: rook-ceph
 spec:
   cephVersion:
-    image: ceph/ceph:13.2.2-20181011
+    image: ceph/ceph:v13.2.2-20181023
   dataDirHostPath: /var/lib/rook
   serviceAccount: rook-ceph-cluster
   network:
@@ -192,7 +197,7 @@ metadata:
   namespace: rook-ceph
 spec:
   cephVersion:
-    image: ceph/ceph:13.2.2-20181011
+    image: ceph/ceph:v13.2.2-20181023
   dataDirHostPath: /var/lib/rook
   serviceAccount: rook-ceph-cluster
   network:
@@ -235,7 +240,7 @@ metadata:
   namespace: rook-ceph
 spec:
   cephVersion:
-    image: ceph/ceph:13.2.2-20181011
+    image: ceph/ceph:v13.2.2-20181023
   dataDirHostPath: /var/lib/rook
   serviceAccount: rook-ceph-cluster
   network:
@@ -272,7 +277,7 @@ metadata:
   namespace: rook-ceph
 spec:
   cephVersion:
-    image: ceph/ceph:13.2.2-20181011
+    image: ceph/ceph:v13.2.2-20181023
   dataDirHostPath: /var/lib/rook
   serviceAccount: rook-ceph-cluster
   network:
@@ -317,7 +322,7 @@ metadata:
   namespace: rook-ceph
 spec:
   cephVersion:
-    image: ceph/ceph:13.2.2-20181011
+    image: ceph/ceph:v13.2.2-20181023
   dataDirHostPath: /var/lib/rook
   serviceAccount: rook-ceph-cluster
   # cluster level resource requests/limits configuration

3 changes: 2 additions & 1 deletion Documentation/ceph-quickstart.md
@@ -118,7 +118,8 @@ metadata:
   namespace: rook-ceph
 spec:
   cephVersion:
-    image: ceph/ceph:13.2.2-20181011
+    # For the latest ceph images, see https://hub.docker.com/r/ceph/ceph/tags
+    image: ceph/ceph:v13.2.2-20181023
   dataDirHostPath: /var/lib/rook
   dashboard:
     enabled: true
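
With the image pinned in the manifest, the quickstart flow itself is unchanged; a sketch, assuming the example manifests under `cluster/examples/kubernetes/ceph` in the Rook repo:

```console
# Create the cluster from the edited example manifest, then watch the daemons come up.
kubectl create -f cluster.yaml
kubectl -n rook-ceph get pods
```
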
4 changes: 3 additions & 1 deletion PendingReleaseNotes.md
@@ -4,7 +4,9 @@
 
 ## Notable Features
 
-- Different versions of Ceph can be orchestrated by Rook. Both Luminous and Mimic are now supported, with Nautilus coming soon. The version of Ceph is specified in the cluster CRD with the cephVersion.image property. For example, to run Mimic you could use image `ceph/ceph:13.2.2`.
+- Different versions of Ceph can be orchestrated by Rook. Both Luminous and Mimic are now supported, with Nautilus coming soon.
+The version of Ceph is specified in the cluster CRD with the cephVersion.image property. For example, to run Mimic you could use image `ceph/ceph:v13.2.2-20181023`
+or any other image found on the [Ceph DockerHub](https://hub.docker.com/r/ceph/ceph/tags).
 - The `fsType` default for StorageClass examples are now using XFS to bring it in line with Ceph recommendations.
 - Ceph OSDs will be automatically updated by the operator when there is a change to the operator version or when the OSD configuration changes. See the [OSD upgrade notes](Documentation/upgrade-patch.md#object-storage-daemons-osds).
 - Rook Ceph block storage provisioner can now correctly create erasure coded block images. See [Advanced Example: Erasure Coded Block Storage](Documentation/block.md#advanced-example-erasure-coded-block-storage) for an example usage.

7 changes: 5 additions & 2 deletions cluster/examples/kubernetes/ceph/cluster.yaml
@@ -56,8 +56,11 @@ metadata:
   namespace: rook-ceph
 spec:
   cephVersion:
-    # The container image used to launch the Ceph daemon pods (mon, mgr, osd, mds, rgw)
-    image: ceph/ceph-amd64:v13.2.2-20181011
+    # The container image used to launch the Ceph daemon pods (mon, mgr, osd, mds, rgw).
+    # v12 is luminous, v13 is mimic, and v14 is nautilus.
+    # RECOMMENDATION: In production, use a specific version tag instead of the general v13 flag, which pulls the latest release and could result in different
+    # versions running within the cluster. See tags available at https://hub.docker.com/r/ceph/ceph/tags/.
+    image: ceph/ceph:v13
   # Whether to allow unsupported versions of Ceph. Currently only luminous and mimic are supported.
   # After nautilus is released, Rook will be updated to support nautilus.
   # Do not set to true in production.
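
A floating tag such as `v13` always points at the newest Mimic build, which is why the comment above recommends a dated tag for production. To see which concrete build a floating tag currently resolves to (a sketch; requires Docker and access to DockerHub):

```console
# Pull the floating tag, then ask the ceph binary inside the image for its exact version.
docker pull ceph/ceph:v13
docker run --rm --entrypoint ceph ceph/ceph:v13 --version
```
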
22 changes: 12 additions & 10 deletions design/decouple-ceph-version.md
@@ -47,7 +47,7 @@ spec:
 The Ceph version is defined under the property `cephVersion` in the Cluster CRD. All Ceph daemon containers launched by the Rook operator will use this image, including the mon, mgr,
 osd, rgw, and mds pods. The significance of this approach is that the Rook binary is not included in the daemon containers. All initialization performed by Rook to generate the Ceph config and prepare the daemons must be completed in an [init container](https://github.com/rook/rook/issues/2003). Once the Rook init containers complete their execution, the daemon container will run the Ceph image. The daemon container will no longer have Rook running.
 
-In the following Cluster CRD example, the Ceph version is Mimic 13.2.1.
+In the following Cluster CRD example, the Ceph version is Mimic `13.2.2` built on 23 Oct 2018.
 
 ```yaml
 apiVersion: ceph.rook.io/v1beta1
@@ -57,7 +57,7 @@ metadata:
   namespace: rook-ceph
 spec:
   cephVersion:
-    image: ceph/ceph:v13.2.1
+    image: ceph/ceph:v13.2.2-20181023
 ```
 
 ### Operator Requirements
@@ -119,40 +119,42 @@ To allow more control over the upgrade, we define `upgradePolicy` settings. They
 
 The settings in the CRD to accommodate the design include:
 - `upgradePolicy.cephVersion`: The version of the image to start applying to the daemons specified in the `components` list.
-- `allowUnrecognizedVersion`: If `false`, the operator would refuse to upgrade the Ceph version if it doesn't support or recognize that version. If `true`, Rook would go ahead and blindly set the image version and assume the pod specs should match `unrecognizedMajorVersion`. This would allow testing of upgrade to unreleased versions. The default is `false`.
+- `allowUnsupported`: If `false`, the operator would refuse to upgrade the Ceph version if it doesn't support or recognize that version. This would allow testing of upgrade to unreleased versions. The default is `false`.
 - `upgradePolicy.components`: A list of daemons or other components that should be upgraded to the version `newCephVersion`. The daemons include `mon`, `osd`, `mgr`, `rgw`, and `mds`. The ordering of the list will be ignored as Rook will only support ordering as it determines necessary for a version. If there are special upgrade actions in the future, they could be named and added to this list.
 
 For example, with the settings below the operator would only upgrade the mons to mimic, while other daemons would remain on luminous. When the admin is ready, he would add more daemons to the list.
 
 ```yaml
 spec:
   cephVersion:
-    image: ceph/ceph:v12.2.7
-    allowUnrecognizedVersion: false
+    image: ceph/ceph:v12.2.9-20181026
+    allowUnsupported: false
   upgradePolicy:
     cephVersion:
-      image: ceph/ceph:v13.2.1
-      allowUnrecognizedVersion: false
+      image: ceph/ceph:v13.2.2-20181023
+      allowUnsupported: false
     components:
     - mon
 ```
 
-When the admin is completed with the upgrade or he is ready to allow Rook to complete the full upgrade for all daemons, he would set `cephVersion.image: ceph/ceph:v13.2.1`, and the operator would ignore the `upgradePolicy` since the `cephVersion` and `upgradePolicy.cephVersion` match.
+When the admin has completed the upgrade, or is ready to allow Rook to complete the full upgrade for all daemons, he would set `cephVersion.image: ceph/ceph:v13.2.2`, and the operator would ignore the `upgradePolicy` since the `cephVersion` and `upgradePolicy.cephVersion` match.
 
 If the admin wants to pause or otherwise control the upgrade closely, there are a couple of natural back doors:
 - Deleting the operator pod will effectively pause the upgrade. Starting the operator pod up again would resume the upgrade.
 - If the admin wants to manually upgrade the daemons, he could stop the operator pod, then set the container image on each of the Deployments (pods) he wants to update. The difficulty with this approach is if there are any changes to the pod specs that are made between versions of the daemons. The admin could update the pod specs manually, but it would be error prone.
 
 #### Developer controls
 
-If a developer wants to test the upgrade from mimic to nautilus, he would first create the cluster based on mimic. Then he would update the crd with the "unrecognized version" attributes in the CRD to specify nautilus such as:
+If a developer wants to test the upgrade from mimic to nautilus, he would first create the cluster based on mimic. Then he would update the "unrecognized version" attribute in the CRD to specify nautilus, such as:
 ```yaml
 spec:
   cephVersion:
     image: ceph/ceph:v14.1.1
-    allowUnrecognizedVersion: true
+    allowUnsupported: true
 ```
+
+Until Nautilus builds are released, the latest Nautilus build can be tested by using the image `ceph/daemon-base:latest-master`.
 
 ### Default Version
 
 For backward compatibility, if the `cephVersion` property is not set, the operator will need to internally set a default version of Ceph.
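
The manual back door described above (stop the operator, then retag each Deployment's image) might look like the following with `kubectl`; the operator namespace and the deployment/container names are illustrative assumptions, not names guaranteed by Rook:

```console
# Pause the operator so it does not reconcile over the manual change (names assumed).
kubectl -n rook-ceph-system scale deployment rook-ceph-operator --replicas=0
# Point a single mon deployment at the new Ceph image.
kubectl -n rook-ceph set image deployment/rook-ceph-mon-a mon=ceph/ceph:v13.2.2-20181023
```

As the design notes, this is error prone if the pod specs also changed between versions, which is why the `upgradePolicy` flow is preferred.
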
3 changes: 2 additions & 1 deletion images/ceph/Dockerfile
@@ -12,7 +12,8 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-FROM ceph/ceph-amd64:v13.2.2-20181011
+# the latest mimic release is the base image
+FROM ceph/ceph:v13
 
 ARG ARCH
 ARG TINI_VERSION
2 changes: 1 addition & 1 deletion pkg/apis/ceph.rook.io/v1beta1/version.go
@@ -19,7 +19,7 @@ const (
 	Luminous = "luminous"
 	Mimic    = "mimic"
 	Nautilus = "nautilus"
-	DefaultLuminousImage = "ceph/ceph:v12.2.7"
+	DefaultLuminousImage = "ceph/ceph:v12.2.9-20181026"
 )
 
 func VersionAtLeast(version, minimumVersion string) bool {
2 changes: 1 addition & 1 deletion pkg/operator/ceph/cluster/mgr/dashboard_test.go
@@ -80,7 +80,7 @@ func TestStartSecureDashboard(t *testing.T) {
 		},
 	}
 	c := &Cluster{context: &clusterd.Context{Clientset: test.New(3), Executor: executor}, Namespace: "myns",
-		dashboard: cephv1beta1.DashboardSpec{Enabled: true}, cephVersion: cephv1beta1.CephVersionSpec{Name: cephv1beta1.Mimic, Image: "ceph/ceph:13.2.2"}}
+		dashboard: cephv1beta1.DashboardSpec{Enabled: true}, cephVersion: cephv1beta1.CephVersionSpec{Name: cephv1beta1.Mimic, Image: "ceph/ceph:v13.2.2"}}
 	dashboardInitWaitTime = 0
 	err := c.configureDashboard()
 	assert.Nil(t, err)
6 changes: 4 additions & 2 deletions tests/framework/installer/ceph_installer.go
@@ -41,8 +41,10 @@ import (
 )
 
 const (
-	luminousTestImage = "ceph/ceph:v12.2.7"
-	mimicTestImage    = "ceph/ceph:v13.2.2"
+	// test with the latest luminous build
+	luminousTestImage = "ceph/ceph:v12"
+	// test with the latest mimic build
+	mimicTestImage = "ceph/ceph:v13"
 	rookOperatorCreatedCrd = "clusters.ceph.rook.io"
 	helmChartName          = "local/rook-ceph"
 	helmDeployName         = "rook-ceph"
