The Rook add-on creates and manages a Ceph cluster along with a storage class for provisioning PVCs. It also runs the Ceph RGW object store to provide an S3-compatible store in the cluster.
The EKCO add-on is recommended when installing Rook. EKCO is responsible for performing various operations to maintain the health of a Ceph cluster.
```yaml
spec:
  rook:
    version: latest
    blockDeviceFilter: sd[b-z]
    cephReplicaCount: 3
    isBlockStorageEnabled: true
    storageClassName: "storage"
    hostpathRequiresPrivileged: false
    bypassUpgradeWarning: false
```
| Flag | Usage |
|------|-------|
| `version` | The version of Rook to be installed. |
| `storageClassName` | The name of the StorageClass that will use Rook to provision PVCs. |
| `cephReplicaCount` | Replication factor of Ceph pools. The default is the number of nodes in the cluster, up to a maximum of 3. |
| `isBlockStorageEnabled` | Use block devices instead of the filesystem for storage in the Ceph cluster. This flag is automatically set to `true` for versions 1.4.3+ because block storage is required for those versions. |
| `isSharedFilesystemDisabled` | Disable the rook-ceph shared filesystem, reducing CPU and memory load by no longer needing to schedule several pods. Available for versions 1.4.3+. |
| `blockDeviceFilter` | Only use block devices matching this regex. |
| `hostpathRequiresPrivileged` | Runs Ceph pods as privileged so that they can write to hostPaths in OpenShift with SELinux restrictions. |
| `bypassUpgradeWarning` | Bypass the upgrade warning prompt. |
Rook versions 1.4.3 and later require a dedicated block device attached to each node in the cluster. To meet this block storage requirement, you must add an unformatted disk that is used only for Rook to each node. For Rook versions earlier than 1.4.3, a dedicated block device is recommended in production clusters. For disk requirements, see Add-on Directory Disk Space Requirements.
You can enable and disable block storage for Rook versions earlier than 1.4.3 with the `isBlockStorageEnabled` field in the kURL spec. When the `isBlockStorageEnabled` field is set to `true`, or when using Rook versions 1.4.3 and later, Rook starts an OSD for each discovered disk.
This can result in multiple OSDs running on a single node.
Rook ignores block devices that already have a filesystem on them.
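You can check by hand which devices Rook would consider unformatted. The snippet below is a minimal sketch that filters `lsblk`-style NAME/FSTYPE output for devices with an empty filesystem column; the sample data is hypothetical, and on a real node you would feed the output of `lsblk -d -n -o NAME,FSTYPE` directly.

```shell
# Sketch: find block devices with no filesystem (candidates for Rook OSDs).
# The sample below is hypothetical; on a real node, run:
#   lsblk -d -n -o NAME,FSTYPE
sample='sda ext4
sdb
sdc xfs'
# Print device names whose FSTYPE column is empty
echo "$sample" | awk 'NF == 1 {print $1}'
```

Here only `sdb` is printed, because `sda` and `sdc` already carry filesystems and would be ignored by Rook.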
The following provides an example of a kURL spec with block storage enabled for Rook:
```yaml
spec:
  rook:
    version: latest
    isBlockStorageEnabled: true
    blockDeviceFilter: sd[b-z]
```
In the example above, the `isBlockStorageEnabled` field is set to `true`, and the `blockDeviceFilter` field instructs Rook to use only block devices that match the specified regex.
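Before installing, you can sanity-check a `blockDeviceFilter` value by testing candidate device names against the regex locally. This is a hedged sketch using `grep -E`; the device names are examples, and the anchoring (`^...$`) is added here only to simulate a whole-name match.

```shell
# Test which device names the filter sd[b-z] would match
filter='^sd[b-z]$'
for dev in sda sdb sdc nvme0n1; do
  if echo "$dev" | grep -Eq "$filter"; then
    echo "$dev matches"
  else
    echo "$dev does not match"
  fi
done
```

With this filter, `sdb` and `sdc` match while `sda` and `nvme0n1` do not, so only the former would be used for OSDs.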
For more information about the available options, see Advanced Install Options above.
The Rook add-on waits for the dedicated disk that you attached to your node before continuing with installation. If you attached a disk to your node, but the installer is waiting at the Rook add-on installation step, see OSD pods are not created on my devices in the Rook documentation for troubleshooting information.
By default, for Rook versions earlier than 1.4.3, the cluster uses the filesystem for Rook storage. However, block storage is recommended for Rook in production clusters. For more information, see Block Storage above.
When using the filesystem for storage, each node in the cluster has a single OSD backed by a directory in `/var/lib/rook`. Nodes with a Ceph Monitor also use `/var/lib/rook` for the Ceph Monitors and other configs. Sufficient disk space must be available in `/var/lib/rook`. For disk requirements, see Add-on Directory Disk Space Requirements. We recommend mounting `/var/lib/rook` on a separate partition to prevent a disruption in Ceph's operation as a result of `/var` or the root partition running out of space.
Note: All disks used for storage in the cluster should be of similar size. A cluster with large discrepancies in disk size may fail to replicate data to all available nodes.
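One way to spot size discrepancies before installing is to compare device sizes across the candidate disks. The snippet below is a sketch over hypothetical NAME/SIZE data; on a real node you would feed the output of `lsblk -b -d -n -o NAME,SIZE` instead.

```shell
# Hypothetical size data in bytes; on a real node, run:
#   lsblk -b -d -n -o NAME,SIZE
sizes='sdb 107374182400
sdc 107374182400
sdd 536870912000'
# Use the first disk's size as the reference
ref=$(echo "$sizes" | head -n 1 | awk '{print $2}')
# Flag disks whose size differs from the reference
echo "$sizes" | awk -v ref="$ref" '$2 != ref {print $1 " differs in size"}'
```

In this sample, `sdd` is flagged because it is five times larger than the other disks, which could leave replicas unevenly placed.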
The Ceph filesystem is supported with version 1.4.3+. This allows the use of PersistentVolumeClaims with access mode `ReadWriteMany`. Set the storage class to `rook-cephfs` in the PVC spec to use this feature.
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: rook-cephfs
```
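A workload consumes the shared volume like any other PVC. The following is a hypothetical Pod spec (the pod name, container name, and image are examples, not part of the add-on) that mounts `cephfs-pvc`; because the access mode is `ReadWriteMany`, multiple such pods on different nodes can mount the same volume simultaneously.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cephfs-demo   # hypothetical name
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: shared-data
          mountPath: /data   # shared CephFS-backed directory
  volumes:
    - name: shared-data
      persistentVolumeClaim:
        claimName: cephfs-pvc
```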
The following additional ports must be open between nodes for multi-node clusters:
| Protocol | Direction | Port Range | Purpose | Used By |
|----------|-----------|------------|---------|---------|
| TCP | Inbound | 9090 | CSI RBD Plugin Metrics | All |
It is now possible to upgrade multiple minor versions of the Rook add-on at once. This upgrade process will step through minor versions one at a time. For example, upgrades from Rook 1.0.x to 1.5.x will step through Rook versions 1.1.9, 1.2.7, 1.3.11 and 1.4.9 before installing 1.5.x. Upgrades without internet access may prompt the end-user to download supplemental packages.
Alternatively, a Rook upgrade can be triggered independently using the `rook-upgrade` task. This task requires the `to-version` argument:

```shell
curl https://k8s.kurl.sh/latest/tasks.sh | sudo bash -s rook-upgrade to-version=1.10
```
Rook upgrades from 1.0.x migrate data off of any hostpath-based OSDs in favor of block device-based OSDs. The upstream Rook project introduced a requirement for block storage in versions 1.3.x and later.
For Rook version 1.9.12 and later, when you install with both the Rook add-on and the Prometheus add-on, kURL enables Ceph metrics collection and creates a Ceph cluster statistics Grafana dashboard.
The Ceph cluster statistics dashboard in Grafana displays metrics that help you monitor the health of the Rook Ceph cluster, including the status of the Ceph object storage daemons (OSDs), the available cluster capacity, the OSD commit and apply latency, and more.
The following shows an example of the Ceph cluster dashboard in Grafana:
To access the Ceph cluster dashboard, log in to Grafana in the `monitoring` namespace of the kURL cluster using your Grafana admin credentials.
For more information about installing with the Prometheus add-on and updating the Grafana credentials, see Prometheus Add-on.