Introduction
Red Hat Ceph Storage is a high-availability, high-performance storage solution designed for production-grade enterprise workloads. Built on the open-source Ceph project, it delivers enhanced security and scalability. This guide demonstrates how to deploy a Red Hat Ceph cluster in an air-gapped environment, followed by the deployment of both Object (S3-compatible) and Block (RBD) storage solutions.
Our Reference Red Hat Ceph Storage Cluster Architecture
The environment relies on a local bastion host for package management and administrative access.
Our cluster setup has the following configuration and settings:
- 1 x bastion
- 1 x Load Balancer running HAProxy
- 3 x storage nodes (node1, node2, node3) with 50 GiB disks for data storage.
- All running RHEL 9.7
- All are on the same subnet.
Please note:
- node1 will act as the bootstrap node and will also be part of the cluster.
- The bootstrap node needs password-less root SSH access to the other storage nodes.
- The bastion node needs sudo privileges on all nodes (its user can be 'root').
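If password-less SSH is not already in place, the prerequisite can be satisfied with a standard keypair. A minimal sketch follows; the key path is a temporary example location, and node2/node3 are our lab hostnames:

```shell
# Sketch: create a password-less SSH key for the bootstrap node.
# The key path and node names (node2, node3) are example values.
keydir=$(mktemp -d)
ssh-keygen -t ed25519 -N "" -C "ceph-bootstrap" -f "$keydir/id_ed25519" >/dev/null

# On the real bootstrap node, push the public key to the other storage nodes:
# for node in node2 node3; do
#   ssh-copy-id -i "$keydir/id_ed25519.pub" root@$node
# done

# Print the fingerprint as a sanity check
ssh-keygen -lf "$keydir/id_ed25519.pub"
```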
Procedure
Initializing the Red Hat Ceph Storage cluster
Download cephadm and required files
We will be using the cephadm bootstrap utility. Connect to the bastion and download the required packages for deploying RHCS:
[root@bastion ~]# curl --remote-name \
--location https://github.com/ceph/ceph/raw/quincy/src/cephadm/cephadm
[root@bastion ~]# chmod +x cephadm
Download rhceph-7-rhel9 image:
[root@bastion ~]# podman pull registry.redhat.io/rhceph/rhceph-7-rhel9
[root@bastion ~]# podman save registry.redhat.io/rhceph/rhceph-7-rhel9 > image-rhceph-7-rhel9.tar
Distribute to nodes
Copy the files to the nodes (node1, node2, node3):
[root@bastion ~]# for node in node1 node2 node3; do scp ~/image-rhceph-7-rhel9.tar root@$node:/root; done
[root@bastion ~]# for node in node1 node2 node3; do scp ~/cephadm root@$node:/usr/bin; done
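Since the image tarball is large, it is worth confirming the copies arrived intact by comparing checksums. A hedged sketch, demonstrated on a throwaway file (on the real bastion you would point it at ~/image-rhceph-7-rhel9.tar):

```shell
# Sketch: verify a transferred file by comparing SHA-256 checksums.
# We demo on a temp file; substitute ~/image-rhceph-7-rhel9.tar on the bastion.
f=$(mktemp)
echo "demo payload" > "$f"
local_sum=$(sha256sum "$f" | awk '{print $1}')
echo "local checksum: $local_sum"

# On the real bastion, compare this against each node's copy, e.g.:
# for node in node1 node2 node3; do
#   ssh root@$node "sha256sum /root/image-rhceph-7-rhel9.tar"
# done
```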
SSH into the bootstrapping storage node (the one with root password-less SSH access). Load and tag the RHCS image on the node.
[root@bastion ~]# ssh node1
[root@node1 ~]# podman load -i ~/image-rhceph-7-rhel9.tar
[root@node1 ~]# podman tag registry.redhat.io/rhceph/rhceph-7-rhel9:latest localhost/local/rhceph7:latest
Then, load and tag the RHCS image on each of the other storage nodes.
[root@node1 ~]# for node in node2 node3; do
ssh root@$node "podman load -i /root/image-rhceph-7-rhel9.tar && podman tag registry.redhat.io/rhceph/rhceph-7-rhel9:latest localhost/local/rhceph7:latest"; done
Allow insecure registries on the nodes
Next, on each storage node, edit the registries.conf file to allow insecure registries.
[root@node1 ~]# cat << 'EOF' >> /etc/containers/registries.conf
[[registry]]
location = "localhost"
insecure = true

[[registry]]
location = "localhost/local"
insecure = true
EOF
[root@node1 ~]# for node in node2 node3; do
ssh root@$node << 'EOF'
cat << 'INNER' >> /etc/containers/registries.conf
[[registry]]
location = "localhost"
insecure = true
[[registry]]
location = "localhost/local"
insecure = true
INNER
EOF
done
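To keep the stanza byte-identical on every node (and avoid copy/paste drift between commands), the snippet can be generated by a small helper function. A sketch, where `registry_stanza` is our own hypothetical helper and the temp file stands in for /etc/containers/registries.conf:

```shell
# Sketch: emit one registries.conf stanza per call.
# registry_stanza is a hypothetical helper, not part of any Ceph tooling.
registry_stanza() {
  printf '[[registry]]\nlocation = "%s"\ninsecure = true\n' "$1"
}

conf=$(mktemp)   # stand-in for /etc/containers/registries.conf
registry_stanza "localhost"       >> "$conf"
registry_stanza "localhost/local" >> "$conf"
cat "$conf"

# Real usage against a remote node:
# registry_stanza "localhost" | ssh root@node2 'cat >> /etc/containers/registries.conf'
```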
Bootstrap the cluster
Now, let’s bootstrap the cluster from node1. Run the following command, replacing each <value> placeholder with your own values:
[root@node1 ~]# cephadm --image localhost/local/rhceph7:latest bootstrap \
--mon-ip <node1-ip> \
--initial-dashboard-user admin \
--initial-dashboard-password <secure-password> \
--allow-fqdn-hostname \
--dashboard-password-noupdate \
--skip-pull \
--allow-mismatched-release
Verifying podman|docker is present...
Verifying lvm2 is present...
Verifying time synchronization is in place...
...
Host looks OK
Cluster fsid: ...
Verifying IP <node1-ip> port 3300
Verifying IP <node1-ip> port 6789
Mon IP <node1-ip> is in CIDR network <node1-ip-network>/24
Pulling container image localhost/local/rhceph7:latest...
Ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)
...
Ceph Dashboard is now available at:
URL: https://<node1-ip>:8443/
User: admin
Password: <secure-password>
...
Bootstrap complete.
Once the command has completed, you have finished initializing your cluster.
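As a quick post-bootstrap sanity check, you can confirm the files cephadm creates under /etc/ceph exist. A hedged sketch; `check_bootstrap_files` is our own helper, and its optional prefix argument only exists so the function can be exercised outside a real node:

```shell
# Sketch: check that the default bootstrap artifacts exist.
# check_bootstrap_files is a hypothetical helper; the prefix argument lets
# us point it at a directory other than the real /etc/ceph.
check_bootstrap_files() {
  local prefix=${1:-} f missing=0
  for f in ceph.conf ceph.pub ceph.client.admin.keyring; do
    if [ -e "$prefix/etc/ceph/$f" ]; then
      echo "OK      $prefix/etc/ceph/$f"
    else
      echo "MISSING $prefix/etc/ceph/$f"
      missing=1
    fi
  done
  return "$missing"
}

# On the bootstrap node: check_bootstrap_files
```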
Check Cluster Health
You can enter a containerized shell with the Ceph CLI tools using the 'cephadm shell' command.
[root@node1 ~]# cephadm shell
[ceph: root@node1 /]# ceph health
HEALTH_OK
NOTE: You might get some warnings and see HEALTH_WARN which is okay to ignore for now.
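If you would rather wait for the warnings to clear than ignore them, a small polling loop works. A minimal sketch; `wait_healthy` is our own helper, and `ceph health` is assumed to be run where the Ceph CLI is available (e.g. inside `cephadm shell`):

```shell
# Sketch: poll `ceph health` until it reports HEALTH_OK or we give up.
# wait_healthy is a hypothetical helper, not part of the ceph CLI.
wait_healthy() {
  local tries=$1 delay=$2 status i
  for ((i = 1; i <= tries; i++)); do
    status=$(ceph health 2>/dev/null || echo UNKNOWN)
    if [ "$status" = "HEALTH_OK" ]; then
      echo "healthy after $i check(s)"
      return 0
    fi
    sleep "$delay"
  done
  echo "still $status after $tries checks"
  return 1
}

# Inside the cephadm shell: wait_healthy 30 10   # up to 30 checks, 10s apart
```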
Ceph Dashboard
If you would like to use the dashboard, use the following command to get the URL:
[ceph: root@node1 /]# ceph mgr services
{
"dashboard": "https://<node1-ip>:8443/",
"prometheus": "http://<node1-ip>:9283/"
}
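The dashboard URL can also be pulled out of that JSON programmatically, which is handy for scripting. A sketch; `dashboard_url` is our own helper, and python3 (assumed to be available on the node) does the JSON parsing:

```shell
# Sketch: extract the dashboard URL from `ceph mgr services` JSON output.
# dashboard_url is a hypothetical helper; python3 handles the parsing.
dashboard_url() {
  ceph mgr services \
    | python3 -c 'import json, sys; print(json.load(sys.stdin).get("dashboard", ""))'
}

# Inside the cephadm shell: dashboard_url
```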
When you are done, exit the Ceph shell:
[ceph: root@node1 /]# exit
Expanding the cluster
Once the cluster is up and running, we need to incorporate the other storage nodes into it. The following steps walk through that.
The cluster bootstrap process creates a public SSH key file at /etc/ceph/ceph.pub.
Let’s copy the newly created ceph.pub file to node2 and node3:
[root@node1 ~]# for node in node2 node3; \
do ssh-copy-id -f -i /etc/ceph/ceph.pub root@$node; done
Add hosts to the cluster
Enter Ceph shell, and add the hosts to the cluster.
[root@node1 ~]# cephadm shell
[ceph: root@node1 /]# ceph orch host add node2 <node2-ip>
[ceph: root@node1 /]# ceph orch host add node3 <node3-ip>
Verify the hosts were added
[ceph: root@node1 /]# ceph orch host ls
HOST ADDR LABELS STATUS
node1 <node1-ip> _admin
node2 <node2-ip>
node3 <node3-ip>
3 hosts in cluster
Label your hosts with their corresponding roles.
[ceph: root@node1 /]# for node in node1 node2 node3; \
do ceph orch host label add $node osd,mon; done
Deploy the MON and OSD daemons on the hosts
[ceph: root@node1 /]# ceph orch apply mon --placement="node1 node2 node3"
[ceph: root@node1 /]# ceph orch apply osd --all-available-devices
The above commands may take up to a few minutes to deploy the daemons on each node. You should now have successfully deployed a Ceph cluster in your environment!
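Deployment progress can be watched with `ceph orch ps`. As a convenience, a small helper can count the OSD daemons already reported as running; a sketch, where `count_running_osds` is our own helper (with three 50 GiB disks on each of the three nodes, you would expect 9 OSDs in total):

```shell
# Sketch: count OSD daemons that `ceph orch ps` reports as running.
# count_running_osds is a hypothetical helper; run it where the ceph CLI works.
count_running_osds() {
  ceph orch ps --daemon-type osd 2>/dev/null \
    | awk 'NR > 1 && /running/ { n++ } END { print n + 0 }'
}

# Inside the cephadm shell, wait until all expected OSDs (9 in our lab) are up:
# until [ "$(count_running_osds)" -ge 9 ]; do sleep 10; done
```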
Review the cluster status
[ceph: root@node1 /]# ceph status
  cluster:
    id:     <...>
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum node1,node2,node3 (age 21h)
    mgr: node1.dkbnrm(active, since 22h), standbys: node2.mjxrvb
    osd: 9 osds: 9 up (since 21h), 9 in (since 21h)

  data:
    pools:   1 pools, 1 pgs
    objects: 2 objects, 577 KiB
    usage:   833 MiB used, 449 GiB / 450 GiB avail
    pgs:     1 active+clean
Summary
In this guide, we learned how to deploy a Red Hat Ceph Storage cluster in an air-gapped environment. To extend the cluster’s capabilities further, check out our other guides on How to Deploy RBD storage on a Red Hat Ceph Storage cluster, and How to Deploy Object Storage on Red Hat Ceph Storage Cluster.
Make sure to check out the Red Hat documentation for the complete set of configuration options for your Ceph cluster.