Preparing Ceph System Cluster

In this article:

Installing Ceph

Adding OSD Nodes

Creating a Metadata Server

Deploying Ceph Managers

To install and set up the Ceph system cluster:

  1. Install Ceph.

  2. Add OSD nodes.

  3. Create a metadata server.

  4. Deploy Ceph managers.

NOTE. Main cluster nodes will be further designated as kn0, kn1, kn2; worker nodes will be designated as kn3, kn4, kn5.

Installing Ceph

To install and set up Ceph:

  1. Install Python and auxiliary packages:

apt-get update && apt-get install -y python python-pip parted

NOTE. All commands must be executed on one node.

  2. Execute the following commands to prepare the /dev/sdb disk:

sudo fdisk -l /dev/sdb
sudo parted -s /dev/sdb mklabel gpt mkpart primary xfs 0% 100%
sudo mkfs.xfs -f /dev/sdb
sudo fdisk -s /dev/sdb
sudo blkid -o value -s TYPE /dev/sdb
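
If the disk was prepared successfully, the last command is expected to print the file system type. A sketch of the expected output, assuming /dev/sdb was formatted with XFS as shown above:

xfs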

  3. Deploy Ceph on the kn0 node first:

sudo -H pip install ceph-deploy
ceph-deploy new kn0

The ceph.conf file is created in the current directory.

Edit the ceph.conf file according to the file ./fmp_k8s_v1/ceph/ceph.conf included in the distribution package:

[global]
fsid = a997355a-25bc-4749-8f3a-fb07df0f9105    # replace with the value from your ceph.conf file
mon_initial_members = kn0,kn1,kn3    # replace with the names of your nodes
mon_host = 10.30.217.17,10.30.217.20,10.30.217.23    # replace with the addresses of your nodes
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
# Network address
public network = 10.30.217.0/24    # replace with your network address
rbd default features = 3

[mon.kn0]
    host = kn0    # replace with your node name
    mon addr = 10.30.217.20:6789    # replace with your node address

[mon.kn1]
    host = kn1    # replace with your node name
    mon addr = 10.30.217.17:6789    # replace with your node address

[mon.kn3]
    host = kn3    # replace with your node name
    mon addr = 10.30.217.23:6789    # replace with your node address

# Added below config
[osd]
osd_journal_size = 512
osd_pool_default_size = 3
osd_pool_default_min_size = 1
osd_pool_default_pg_num = 64
osd_pool_default_pgp_num = 64

# Delay for OSD
osd_client_watch_timeout = 15
osd_heartbeat_grace = 20
osd_heartbeat_interval = 5
osd_mon_heartbeat_interval = 15
osd_mon_report_interval = 4

[mon]
mon_osd_min_down_reporters = 1
mon_osd_adjust_heartbeat_grace = false
mon_client_ping_timeout = 15.000000

NOTE. Replace all names and addresses of nodes with your own ones.

  4. Install Ceph packages on all nodes:

ceph-deploy install kn0 kn1 kn2 kn3 kn4 kn5

This will take a long time.

  5. Set up monitoring on the kn1 node:

# A specific node can be specified instead of create-initial
ceph-deploy mon create-initial

This command will create the monitor role keys. To gather the keys, execute the command:

ceph-deploy gatherkeys kn1
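
To verify that the key files have been gathered, you can list them in the current directory (an optional check):

ls -l ./*.keyring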

Adding OSD Nodes

After Ceph is installed, add OSD roles to the Ceph cluster on the worker nodes. They will use the /dev/sdb disk to store data and the journal.

  1. Check that the /dev/sdb disk is available on the worker nodes:

ceph-deploy disk list kn3 kn4 kn5

  2. Delete partition tables on the worker nodes:

ceph-deploy disk zap kn3 /dev/sdb
ceph-deploy disk zap kn4 /dev/sdb
ceph-deploy disk zap kn5 /dev/sdb

These commands will delete all data from the /dev/sdb disk on the Ceph OSD nodes.

  3. Prepare and activate all OSD nodes and make sure that there are no errors:

ceph-deploy osd create kn3 --data /dev/sdb
ceph-deploy osd create kn4 --data /dev/sdb
ceph-deploy osd create kn5 --data /dev/sdb

Example of command execution result:

The last output line should be: Host kn3 is now ready for osd use.
The last output line should be: Host kn4 is now ready for osd use.
The last output line should be: Host kn5 is now ready for osd use.

  4. Send admin keys and configuration to all nodes:

ceph-deploy admin kn0 kn1 kn2 kn3 kn4 kn5

  5. Change the key file permissions. Execute the command on all nodes (a sketch of running it over SSH from a single host follows the command):

sudo chmod 644 /etc/ceph/ceph.client.admin.keyring
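
A minimal sketch of running this command on every node from a single host, assuming passwordless SSH access and sudo rights for the current user:

# Apply the permissions change on all cluster nodes over SSH
for node in kn0 kn1 kn2 kn3 kn4 kn5; do
    ssh "$node" "sudo chmod 644 /etc/ceph/ceph.client.admin.keyring"
done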

As a result, a Ceph cluster is created.

IMPORTANT. Add the Ceph Manager service.

To add the Ceph Manager service:

  1. Execute the command on the kn0 (ceph-admin) node:

ceph auth get-or-create mgr.kn1 mon 'allow profile mgr' osd 'allow *' mds 'allow *'

The command will return the mgr keyring value. Copy the value and go to the next step.
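
The returned keyring value typically has the following form (the key is a placeholder; use the value returned on your system):

[mgr.kn1]
        key = <key value returned by the command>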

  2. Execute the commands:

ssh (monitoring role node, e.g. kn1)
    sudo -i
    mkdir /var/lib/ceph/mgr/ceph-kn1

  3. Edit the /var/lib/ceph/mgr/ceph-kn1/keyring file, for example, with vi. Insert the mgr keyring value obtained at Step 1 into it.

  4. Execute the commands:

chmod 600 /var/lib/ceph/mgr/ceph-kn1/keyring
chown -R ceph:ceph /var/lib/ceph/mgr/
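
To verify the resulting permissions and owner of the keyring file (an optional check):

ls -l /var/lib/ceph/mgr/ceph-kn1/keyring
# expected: -rw------- with owner ceph:ceph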

  5. Start the ceph-mgr manager:

sudo systemctl start ceph-mgr.target
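
To confirm that the manager service has started (an optional check):

sudo systemctl status ceph-mgr.target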

  6. Check the status of the Ceph cluster:

sudo ceph -s

Example of command execution result:

cluster:
id:     a997355a-25bc-4749-8f3a-fb07df0f9105
health: HEALTH_OK

services:
mon: 3 daemons, quorum kn1,kn3,kn0
mgr: kn1(active)
mds: cephfs-1/1/1 up  {0=kn1=up:active}, 2 up:standby
osd: 3 osds: 3 up, 3 in

data:
pools:   0 pools, 0 pgs
objects: 0  objects, 0 B
usage:   3.0 GiB used, 147 GiB / 150 GiB avail
pgs:

If the command returned HEALTH_OK, the cluster is operating correctly.

Creating a Metadata Server

The use of CephFS requires at least one metadata server. Execute the command to create it:

ceph-deploy mds create {ceph-node}
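
For example, assuming the kn1 node is used as the metadata server (this matches the cluster status output shown above):

ceph-deploy mds create kn1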

Deploying Ceph Managers

A Ceph manager works in active or standby mode. Deploying additional Ceph managers ensures that if one manager fails, another one can take over without interrupting the service.

To deploy additional Ceph managers:

  1. Execute the command (replace node2 and node3 with the names of your nodes):

ceph-deploy mgr create node2 node3

Execute the following commands to check Ceph:

sudo ceph -s

sudo ceph df

sudo ceph osd tree

  2. Create a single kube pool for the Kubernetes (k8s) cluster:

sudo ceph osd pool create kube 32 32

Example of command execution result:

>> pool 'kube' created

NOTE. See the Ceph documentation about pool configuration, PG, and CRUSH to set corresponding values for PG and PGP.

  3. Check the number of pool replicas:

$ sudo ceph osd pool get kube size

Example of command execution result:

size: 3

  4. Check the number of pool placement groups:

$ sudo ceph osd pool get kube pg_num

Example of command execution result:

pg_num: 32

  5. Associate the created pool with the rbd application to use it as a RADOS block device:

$ sudo ceph osd pool application enable kube rbd

Example of command execution result:

>>enabled application 'rbd' on pool 'kube'

  6. Create a separate user for the kube pool and save the keys that are required to allow k8s access to the storage:

$ sudo ceph auth get-or-create client.kube mon 'allow r' osd 'allow rwx pool=kube'

[client.kube]
       key = AQC6DX1cAJD3LRAAUC2YpA9VSBzTsAAeH30ZrQ==

Then save the admin key to the /etc/ceph directory:

mkdir ./to_etc
sudo ceph auth get-key client.admin > ./to_etc/client.admin
sudo cp ./to_etc/client.admin /etc/ceph/ceph.client.admin.keyring
sudo chmod 600 /etc/ceph/ceph.client.admin.keyring

  7. Edit the /etc/ceph/ceph.client.admin.keyring file so that its contents look as follows (keep your own key value):

[client.admin]
       key = AQCvWnVciE7pERAAJoIDoxgtkfYKMnoBB9Q6uA==
       caps mds = "allow *"
       caps mgr = "allow *"
       caps mon = "allow *"
       caps osd = "allow *"

Then save the kube user key to the /etc/ceph directory:

sudo ceph auth get-key client.kube > ./to_etc/ceph.client
sudo cp ./to_etc/ceph.client  /etc/ceph/ceph.client.kube.keyring
sudo chmod 644 /etc/ceph/ceph.client.kube.keyring

  8. Edit the /etc/ceph/ceph.client.kube.keyring file so that its contents look as follows (keep your own key value):

[client.kube]
       key = AQCvWnVciE7pERAAJoIDoxgtkfYKMnoBB9Q6uA==
       caps mds = "allow *"
       caps mgr = "allow *"
       caps mon = "allow *"
       caps osd = "allow *"
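
To verify that the saved kube key grants access to the cluster, you can run a status query as the client.kube user (a minimal check, assuming the keyring path used above):

sudo ceph -s -n client.kube --keyring /etc/ceph/ceph.client.kube.keyring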

After executing these operations, the Ceph distributed file system is prepared. Next, prepare the Kubernetes cluster.

See also:

Preparing Environment for Foresight Mobile Platform | Preparing Kubernetes Cluster