Preparation and Deployment of Fault-Tolerant Cluster

In this article:

Deployment Architecture Schema

Preparing Nodes for Cluster Deployment

Setting Up HAProxy and Heartbeat

Installing and Setting Up Ceph

Adding OSD Nodes in Ceph Cluster

Creating a Metadata Server

Deploying Ceph Managers

Testing Ceph Storage

Preparing to Install and Initialize Kubernetes (k8s) Cluster

Installing and Setting Up Rancher

Clusters are used to distribute traffic, support databases, store files and business applications in the network. To provide horizontal scaling of all system components, a new approach to cluster deployment is implemented.

Advantages of the new approach:

Deployment Architecture Schema

The following scheme shows how fault tolerance of the cluster control part is provided:

Preparing Nodes for Cluster Deployment

A minimum fault-tolerant cluster configuration requires five virtual or physical nodes:

NOTE. In this article, the main nodes are named kn0, kn1, and kn2, and the working nodes are named kn3 and kn4.

To prepare nodes:

  1. Create the fmpadmin user at all nodes; it will be used to deploy and manage the future cluster:

sudo useradd -m -d /home/fmpadmin -s /bin/bash fmpadmin
# Set password for fmpadmin user
sudo passwd fmpadmin

Add the fmpadmin user to the sudoers so that the password is not requested each time the sudo command is used:

echo "fmpadmin ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/fmpadmin
sudo chmod 0440 /etc/sudoers.d/fmpadmin
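
To verify passwordless sudo for the new user (an optional check, not part of the original procedure), log in as fmpadmin and run a harmless command under sudo:

su - fmpadmin
sudo -n true && echo "passwordless sudo OK"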

  1. Disable the swap partition at all nodes because kubeadm does not support running with swap enabled:

sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
sudo swapoff --all
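
To make sure that swap is disabled (a quick check using standard utilities):

swapon --show   ### must print nothing
free -h         ### the Swap line must show zero totals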

IMPORTANT. If all cluster nodes are virtual machines controlled with VMware, install the open-vm-tools package at each node:
  % sudo apt-get install -y open-vm-tools

  1. Install the basic packages and add the Docker Community Edition repository:

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update

To view the list of Docker Community Edition versions available for installation, execute the command:

sudo apt-cache madison docker-ce

Then install Docker CE. Version 18.06 is the latest validated version; versions 1.11, 1.12, 1.13, 17.03, and 18.09 are also known to work:

sudo apt-get update && sudo apt-get install -y docker-ce=18.06.3~ce~3-0~ubuntu
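
After installation, it is worth confirming that the Docker service is running and, optionally, pinning the installed version so that apt-get upgrade does not replace it (the apt-mark step is a precaution, not part of the original procedure):

sudo systemctl enable --now docker
sudo docker version
sudo apt-mark hold docker-ce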

  1. Add the fmpadmin user to the docker group:

sudo usermod -aG docker fmpadmin

  1. Set up the file /etc/hosts identically at all nodes:

127.0.0.1      localhost
<host-ip-address>    kn0.our.cluster kn0
<host-ip-address>    kn1.our.cluster kn1
<host-ip-address>    kn2.our.cluster kn2
<host-ip-address>    kn3.our.cluster kn3
<host-ip-address>    kn4.our.cluster kn4

# The following lines are desirable for IPv6 capable hosts

#::1     localhost ip6-localhost ip6-loopback
#ff02::1 ip6-allnodes
#ff02::2 ip6-allrouters

  1. Generate an SSH key for the fmpadmin user at the kn0 node:

    1. Execute the command as the fmpadmin user:

ssh-keygen

The dialog is displayed in the console, for example:

Enter file in which to save the key (/home/user/.ssh/id_rsa):

Press the ENTER key. You are then prompted to enter a passphrase for additional protection of the SSH connection:

Enter passphrase (empty for no passphrase):

Skip this prompt and the following confirmation prompt by pressing the ENTER key. As a result, an SSH key is created.

    1. Create a configuration file for SSH:

vim ~/.ssh/config

Configuration file contents:

Host kn0
       Hostname kn0
       User fmpadmin
Host kn1
       Hostname kn1
       User fmpadmin
Host kn2
       Hostname kn2
       User fmpadmin
Host kn3
       Hostname kn3
       User fmpadmin
Host kn4
       Hostname kn4
       User fmpadmin

Save changes and exit the editor.

    1. Change file permissions:

chmod 644 ~/.ssh/config

  1. Add the created SSH key to all nodes:

ssh-keyscan kn1 kn2 kn3 kn4 >> ~/.ssh/known_hosts
ssh-copy-id kn1
ssh-copy-id kn2
ssh-copy-id kn3
ssh-copy-id kn4

NOTE. When password is requested, enter the password of the fmpadmin user.
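
To confirm that key-based access works from kn0 to every node (a verification sketch executed as the fmpadmin user):

for n in kn1 kn2 kn3 kn4; do ssh "$n" hostname; done
### each iteration must print the node name without asking for a password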

  1. Set up an NTP server to synchronize time between the nodes:

sudo apt-get install -y ntp ntpdate
sudo cp /etc/ntp.conf /etc/ntp.conf.orig

At the kn0 node:

sudo tee /etc/ntp.conf << EOF
server 127.127.1.0 prefer
fudge 127.127.1.0 stratum 10
interface ignore wildcard
interface listen <address that will be used for ntp server lan ip>
EOF
sudo systemctl restart ntp.service

At the other nodes, add the following lines to /etc/ntp.conf and restart the service:

server <address ntp server lan ip at the kn0 node> iburst
restrict default
interface ignore wildcard
interface listen <your ntp client lan ip>

sudo systemctl restart ntp.service
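
To check that the clients have synchronized with the kn0 server (ntpq is installed together with the ntp package):

ntpq -p
### the kn0 server should appear in the peer list, marked with * when it is selected as the time source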

As a result, the nodes are prepared.

Setting Up HAProxy and Heartbeat

To distribute traffic between the three main Kubernetes nodes, set up the HAProxy load balancer. As a result, you get three HAProxy servers with a common virtual IP address. Fault tolerance is ensured by the Heartbeat package.

NOTE. HAProxy is set up at the kn0, kn1 and kn2 main nodes.

To install and set up HAProxy using Heartbeat:

  1. Install and set up HAProxy:

kn0# apt-get update && apt-get upgrade && apt-get install -y haproxy
kn1# apt-get update && apt-get upgrade && apt-get install -y haproxy
kn2# apt-get update && apt-get upgrade && apt-get install -y haproxy

  1. Save the source configuration and create a new one (execute the same at the kn0 and kn2 nodes):

kn1# mv /etc/haproxy/haproxy.cfg{,.back}
kn1# vi /etc/haproxy/haproxy.cfg

  1. Add configuration parameters for HAProxy:

global
   user haproxy
   group haproxy
defaults
   mode http
   log global
   retries 2
   timeout connect 3000ms
   timeout server 5000ms
   timeout client 5000ms
frontend kubernetes
   bind <main cluster ip>:6443
   option tcplog
   mode tcp
   default_backend kubernetes-master-nodes
backend kubernetes-master-nodes
   mode tcp
   balance roundrobin
   option tcp-check
   server kn0 <kn0 main ip>:6443 check fall 3 rise 2
   server kn1 <kn1 main ip>:6443 check fall 3 rise 2
   server kn2 <kn2 main ip>:6443 check fall 3 rise 2

Where <main cluster ip> is the common virtual IP address that will be moved between the three HAProxy servers.
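
Before starting the service, the configuration syntax can be validated with the standard HAProxy check mode (an optional verification step):

kn1# haproxy -c -f /etc/haproxy/haproxy.cfg
### the command must report that the configuration file is valid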

  1. To enable binding of system services to a non-local IP address:

    1. Add the net.ipv4.ip_nonlocal_bind parameter to the /etc/sysctl.conf file:

kn1# vi /etc/sysctl.conf
net.ipv4.ip_nonlocal_bind=1

    1. Execute the command:

sysctl -p

    1. Run HAProxy:

kn1# systemctl start haproxy

Make sure that HAProxy is running and listening on the virtual IP address at the selected nodes:

kn1# netstat -ntlp
tcp 0 0 <main cluster ip>:6443 0.0.0.0:* LISTEN 2833/haproxy
kn2# netstat -ntlp
tcp 0 0 <main cluster ip>:6443 0.0.0.0:* LISTEN 2833/haproxy

  1. Install Heartbeat and set up virtual IP address:

kn1# apt-get -y install heartbeat && systemctl enable heartbeat

Execute the identical installation at the kn0 and kn2 nodes.

  1. Create configuration files for Heartbeat:

    1. Create the /etc/ha.d/authkeys file. This file stores data for mutual Heartbeat authentication and must be identical at all nodes. First, generate an MD5 hash of a passphrase that will be used as the shared key:

# echo -n securepass | md5sum
bb77d0d3b3f239fa5db73bdf27b8d29a

kn0# vi /etc/ha.d/authkeys
auth 1
1 md5 bb77d0d3b3f239fa5db73bdf27b8d29a

kn1# vi /etc/ha.d/authkeys
auth 1
1 md5 bb77d0d3b3f239fa5db73bdf27b8d29a

kn2# vi /etc/ha.d/authkeys
auth 1
1 md5 bb77d0d3b3f239fa5db73bdf27b8d29a

The created file must be accessible only to the root user:

kn0# chmod 600 /etc/ha.d/authkeys
kn1# chmod 600 /etc/ha.d/authkeys
kn2# chmod 600 /etc/ha.d/authkeys

    1. Create the /etc/ha.d/ha.cf file at the kn1 and kn2 nodes. This file stores the main Heartbeat configuration and differs slightly for each node.

NOTE. To get node parameters for the configuration, run uname -n at each of them. Also replace ens160 with the name of your network interface.

kn1# vi /etc/ha.d/ha.cf
#       keepalive: how many seconds between heartbeats
#
keepalive 2
#
#       deadtime: seconds-to-declare-host-dead
#
deadtime 10
#
#       What UDP port to use for udp or ppp-udp communication?
#
udpport        694
bcast  ens160
mcast ens160 225.0.0.1 694 1 0
ucast ens160 <main kn1 ip>
#       What interfaces to heartbeat over?
udp     ens160
#
#       Facility to use for syslog()/logger (alternative to log/debugfile)
#
logfacility     local0
#
#       Tell what machines are in the cluster
#       node    nodename ...    -- must match uname -n
node    kn0
node    kn1
node    kn2

kn2# vi /etc/ha.d/ha.cf
#       keepalive: how many seconds between heartbeats
#
keepalive 2
#
#       deadtime: seconds-to-declare-host-dead
#
deadtime 10
#
#       What UDP port to use for udp or ppp-udp communication?
#
udpport        694
bcast  ens160
mcast ens160 225.0.0.1 694 1 0
ucast ens160 <main kn2 ip>
#       What interfaces to heartbeat over?
udp     ens160
#
#       Facility to use for syslog()/logger (alternative to vlog/debugfile)
#
logfacility     local0
#
#       Tell what machines are in the cluster
#       node    nodename ...    -- must match uname -n
node    kn0
node    kn1
node    kn2

    1. Create the /etc/ha.d/haresources file. It contains the common virtual IP address and specifies which node is the main one by default. The file must be identical at all nodes:

kn0# vi /etc/ha.d/haresources
kn0 <main cluster ip> ### hereinafter kn0 = hostname of main node by default
kn1# vi /etc/ha.d/haresources
kn0 <main cluster ip>
kn2# vi /etc/ha.d/haresources
kn0 <main cluster ip>   

  1. Start Heartbeat services at all nodes and check if the specified virtual IP address is set at the kn0 node:

kn0# systemctl restart heartbeat
kn1# systemctl restart heartbeat
kn2# systemctl restart heartbeat
kn0# ip a s
ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
   link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
   inet <main kn0 ip>/24
   valid_lft forever preferred_lft forever
   inet <main cluster ip>/24
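
To additionally test failover (an optional check, not part of the original procedure), stop Heartbeat at the current main node and make sure that the virtual IP address moves to another node, then start Heartbeat again:

kn0# systemctl stop heartbeat
kn1# ip a s | grep '<main cluster ip>'
### after a short delay the virtual address appears at kn1
kn0# systemctl start heartbeat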

  1. Check HAProxy performance:

# nc -v <main cluster ip> 6443
Connection to x.x.x.x 6443 port [tcp/*] succeeded!

The connection is established, but the session then times out because the Kubernetes API is not yet listening at the back end. This means that HAProxy and Heartbeat are set up correctly.

Installing and Setting Up Ceph

To install and set up Ceph:

  1. Install python:

apt-get update
apt-get install -y python python-pip

  1. Prepare the /dev/sdb disk that will be used for Ceph storage:

sudo fdisk -l /dev/sdb
sudo parted -s /dev/sdb mklabel gpt mkpart primary xfs 0% 100%
sudo mkfs.xfs -f /dev/sdb
sudo fdisk -s /dev/sdb
sudo blkid -o value -s TYPE /dev/sdb

  1. Deploy Ceph first at the kn0 node:

sudo -H pip install ceph-deploy
ceph-deploy new kn0

Edit the ceph.conf file:

# Our network address
public network = 10.30.217.0/24
osd pool default size = 2

NOTE. Replace 10.30.217.0/24 with your network address.

  1. Install ceph packages at all nodes:

ceph-deploy install kn0 kn1 kn2 kn3 kn4

This will take a long time.

  1. Set up the monitor role at the kn1 node:

# a specific node name can be specified instead of create-initial
ceph-deploy mon create-initial

This command creates a monitor role key. To get the key, execute the command:

ceph-deploy gatherkeys kn1

Adding OSD Nodes in Ceph Cluster

After Ceph is installed, add OSD nodes to the cluster. They will use the /dev/sdb disk to store data and the journal.

  1. Check that the /dev/sdb disk is available at all OSD nodes:

ceph-deploy disk list kn2 kn3 kn4

  1. Delete partition tables at all OSD nodes:

ceph-deploy disk zap kn2 /dev/sdb
ceph-deploy disk zap kn3 /dev/sdb
ceph-deploy disk zap kn4 /dev/sdb

This command deletes all data from the /dev/sdb disk at the OSD nodes.

  1. Prepare and activate all OSD nodes:

ceph-deploy osd create kn2 --data /dev/sdb
### the last output line should be: Host kn2 is now ready for osd use.
ceph-deploy osd create kn3 --data /dev/sdb
### the last output line should be: Host kn3 is now ready for osd use.
ceph-deploy osd create kn4 --data /dev/sdb
### the last output line should be: Host kn4 is now ready for osd use.

Check sdb disk at OSD nodes:

ceph-deploy disk list kn2 kn3 kn4

  1. Send admin keys and configuration to all nodes:

ceph-deploy admin kn0 kn1 kn2 kn3 kn4

  1. Change key file permissions. Execute the command at all nodes:

sudo chmod 644 /etc/ceph/ceph.client.admin.keyring

As a result, a Ceph cluster is created.

Add a Ceph Manager Service:

  1. Execute the command at the kn0 node:

ceph auth get-or-create mgr.kn1 mon 'allow profile mgr' osd 'allow *' mds 'allow *'

The command will return the mgr keyring value. Copy the value and go to the next step.

  1. Execute the command:

ssh kn1   ### mon node
sudo -i
mkdir /var/lib/ceph/mgr/ceph-kn1

Then edit the /var/lib/ceph/mgr/ceph-kn1/keyring file, for example, with vi. Insert the mgr keyring value obtained at the previous step.

  1. Execute the command:

chmod 600 /var/lib/ceph/mgr/ceph-kn1/keyring
chown -R ceph:ceph /var/lib/ceph/mgr/

  1. Check status of the Ceph cluster:

sudo ceph -s

cluster:
id:     a997355a-25bc-4749-8f3a-fb07df0f9105
health: HEALTH_OK

services:
mon: 1 daemons, quorum kn1
mgr: kn1(active)
osd: 3 osds: 3 up, 3 in

data:
pools:   0 pools, 0 pgs
objects: 0  objects, 0 B
usage:   3.0 GiB used, 147 GiB / 150 GiB avail
pgs:

If the command returned HEALTH_OK, the cluster works.

Creating a Metadata Server

The use of CephFS requires at least one metadata server. Execute the command to create it:

ceph-deploy mds create {ceph-node}
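
For example, assuming the metadata server is placed at the kn1 monitor node (kn1 is an assumption here; any prepared Ceph node can be used):

ceph-deploy mds create kn1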

Deploying Ceph Managers

A Ceph manager works in active or standby mode. Deploying additional Ceph managers ensures that if one manager fails, another one can continue operations without service interruption.

To deploy additional Ceph managers:

  1. Execute the command:

ceph-deploy mgr create kn2 kn3

Execute the following commands to check Ceph:

ceph -s

ceph df

ceph osd tree

  1. Create a single kube pool for the Kubernetes (k8s) cluster:

ceph osd pool create kube 30 30
>> pool 'kube' created

  1. Associate the created pool with the RBD application to use it as a RADOS Block Device:

sudo ceph osd pool application enable kube rbd
>>enabled application 'rbd' on pool 'kube'

  1. Create a dedicated user for the pool and save the keys required for k8s access to the storage:

sudo ceph auth get-or-create client.kube mon 'allow r' osd 'allow rwx pool=kube'

[client.kube]
       key = AQC6DX1cAJD3LRAAUC2YpA9VSBzTsAAeH30ZrQ==

mkdir ./to_etc
sudo ceph auth get-key client.admin > ./to_etc/client.admin
sudo cp ./to_etc/client.admin /etc/ceph/
sudo ceph auth get-key client.kube > ./to_etc/client.kube
sudo cp ./to_etc/client.kube /etc/ceph/

Testing Ceph Storage

The easiest way to check if a Ceph cluster works correctly and provides storage is to create and use a new RADOS Block Device (RBD) volume at the administrator node. To do this:

  1. Set up RBD features: disable the RBD features that are not available in the used Linux kernel. To do this, edit the ceph.conf file:

$ echo "rbd_default_features = 7" | sudo tee -a /etc/ceph/ceph.conf

### rbd_default_features = 7 corresponds to the layering, striping, and exclusive-lock features

IMPORTANT. Kubernetes uses the kernel RBD module to map RBD volumes to hosts. The Ceph Luminous release requires CRUSH_TUNABLES 5 (Jewel); the minimum kernel version for these settings is 4.5. If your kernel does not support these parameters, execute the command:
ceph osd crush tunables hammer
For details see Ceph documentation, the Create and Initialize the RBD Pool section.
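
To see which tunables profile the cluster currently uses before changing anything (an optional check):

sudo ceph osd crush show-tunables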

  1. Create RBD volumes in the kube pool:

sudo rbd create --size 1G kube/testvol01 --image-feature layering

sudo rbd create testvol02 --size 1G --pool kube --image-feature layering  ### successfully creates testvol02

  1. Make sure that the RBD volume exists:

sudo rbd ls kube

  1. Get information about the RBD volume:

sudo rbd info kube/testvol01

  1. Map the created RBD volumes to the administrator node and the OSD nodes:

sudo rbd map kube/testvol01
sudo rbd map kube/testvol02   ### the command prints the name of the mapped device, for example: /dev/rbd0
ssh kn2 sudo rbd map kube/testvol02
ssh kn3 sudo rbd map kube/testvol02
ssh kn4 sudo rbd map kube/testvol02

  1. Create a temporary folder for mounting the RBD volume, create a file system on the RBD volume, and mount the volume to the temporary folder:

sudo mkdir /testkubefs
ssh kn2 sudo mkdir /testkubefs
ssh kn3 sudo mkdir /testkubefs
ssh kn4 sudo mkdir /testkubefs

sudo mkfs.xfs /dev/rbd0   ### execute this command only at the administrator node

sudo mount /dev/rbd0 /testkubefs
ssh kn2 sudo mount /dev/rbd0 /testkubefs
ssh kn3 sudo mount /dev/rbd0 /testkubefs
ssh kn4 sudo mount /dev/rbd0 /testkubefs

  1. Check if everything works:

sudo df -vh | egrep 'Mounted on|/testkubefs'

ssh kn2 sudo df -vh | egrep 'Mounted on|testkubefs'
ssh kn3 sudo df -vh | egrep 'Mounted on|testkubefs'
ssh kn4 sudo df -vh | egrep 'Mounted on|testkubefs'

  1. Remove the RBD volume:

$ sudo umount /dev/rbd0
$ sudo rbd unmap kube/testvol02
$ sudo rbd remove kube/testvol02

$ sudo rmdir /testkubefs
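
If the testvol01 volume is no longer needed, unmap and remove it in the same way (a cleanup sketch):

$ sudo rbd unmap kube/testvol01
$ sudo rbd remove kube/testvol01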

Preparing to Install and Initialize Kubernetes (k8s) Cluster

Execute all commands at main Kubernetes nodes as the root user.

Execute the following at each node:

  1. Make changes to the system as the root user to ensure correct network operation in k8s:

cat >>/etc/sysctl.d/kubernetes.conf<<EOF
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

apt-get update && apt-get install -y apt-transport-https curl

  1. Add the official Kubernetes GPG key to the system:

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -

  1. Add a Kubernetes repository:

cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
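
The added repository is normally used to install the Kubernetes command line tools. A sketch, assuming kubectl is installed from this repository (kubectl is used later to work with the cluster):

apt-get update
apt-get install -y kubectl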

  1. Set the /proc/sys/net/bridge/bridge-nf-call-iptables value to 1 to ensure correct operation of CNI (Container Network Interface). To do this, check the current value:

cat /proc/sys/net/bridge/bridge-nf-call-iptables

If it is equal to 0, execute the command:

echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables

  1. Initialize the Kubernetes cluster at the kn0 node as the root user:

  1. Go to the home directory of the fmpadmin user (/home/fmpadmin). Unpack the archive with scripts and yaml files:

% tar -xvzf ./fmp_k8s_v<version number>.tar

  1. Go to the directory with unpacked scripts:

% cd ./fmp_k8s_v<version number>/

  1. Go to the rke subdirectory:

% cd ./rke

  1. Execute the command:

% ls -lh ./

The list of files in the current directory is displayed:

-rwxr-xr-x 1 root     root       388 Mar 26 20:44 fmpclust.yml
-rwxr-xr-x 1 root     root      36M Mar 28 14:55 helm
-rw-rw-r-- 1 root     root      1.3K Mar 26 20:48 ReadMe.txt
-rwxr-xr-x 1 root     root      36M Mar 28 14:54 rke
-rwxr-xr-x 1 root     root      36M Mar 28 14:55 tiller

  1. Move or copy the rke, tiller, and helm files to the /usr/local/bin/ directory:

% mv ./rke /usr/local/bin/
% mv ./helm /usr/local/bin/
% mv ./tiller /usr/local/bin/

  1. Deploy and initialize the cluster:

% rke up --config fmpclust.yml

This takes considerable time and all details of cluster deployment stages are displayed in the console.

If everything is correct, the cluster is successfully deployed, and the console displays the string:

INFO[0103] Finished building Kubernetes cluster successfully

  1. After cluster initialization, the kube_config_fmpclust.yml file appears in the current directory next to the fmpclust.yml file. Move or copy the file to the user profile. This will allow the user to interact with the cluster. Execute this operation as the fmpadmin user:

% mkdir /home/fmpadmin/.kube
% cd ./fmp_k8s_v<version number>/rke/
% mv ./kube_config_fmpclust.yml /home/fmpadmin/.kube/config
% sudo chown -R fmpadmin:fmpadmin /home/fmpadmin/.kube

As a result, the k8s cluster is installed and initialized. Check that it works:

  1. Check server and k8s client versions:

% kubectl version --short

The following is displayed in the console:

Client Version: v1.14.0
Server Version: v1.13.4

  1. Check the status of the k8s components:

% kubectl get cs

The following is displayed in the console:

NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}

Installing and Setting Up Rancher

Rancher is software for convenient k8s cluster management.

To install and set up Rancher:

  1. Create a folder to store Rancher information at the kn4 node:

% sudo mkdir -p /opt/rancher

  1. Start Rancher with mounting to host:

% sudo docker run -d --restart=unless-stopped -p 8180:80 -p 8446:443 -v /opt/rancher:/var/lib/rancher rancher/rancher:stable
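
To make sure that the Rancher container has started (an optional check):

% sudo docker ps | grep rancher/rancher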

  1. Open the browser and go to https://<ip-address-of-kn4>:8446, where <ip-address-of-kn4> is the IP address of the kn4 node where Rancher is started. Set a password for the Rancher administrator role in the web interface:

  1. Add the created cluster to Rancher. Tap the Add Cluster button:

  1. Select addition type. Tap the Import button:

  1. Specify name and description for the added cluster and tap the Create button:

  1. Copy the last command on the page:

General command view:

% curl --insecure -sfL https://<server ip>:8446/v3/import/dtdsrjczmb2bg79r82x9qd8g9gjc8d5fl86drc8m9zhpst2d9h6pfn.yaml | kubectl apply -f -

Where <server ip> is the IP address of the node.

Execute the copied command in the console of the kn0 node. The following is displayed in the console:

namespace/cattle-system created
serviceaccount/cattle created
clusterrolebinding.rbac.authorization.k8s.io/cattle-admin-binding created
secret/cattle-credentials-3035218 created
clusterrole.rbac.authorization.k8s.io/cattle-admin created
deployment.extensions/cattle-cluster-agent created
daemonset.extensions/cattle-node-agent created

Wait until the system starts:

After the initialization ends, cluster status changes to Active:

General state of clusters:

  1. Go to the Projects/Namespaces section and create a project for the platform:

  1. Tap the Add Project button. Fill in the Project Name box, add description in the Description box and tap the Create button:

  1. Tap the Add Namespace button. Fill in the Name box and tap the Create button:

The setup in the graphic web interface is completed. Go to the nodes console.

  1. Open the console of one of the Kubernetes main nodes as the fmpadmin user. The archive with application container images and the archive with scripts and yaml files used for starting the application in the Kubernetes environment are loaded to the user's home directory.

  2. Load container images to the system:

docker load -i  fmp_v<version number>.tgz

NOTE. Load this archive at all cluster nodes.

After the successful import, delete the archive.

  1. Unpack the archive with scripts and yaml files at one control node:

% tar -xvzf ./fmp_k8s_v<version number>.tar

  1. Go to the directory with unpacked scripts:

% cd ./fmp_k8s_v<version number>

  1. Edit the contents of the ./storage/ceph-storage-ceph_rbd.yml yaml file:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-rbd
provisioner: kubernetes.io/rbd
parameters:
  monitors: 10.30.217.17:6789, 10.30.217.20:6789, 10.30.217.23:6789
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: "kube-system"
  pool: kube
  userId: kube
  userSecretName: ceph-secret-kube

In the monitors string, specify IP addresses or DNS names of the Ceph cluster nodes that have the monitor role. To view which nodes have this role, execute the command:

% sudo ceph -s

The result of command execution, for example:

cluster:
id:     a997355a-25bc-4749-8f3a-fb07df0f9105
health: HEALTH_OK
services:
mon: 3 daemons, quorum kn1,kn0,kn3
mgr: kn3(active), standbys: kn0
osd: 4 osds: 4 up, 4 in

The "mon" string in the "services" section indicates names of the nodes with specified monitoring role.

  1. Add the secret keys of the distributed Ceph file storage to Kubernetes:

    1. Get the Ceph administrator key:

% sudo ceph --cluster ceph auth get-key client.admin
>>   AQCvBnVciE8pERAAJoIDoxgtkfYKZnoBS9R6uA==

Copy the key to the clipboard.

    1. Add the obtained key to Kubernetes secrets by explicitly specifying it in the command:

% kubectl create secret generic ceph-secret --type="kubernetes.io/rbd" \
--from-literal=key='AQCvBnVciE8pERAAJoIDoxgtkfYKZnoBS9R6uA==' --namespace=kube-system

Command execution result:

secret/ceph-secret created

    1. Create a pool for Kubernetes in the Ceph cluster. The created pool will be used by RBD at the nodes:

% sudo ceph --cluster ceph osd pool create kube 1024 1024

    1. Create a client key (cephx authentication is enabled in the Ceph cluster):

% sudo ceph --cluster ceph auth get-or-create client.kube mon 'allow r' osd 'allow rwx pool=kube'

    1. Get the client.kube key:

% sudo ceph --cluster ceph auth get-key client.kube

    1. Create a new secret in our project's namespace:

% kubectl create secret generic ceph-secret-kube --type="kubernetes.io/rbd" \
--from-literal=key='AQC6DX1cAJD3LRAAUC2YpA9VSBzTsAAeH30ZrQ==' --namespace=fmpns
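
To verify that both secrets have been created in their namespaces (a verification sketch; fmpns is the namespace created for the project earlier):

% kubectl get secret ceph-secret --namespace=kube-system
% kubectl get secret ceph-secret-kube --namespace=fmpns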

  1. Execute the scripts in the following order (see the sketch after this list):

    1. fmp_k8s_storage_inst.sh. Execute when a distributed Ceph file storage is used. To set up the Ceph cluster, see above.

    2. fmp_k8s_volumes_inst.sh. It creates and prepares volumes to be used by fmp application.

    3. fmp_k8s_configmap_inst.sh. It creates a variables map.

    4. fmp_k8s_services_inst.sh. It sets up services to enable mutual application components interaction.

    5. fmp_k8s_statefulsets_inst.sh. It sets up conditions to enable database work inside fmp application.

    6. fmp_k8s_deployments_inst.sh. It sets up conditions to start components of fmp application.
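
A sketch of running the scripts in the listed order, assuming they are located in the unpacked fmp_k8s_v<version number> directory and have execute permissions:

% ./fmp_k8s_storage_inst.sh
% ./fmp_k8s_volumes_inst.sh
% ./fmp_k8s_configmap_inst.sh
% ./fmp_k8s_services_inst.sh
% ./fmp_k8s_statefulsets_inst.sh
% ./fmp_k8s_deployments_inst.sh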

Go back to the web interface and make sure that all required objects are created and application containers are running without errors.

  1. Check if the storage is connected. Go to the Storage > Storage Classes section:

  1. Check if persistent volumes are created. Go to the Storage > Persistent Volumes section:

Then select the fmp application and go to the Volumes tab of the Workloads section:

  1. Check that config maps, that is, the list of loaded variables, are present. Go to the Resources section:

  1. Go to the Workloads section and make sure that elements are created on the tabs:
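
The same objects can also be checked from the console of a main node (a verification sketch; fmpns is the namespace created earlier):

% kubectl get storageclass
% kubectl get pv
% kubectl get pods --namespace=fmpns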

As a result, a fault-tolerant cluster and Foresight Mobile Platform application are prepared and deployed.

Open the browser and go to the <main cluster ip> IP address specified during the HAProxy setup. The Foresight Mobile Platform login dialog box opens:

See also:

Installing and Setting Up Foresight Mobile Platform