Execute all commands on the Kubernetes (k8s) main nodes as the root user. On RHEL / CentOS 7, traffic can be routed incorrectly because iptables is bypassed; make sure that net.bridge.bridge-nf-call-iptables is set to 1 in the sysctl configuration.
NOTE. The main cluster nodes are designated below as kn0, kn1, kn2; the worker nodes as kn3, kn4, kn5.
Execute the following on each node:
As the root user, adjust the settings required for correct Kubernetes networking:
cat >>/etc/sysctl.d/kubernetes.conf<<EOF
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
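The net.bridge.* keys above exist only while the br_netfilter kernel module is loaded. A minimal sketch for loading it now and on every boot (assumes a systemd-based host; requires root):

```shell
# Load br_netfilter so the net.bridge.* sysctl keys become available
modprobe br_netfilter 2>/dev/null || true   # no-op inside containers
# Persist the module across reboots via systemd modules-load.d
mkdir -p /etc/modules-load.d
echo 'br_netfilter' > /etc/modules-load.d/kubernetes.conf
cat /etc/modules-load.d/kubernetes.conf
```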
Install the packages required to access the repository over HTTPS:
apt-get update && apt-get install -y apt-transport-https curl
Add the official Kubernetes GPG key to the system:
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
Add a Kubernetes repository:
cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
Make sure that /proc/sys/net/bridge/bridge-nf-call-iptables is set to 1 to ensure correct operation of the CNI (Container Network Interface). First check the current value:
cat /proc/sys/net/bridge/bridge-nf-call-iptables
If the current value is 0, execute the command:
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
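The value can be re-checked at any time; a small sketch that also reports when the key is absent (which means the br_netfilter module is not loaded):

```shell
# Read the key, falling back to a marker when it does not exist
v=$(cat /proc/sys/net/bridge/bridge-nf-call-iptables 2>/dev/null || echo "missing")
echo "bridge-nf-call-iptables: $v"
```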
Initialize the Kubernetes cluster on the kn0 node as the root user:
Go to the home directory of the fmpadmin user (/home/fmpadmin) and unpack the archive with the scripts and yaml files on the same controlling node:
% tar -xvzf ./fmp_k8s_v<version number>.tar
Go to the directory with unpacked scripts:
% cd ./fmp_k8s_v<version number>/
Go to the rke subdirectory:
% cd ./rke
Execute the command:
% ls -lh ./
The list of files in the current directory is displayed:
-rwxr-xr-x 1 root root 388 Mar 26 20:44 fmpclust.yml
-rw-rw-r-- 1 root root 1.3K Mar 26 20:48 ReadMe.txt
-rwxr-xr-x 1 root root 36M Mar 28 14:54 rke
Move or copy the rke file to the /usr/local/bin/ directory:
% mv ./rke /usr/local/bin/
Set permissions for the rke file:
% chmod 755 /usr/local/bin/rke
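To confirm that the binary is on the PATH and executable, a quick check can be run (rke prints its build version with the --version flag):

```shell
# Print the rke version if the binary is reachable, otherwise warn
if command -v rke >/dev/null 2>&1; then
  rke --version
else
  echo "rke not found on PATH"
fi
```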
Edit the contents of the fmpclust.yml file according to your configuration:
vi ./fmpclust.yml
nodes:
  - address: m1
    user: your user
    role: [controlplane,etcd]
  - address: m2
    user: your user
    role: [controlplane,etcd]
  - address: m3
    user: your user
    role: [controlplane,etcd]
  - address: w1
    user: your user
    role: [worker]
  - address: w2
    user: your user
    role: [worker]
  - address: w3
    user: your user
    role: [worker]
services:
  kubelet:
    extra_binds:
      - "/lib/modules:/lib/modules"
    extra_args:
      node-status-update-frequency: 10s
  etcd:
    snapshot: true
    creation: 6h
    retention: 24h
  kube-api:
    extra_args:
      default-not-ready-toleration-seconds: 30
      default-unreachable-toleration-seconds: 30
  kube-controller:
    extra_args:
      node-monitor-period: 5s
      node-monitor-grace-period: 40s
      pod-eviction-timeout: 30s
authentication:
  strategy: x509
  # sans:
  #   - "10.99.255.254"
network:
  plugin: flannel
NOTE. 10.99.255.254 is the common IP address of the server cluster (if applicable); m1, m2, m3 are the names of the Kubernetes main nodes; w1, w2, w3 are the names of the Kubernetes worker nodes; your user is the user that performs interaction between the nodes (fmpadmin).
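RKE connects to every node over SSH as the user named in fmpclust.yml, so key-based login must already work from the controlling node. A preflight sketch using the sample node names above (adjust the names and user to your environment):

```shell
# Check non-interactive SSH to each node before running rke up
for node in m1 m2 m3 w1 w2 w3; do
  if ssh -o BatchMode=yes -o ConnectTimeout=5 "fmpadmin@$node" true 2>/dev/null; then
    echo "$node: ssh ok"
  else
    echo "$node: ssh failed - fix key-based access before rke up"
  fi
done
```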
Deploy and initialize the cluster:
% rke up --config fmpclust.yml
This takes considerable time; all details of the cluster deployment stages are displayed in the console.
If everything is correct, the cluster is deployed successfully and the server console displays the string:
INFO[0103] Finished building Kubernetes cluster successfully
If there are errors, execute rke up with the -d (debug) key to display detailed information:
% rke -d up --config fmpclust.yml
After cluster initialization, the kube_config_fmpclust.yml file appears in the current directory next to the fmpclust.yml file. Move or copy it to the user profile so that the user can interact with the cluster. Execute this operation as the fmpadmin user:
% mkdir ~/.kube
% cd ./fmp_k8s_v<version number>/rke/
% cp ./kube_config_fmpclust.yml ~/.kube/config
% sudo chown -R fmpadmin:fmpadmin /home/fmpadmin/.kube
As a result, the Kubernetes cluster is installed and initialized. Check that it works:
Check the server and k8s client versions:
% kubectl version --short
Example of command execution result:
Client Version: v1.13.5
Server Version: v1.13.5
Check the status of the k8s components:
% kubectl get cs
Example of command execution result:
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}
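As a final check, the nodes themselves can be listed; all six should report STATUS Ready (node names follow your fmpclust.yml, so the exact output varies):

```shell
# List cluster nodes, with a fallback message if kubectl is unavailable
if command -v kubectl >/dev/null 2>&1; then
  kubectl get nodes || echo "kubectl could not reach the cluster"
else
  echo "kubectl not found on PATH"
fi
```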
After completing these operations, install Rancher.
See also:
Preparing Environment for Foresight Mobile Platform | Installing Rancher