The Docker image local storage is installed to load the mobile platform images and subsequently install Foresight Mobile Platform. Alternatively, Foresight Mobile Platform can be installed from an external repository.
To install the Docker image local storage, determine which worker node will host the storage, then execute the following operations on the first master node from any directory:
Create the registryfmp namespace:
kubectl create ns registryfmp
Generate an encryption certificate to ensure that the worker node interacts correctly with the local storage:
mkdir -p certs
openssl req \
-newkey rsa:4096 -nodes -sha256 -keyout certs/registry.key \
-addext "subjectAltName = IP:<work node IP address>" \
-x509 -days 3650 -out certs/registry.crt
In the <work node IP address> substitution, specify the IP address that will be used as the local storage address.
After executing these operations, the certs folder contains the following files:
registry.crt. The encryption certificate.
registry.key. The encryption key.
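The same request can also be run non-interactively and verified in one pass. This is an optional sketch, assuming OpenSSL 1.1.1 or later (for the -addext and -ext options); the subject /CN=registry and the address 192.0.2.10 are placeholder examples, not values from this guide:

```shell
mkdir -p certs

# Non-interactive variant of the request above: -subj avoids the
# interactive DN prompts; replace 192.0.2.10 with the worker node IP.
openssl req \
  -newkey rsa:4096 -nodes -sha256 -keyout certs/registry.key \
  -subj "/CN=registry" \
  -addext "subjectAltName = IP:192.0.2.10" \
  -x509 -days 3650 -out certs/registry.crt

# Confirm the SAN and validity period before creating the secret.
openssl x509 -in certs/registry.crt -noout -dates -ext subjectAltName
```

The subjectAltName line of the output must contain the same IP address that clients will use to reach the storage, otherwise TLS verification fails later.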
Create a secret to store the generated encryption certificate:
kubectl create secret tls registry-cert \
--cert=certs/registry.crt \
--key=certs/registry.key \
-n registryfmp
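As an optional sanity check, confirm that the secret was created with the kubernetes.io/tls type (such secrets carry the tls.crt and tls.key entries):

```shell
# The secret should exist in the registryfmp namespace
# with TYPE kubernetes.io/tls.
kubectl get secret registry-cert -n registryfmp
```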
Create the PV.yaml file describing the persistent volume, with the following contents:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: registryfmp
  labels:
    type: local
spec:
  storageClassName: longhorn
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - <work node name>
  csi:
    driver: driver.longhorn.io
    fsType: ext4
    volumeHandle: registryfmp
    volumeAttributes:
      dataLocality: disabled
      fromBackup: ''
      fsType: ext4
      numberOfReplicas: '3'
      staleReplicaTimeout: '30'
  persistentVolumeReclaimPolicy: Delete
  volumeMode: Filesystem
In the <work node name> substitution, specify the name of the worker node that will host the persistent volume and run the local storage, for example, k8s-worker1.
Apply the persistent volume configuration using the PV.yaml file:
kubectl apply -f PV.yaml
Create the PVC.yaml file describing the persistent volume claim, with the following contents:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: registryfmp
  namespace: registryfmp
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 50Gi
Apply the persistent volume claim configuration using the PVC.yaml file:
kubectl apply -f PVC.yaml
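Optionally, verify that the persistent volume and the claim have bound to each other before deploying the registry:

```shell
# Both resources should report STATUS Bound.
kubectl get pv registryfmp
kubectl get pvc registryfmp -n registryfmp
```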
Create the Deployment.yaml file describing the deployment, with the following contents:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: registry
  name: registry
  namespace: registryfmp
spec:
  replicas: 1
  selector:
    matchLabels:
      run: registry
  template:
    metadata:
      labels:
        run: registry
    spec:
      containers:
        - name: registry
          image: registry:2
          ports:
            - containerPort: 5000
          env:
            - name: REGISTRY_HTTP_TLS_CERTIFICATE
              value: "/certs/tls.crt"
            - name: REGISTRY_HTTP_TLS_KEY
              value: "/certs/tls.key"
          volumeMounts:
            - name: registry-certs
              mountPath: "/certs"
              readOnly: true
            - name: registry-data
              mountPath: /var/lib/registry
              subPath: registry
      volumes:
        - name: registry-certs
          secret:
            secretName: registry-cert
        - name: registry-data
          persistentVolumeClaim:
            claimName: registryfmp
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/hostname
                    operator: In
                    values:
                      - <work node name>
In the <work node name> substitution, specify the name of the worker node specified at Step 4.
Apply the deployment configuration using the Deployment.yaml file:
kubectl apply -f Deployment.yaml
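Optionally, wait for the rollout to finish and confirm that the pod was scheduled on the selected worker node:

```shell
# Blocks until the registry pod is available.
kubectl rollout status deployment/registry -n registryfmp

# The NODE column should show the worker node chosen earlier.
kubectl get pods -n registryfmp -o wide
```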
Create the Service.yaml file describing the network service, with the following contents:
apiVersion: v1
kind: Service
metadata:
  name: registryfmp-service
  namespace: registryfmp
spec:
  type: NodePort
  selector:
    run: registry
  ports:
    - name: registryfmp-tcp
      protocol: TCP
      port: 5000
      targetPort: 5000
  externalIPs:
    - <work node IP address>
In the <work node IP address> substitution, specify the IP address specified at Step 2.
Apply the network service configuration using the Service.yaml file:
kubectl apply -f Service.yaml
Check access to the local storage:
curl --cacert certs/registry.crt https://<work node IP address>:5000/v2/_catalog
The response should contain the storage catalog contents, for example:
{"repositories":[]}
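The storage can also be exercised end to end by pushing a test image. The sketch below assumes the docker CLI is available on the host and uses busybox purely as an example image; <work node IP address> is the same address as above:

```shell
# Tag a small public image with the local storage address and push it.
docker pull busybox:latest
docker tag busybox:latest <work node IP address>:5000/busybox:latest
docker push <work node IP address>:5000/busybox:latest

# The catalog should now list the pushed repository.
curl --cacert certs/registry.crt https://<work node IP address>:5000/v2/_catalog
```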
After executing these operations, the Docker image local storage is installed on the worker node.
Execute the following operations on each cluster node:
Copy the registry.crt encryption certificate to the /usr/local/share/ca-certificates folder:
scp certs/registry.crt root@<node IP address>:/usr/local/share/ca-certificates/registry.crt
Refresh the certificate store:
update-ca-certificates
Restart the containerd container runtime service used by Deckhouse:
systemctl restart containerd.service
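To confirm that the node now trusts the storage certificate, the catalog request can be repeated without the --cacert option; TLS verification should succeed against the system trust store:

```shell
# Should return the catalog without certificate errors.
curl https://<work node IP address>:5000/v2/_catalog
```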
Next, load mobile platform images to the local storage.
See also:
Cluster Configuration | Preparation and Deployment of Fault-Tolerant Cluster Based on Kubernetes | Loading Mobile Platform Images to Local Storage