DIGIT Deployment on Rancher
Prerequisites for Deploying DIGIT on Rancher
Prerequisites
Provision a Rancher Cluster: Ensure that a Rancher cluster is provisioned and running.
Provision an NFS Server: Follow the NFS server provisioning guide. This is required to set up persistent volumes for StatefulSets.
Export Rancher Cluster's Kubeconfig: Copy the kubeconfig file for the cluster from the Rancher management console to the machine from which you will run the deployment.
Install Go: Ensure that Go is installed on your system.
Install kubectl: Ensure that kubectl is installed on your system.
Label Public Node: Ensure the cluster has one public node for the MetalLB load balancer, and label that node with:
kubectl label node <node-name> deploy-ingress-controller=true
Clone the HCM-DevOps Repository
Clone the repository containing the necessary files for deploying DIGIT on Rancher:
git clone -b rancher-helm git@github.com:egovernments/health-campaign-devops.git
cd deploy-as-code/deployer/
Deploy DIGIT on Rancher
Deploy the DIGIT dependency charts by running the following command:
export KUBECONFIG=<path-of-the-KUBECONFIG>
go run standalone_installer.go
A few options need to be provided before deploying:
Are you good to proceed?: (yes/no)
Please enter the fully qualified path of your kubeconfig file: <path-of-the-KUBECONFIG>
Please enter the cluster context to be used from the available contexts: <Enter the cluster name>
Which Product would you like to install, Please Select: <Select Health>
Which version of the product would you like to install, Select below: <Select health-demo-v1.6>
Select the DIGIT modules that you want to install, choose Exit to complete selection: <Select relevant module to install>
Choose the target env files that are identified from your local configs: <Select egov-demo>
Do you want to preview the k8s manifests before the actual Deployment: <Yes/No>
Are we good to proceed with the actual deployment?: (Yes/No) (This prompt appears only if you selected No for previewing the k8s manifests)
For configuring configmaps, secrets, or any other configuration related to the DIGIT deployment, refer to this doc.
MetalLB Deployment on Rancher:
MetalLB works as an on-premise load balancer. It is installed on the Rancher cluster, and its Helm chart is included in the backbone directory under the Helm charts. It allows the ingress controller's LoadBalancer service to be assigned an external IP.
The cluster needs one public node (i.e. a node with a public IP enabled) on which the MetalLB controller will be deployed.
Before deploying MetalLB, label the node with
deploy-ingress-controller = true
so that the pod is scheduled only on that specific node, as the Helm chart contains a nodeSelector with the above label. To label the node, use:
kubectl label node <node-name> deploy-ingress-controller=true
After deploying MetalLB, the
IPAddressPool
and
L2Advertisement
CRDs need to be configured in order to make MetalLB work with the NGINX ingress controller. The configuration for these CRDs looks like:
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: public-ip-pool
spec:
  addresses:
    - 192.168.1.127/32 # Replace with the private IP of the public node
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: public-l2
spec:
  ipAddressPools:
    - public-ip-pool
To fetch the private IP of the public node, use
kubectl get nodes -o wide
and add the internal IP of the public node to the
IPAddressPool
CRD above. Apply the manifest; once applied, the NGINX ingress service will get the external IP assigned.
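Instead of editing the manifest by hand, the steps above can be sketched as a small helper that renders the two CRDs for a given internal IP. This is a sketch, not part of the official charts; the function and file names are illustrative, and the pool name matches the example manifest above:

```shell
#!/bin/sh
# Render a MetalLB IPAddressPool + L2Advertisement manifest for a single node IP.
# Usage: render_metallb_config <internal-ip> > metallb-config.yaml
render_metallb_config() {
  node_ip="$1"
  cat <<EOF
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: public-ip-pool
spec:
  addresses:
    - ${node_ip}/32
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: public-l2
spec:
  ipAddressPools:
    - public-ip-pool
EOF
}

# Render the config using the node's internal IP (from: kubectl get nodes -o wide)
render_metallb_config 192.168.1.127 > metallb-config.yaml
# kubectl apply -f metallb-config.yaml   # run this against the cluster
```

After applying, the external IP on the NGINX ingress service can be checked with `kubectl get svc -A | grep LoadBalancer`.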
Take the external IP (i.e. public IP) of the public node and map it to the domain.
Note:
Deploy the MetalLB controller on a public Rancher node (a node with a public IP enabled).
If the public node is restarted, its public/private IPs may also change. In that case, the private IP needs to be updated in the
IPAddressPool
CRD, and the public IP needs to be mapped to the domain again.
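The IP update after a node restart can be sketched as a simple substitution in the locally saved manifest, followed by re-applying it. The file name metallb-config.yaml and both IP values are placeholders; the block recreates a minimal file locally purely for illustration:

```shell
#!/bin/sh
# Placeholders: OLD_IP is the previous internal IP in the IPAddressPool,
# NEW_IP is the node's new internal IP (from: kubectl get nodes -o wide).
OLD_IP="192.168.1.127"
NEW_IP="192.168.1.201"

# Recreate a minimal saved manifest locally for illustration.
printf 'spec:\n  addresses:\n    - %s/32\n' "$OLD_IP" > metallb-config.yaml

# Swap the old address for the new one in the saved manifest.
sed -i "s#${OLD_IP}/32#${NEW_IP}/32#" metallb-config.yaml

# kubectl apply -f metallb-config.yaml   # re-apply against the cluster,
#                                        # then re-map the domain to the new public IP
```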