Steps involved in eGov Deployment - Starter Kit
Create Kubernetes Cluster
Local Development K8s Cluster: use a local development environment such as Minikube to create the Kubernetes cluster (see the sketch after this list).
Cloud-Native K8s Cluster Services: use a managed cloud service such as AWS EKS to create the Kubernetes cluster.
On-Premise K8s Cluster: create your own Kubernetes cluster with dedicated master and worker nodes.
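For local development, bringing up and verifying the cluster can be scripted. A minimal sketch, assuming minikube and kubectl are already installed on the workstation:

```python
import subprocess

def run(cmd):
    """Run a shell command, echoing it, and fail loudly on error."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Start a local single-node cluster sized for development work.
run(["minikube", "start", "--memory", "8192", "--cpus", "4"])

# Confirm the node registered and is Ready.
run(["kubectl", "get", "nodes", "-o", "wide"])
```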
K8s Cluster Requirements
Cloud-native Kubernetes engine such as AKS, EKS, or GKE from Azure, AWS, or GCP respectively
6 k8s Nodes (m4.xlarge) with 16 GB RAM and 4 vCore CPUs each
On-Prem/Data Center or custom
1 Bastion (gateway) - t2.micro: 2 GB RAM, 1 vCore CPU
3 k8s Masters - t2.medium: 4 GB RAM, 2 vCore CPUs
6 k8s Nodes: 16 GB RAM and 4 vCore CPUs each
Create the Production-Ready Application Service List
eGov Platform Services
Frontend - Citizen and Employee web apps, etc.
Backbone - Elasticsearch, Kafka
Infra Services - Dashboard, Kibana, Telemetry, Logging
DB Setup
Use a managed PostgreSQL service (e.g. RDS) if you are using AWS, Azure, or GCP
Otherwise, provision a VM with a PostgreSQL DB (a connectivity check follows)
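Whichever option you choose, verify that the cluster network can reach the database before deploying services. A minimal check, assuming the psycopg2 driver; the endpoint and credentials below are placeholders:

```python
import psycopg2

# Hypothetical endpoint and credentials -- substitute your RDS/VM details.
conn = psycopg2.connect(
    host="egov-db.example.internal",
    port=5432,
    dbname="egov",
    user="egov_admin",
    password="changeme",
)
with conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone()[0])  # prints the PostgreSQL server version
conn.close()
```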
Setup CI Repository
Create and configure a shared repository for continuous push, commit, etc. Here we use GitLab as the shared repository.
Platform services can be forked from https://github.com/egovernments/egov-services
Setup CD Tool
Install and configure continuous deployment tools for automated build and test. Builds are created from the build-management scripts in the InfraOps GitHub repo. Here we use Jenkins/Spinnaker as the CD tool.
Setup Container Registry
Create a central container registry such as AWS ECR, GitLab Registry, DockerHub, or Artifactory. The CD tool will push container images to this central registry. A build-and-push sketch follows.
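What the CD tool does at this stage can be sketched with the Docker SDK for Python; the registry URL, image name, and credentials below are placeholders:

```python
import docker

client = docker.from_env()

# Hypothetical registry and image coordinates.
REGISTRY = "registry.example.com"
IMAGE = f"{REGISTRY}/egov/citizen-web:1.0.0"

# Build the service image from its Dockerfile, then push it to the central registry.
image, build_logs = client.images.build(path=".", tag=IMAGE)
client.login(registry=REGISTRY, username="ci-bot", password="changeme")
for line in client.images.push(IMAGE, stream=True, decode=True):
    print(line.get("status", ""))
```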
HTTP Traffic/Routing:
Route external HTTP traffic to the services deployed on the cluster through a LoadBalancer or Ingress (an example follows).
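As an illustration, an Ingress routing a path to one frontend service can be created with the official Kubernetes Python client; the host, namespace, and service names here are hypothetical:

```python
from kubernetes import client, config

# Assumes kubeconfig is already configured for the target cluster.
config.load_kube_config()

# Hypothetical host, namespace, and backend service for illustration.
ingress = client.V1Ingress(
    metadata=client.V1ObjectMeta(name="egov-ingress"),
    spec=client.V1IngressSpec(
        rules=[
            client.V1IngressRule(
                host="egov.example.com",
                http=client.V1HTTPIngressRuleValue(
                    paths=[
                        client.V1HTTPIngressPath(
                            path="/citizen",
                            path_type="Prefix",
                            backend=client.V1IngressBackend(
                                service=client.V1IngressServiceBackend(
                                    name="citizen-web",
                                    port=client.V1ServiceBackendPort(number=80),
                                )
                            ),
                        )
                    ]
                ),
            )
        ]
    ),
)
client.NetworkingV1Api().create_namespaced_ingress(namespace="egov", body=ingress)
```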
Kubernetes Cluster Provisioning
1. Managed Kubernetes Engine:
Choose your cloud provider (Azure, AWS, GCP, or your own private cloud)
Choose the provider-specific managed Kubernetes engine: AKS, EKS, or GKE
Follow the cloud provider-specific instructions to create a Kubernetes cluster (stable version 1.11 and beyond) with 5 to 6 worker nodes, each with 16 GB RAM and 4 vCore CPUs (m4.xlarge); a provisioning sketch follows this subsection
Provision a PostgreSQL DB (in the case of AWS, Azure, or GCP, use the managed DB service, e.g. RDS) and keep the DB server and DB credential details handy
Provision the disk volumes for Kafka, the ES cluster, and ZooKeeper as per the baselines below, and gather the volume ID details
Install kubectl on the DevOps local machine to interact with the cluster, and set up kubeconfig with the allowed user credentials
AKS on Azure
Sample volumes to be provisioned
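For AWS, the cluster and node group matching the node baseline above can be provisioned with boto3 (EKS). A sketch with placeholder IAM role ARNs and subnet IDs, which you must replace with your account's values:

```python
import boto3

eks = boto3.client("eks", region_name="ap-south-1")

# Placeholder ARNs and subnets -- replace with your own.
CLUSTER_ROLE = "arn:aws:iam::123456789012:role/eks-cluster-role"
NODE_ROLE = "arn:aws:iam::123456789012:role/eks-node-role"
SUBNETS = ["subnet-aaaa1111", "subnet-bbbb2222", "subnet-cccc3333"]

# Managed control plane.
eks.create_cluster(
    name="egov-cluster",
    roleArn=CLUSTER_ROLE,
    resourcesVpcConfig={"subnetIds": SUBNETS},
)
eks.get_waiter("cluster_active").wait(name="egov-cluster")

# Worker node group per the baseline: 6 x m4.xlarge (16 GB RAM, 4 vCPUs).
eks.create_nodegroup(
    clusterName="egov-cluster",
    nodegroupName="egov-workers",
    nodeRole=NODE_ROLE,
    subnets=SUBNETS,
    instanceTypes=["m4.xlarge"],
    scalingConfig={"minSize": 6, "maxSize": 6, "desiredSize": 6},
)
```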
2. Private Cloud - Manually Set Up a Kubernetes Cluster:
Create a VPC (Virtual Private Cloud) spanning multiple availability zones
Provision Linux VMs with any container-optimised OS (CoreOS, RHEL, Ubuntu, Debian, etc.) within the VPC subnet
Provision 1 bastion host that acts as a proxy server to the Kubernetes cluster nodes
3 Master Nodes with 4 GB RAM and 2 vCore CPUs each
6 Worker Nodes with 16 GB RAM and 4 vCore CPUs each
PostgreSQL DB (Linux VM)
Provision the disk volumes for Kafka, the ES cluster, and ZooKeeper as per the volume baselines, and gather the volume ID details
Create a LoadBalancer in front of the Kube API server, and an Ingress that routes external traffic to the services deployed on the cluster
Set up AuthN & AuthZ
Install kubectl on the DevOps local machine to interact with the cluster, and set up kubeconfig with the allowed user credentials (a verification sketch follows this list)
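Once kubeconfig is in place, cluster access can be verified with the official Kubernetes Python client; a minimal sketch:

```python
from kubernetes import client, config

# Load credentials from the kubeconfig prepared for the allowed user (~/.kube/config).
config.load_kube_config()

v1 = client.CoreV1Api()
for node in v1.list_node().items:
    # Report each node's name and Ready condition to confirm access.
    ready = next((c.status for c in node.status.conditions if c.type == "Ready"), "Unknown")
    print(node.metadata.name, "Ready:", ready)
```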
Useful Step-By-Step Links:
- Installing the Client Tools
- Provisioning Compute Resources
- Provisioning the CA and Generating TLS Certificates
- Generating Kubernetes Configuration Files for Authentication
- Generating the Data Encryption Config and Key
- Bootstrapping the etcd Cluster
- Bootstrapping the Kubernetes Control Plane
- Bootstrapping the Kubernetes Worker Nodes
- Configuring kubectl for Remote Access
- Provisioning Pod Network Routes
Deployment Architecture:
Every code commit is reviewed and merged to the master branch through pull requests
Each merge triggers a new CI pipeline that enforces code quality and runs CI tests before building the artefacts
Artefacts are version-controlled and pushed to an artifact repository such as Nexus
After a successful CI run, Jenkins bakes the Docker images with the latest artefacts and pushes the newly baked images to the Docker repo
The deployment pipeline pulls the image and deploys it to the corresponding environment
Deployment Scripts:
A Python-based deployment script reads values from the Jinja 2 template and deploys into the cluster.
Each environment has one Jinja template that defines the services to be deployed and their dependencies such as config, env, secrets, DB credentials, persistent volumes, manifests, routing rules, etc. (a render-and-deploy sketch follows).
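A minimal sketch of that render-and-deploy flow, assuming a hypothetical templates/dev.yaml.j2 template and kubectl access to the target cluster:

```python
import subprocess
from jinja2 import Environment, FileSystemLoader

# Hypothetical layout: one template per environment under templates/.
env = Environment(loader=FileSystemLoader("templates"))
template = env.get_template("dev.yaml.j2")

# Values the real script would source per service/environment.
manifest = template.render(
    service_name="citizen-web",
    image="registry.example.com/egov/citizen-web:1.0.0",
    replicas=2,
    db_secret="egov-db-credentials",
)

# Apply the rendered manifest to the cluster via kubectl.
subprocess.run(["kubectl", "apply", "-f", "-"], input=manifest.encode(), check=True)
```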
Cluster/Service Monitoring
- Monitoring
- Prometheus / CloudWatch for node monitoring
- Prometheus for pod level monitoring
- Logging
- Logs are tagged with correlation-id (see the logging sketch after this list)
- Fluent-bit for log scraping
- Kafka for temporary log storage and processing
- Kafka connect to push logs to various sinks
- Elasticsearch [sink] / Kibana for visualizations
- Tracing
- Jaeger for distributed tracing
- Traces are tagged with correlation-id
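Correlation-id tagging on the application side can be sketched with a standard-library logging filter; the correlation_id field name is an assumption for illustration:

```python
import logging
import uuid

class CorrelationIdFilter(logging.Filter):
    """Stamp every log record with a correlation-id so logs and traces
    belonging to one request can be joined downstream."""

    def __init__(self, correlation_id=None):
        super().__init__()
        self.correlation_id = correlation_id or str(uuid.uuid4())

    def filter(self, record):
        record.correlation_id = self.correlation_id
        return True

handler = logging.StreamHandler()
handler.setFormatter(
    logging.Formatter("%(asctime)s [%(correlation_id)s] %(levelname)s %(message)s")
)
logger = logging.getLogger("egov")
logger.addHandler(handler)
logger.addFilter(CorrelationIdFilter())
logger.setLevel(logging.INFO)

logger.info("request processed")  # emitted with the correlation-id tag
```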
Multi-State Cluster Orchestration and Management