
Steps involved in eGov Deployment - Starter Kit

  1. Create Kubernetes Cluster

    • Local Development K8S Cluster: Use a development support environment, e.g. Minikube, to create a local Kubernetes cluster.

    • Cloud Native K8S Cluster Services: Use managed cloud services to create the cluster, e.g. AWS EKS.

    • On-Premise K8S Cluster: Create your own Kubernetes cluster from master and worker nodes.

    • K8s Cluster Requirement

      1. Cloud native Kubernetes engine such as EKS, AKS or GKE (from AWS, Azure or GCP respectively)

        1. 6 k8s nodes - m4.xlarge, each with 16GB RAM and 4 vCore CPUs

      2. On-Prem/Data Center or custom

        • 1 Bastion - t2.micro (gateway) with 2GB RAM and 1 vCore CPU

        • 3 k8s masters - t2.medium, each with 4GB RAM and 2 vCore CPUs

        • 6 k8s nodes, each with 16GB RAM and 4 vCore CPUs
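For the managed-engine option, the sizing above can be captured in a declarative cluster config. A minimal sketch for AWS EKS using eksctl follows; the cluster name and region are illustrative assumptions, not project values:

```yaml
# Hypothetical eksctl config reflecting the 6-node m4.xlarge baseline above
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: egov-cluster        # illustrative name
  region: ap-south-1        # illustrative region
nodeGroups:
  - name: workers
    instanceType: m4.xlarge # 4 vCores, 16GB RAM
    desiredCapacity: 6
```

Applying it with `eksctl create cluster --config-file=cluster.yaml` provisions the worker nodes in one step.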

  2. Create Production-ready Application Service List

    • eGov Platform Services

    • Frontend - Citizen Web, Employee Web, etc.

    • Backbone - Elasticsearch, Kafka

    • Infra Services - Dashboard, Kibana, Telemetry, Logging

    • DB Setup

      • Use a managed database service (e.g. AWS RDS) with PostgreSQL if you are on AWS, Azure or GCP

      • Otherwise, provision a VM with a PostgreSQL DB
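Whichever option is chosen, the DB host and credentials are typically injected into the cluster as a Kubernetes Secret that the deployment scripts can reference. A hypothetical example (all names and values are illustrative):

```yaml
# Hypothetical Secret holding PostgreSQL connection details
apiVersion: v1
kind: Secret
metadata:
  name: egov-db-credentials   # illustrative name
type: Opaque
stringData:
  db-host: postgres.example.internal
  db-name: egov
  db-username: egov_user
  db-password: change-me      # never commit real credentials
```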

  3. Setup CI Repository

    Create and configure a shared repository for continuous pushes, commits, etc. Here we are using GitLab as the shared repository.

    Platform services can be forked from https://github.com/egovernments/egov-services

  4. Setup CD Tool

    Install and configure continuous deployment tools for automatic build and test. Builds are created from the build management scripts in the InfraOps GitHub repo. Here we are using Jenkins/Spinnaker as the CD tool.

  5. Setup Container Registry

    Create a central container registry such as AWS ECR, GitLab Registry, DockerHub or Artifactory. The CD tool will push container images to this central registry.

                         


HTTP Traffic/Routing:


  

Kubernetes Cluster Provisioning 

  1. Managed Kubernetes Engine:

    1. Choose your cloud provider (Azure, AWS, GCP or your private cloud)

    2. Choose the cloud provider's managed Kubernetes engine, i.e. AKS, EKS or GKE

    3. Follow the cloud provider's instructions to create a Kubernetes cluster (stable version 1.11 and beyond) with 5 to 6 worker nodes, each with 16GB RAM and 4 vCore CPUs (m4.xlarge)

    4. Provision a PostgreSQL DB (on AWS, Azure or GCP use the managed database service, e.g. RDS) and note the DB server and credential details

    5. Provision the disk volumes for Kafka, the ES cluster and ZooKeeper as per the baselines below, and gather the volume ID details

    6. Install kubectl on the DevOps local machine to interact with the cluster, and set up kubeconfig with the allowed user credentials


AKS on Azure

                           




Sample volumes to be provisioned
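As a hedged illustration of one such volume, a PersistentVolumeClaim for a Kafka broker might look like the following; the size and storage class are assumptions, not project baselines:

```yaml
# Hypothetical claim for one Kafka broker's data volume
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: kafka-broker-0-data   # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp2       # AWS EBS; adjust per cloud provider
  resources:
    requests:
      storage: 100Gi          # assumed size
```

Similar claims would be created for each Elasticsearch data node and ZooKeeper member.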




      

  2. Private Cloud - Manually set up a Kubernetes Cluster:

    1. Create a VPC or virtual private network with multiple availability zones

    2. Provision Linux VMs with a container-optimised OS (CoreOS, RHEL, Ubuntu, Debian, etc.) within the VPC subnet

    3. Provision 1 Bastion host that acts as a proxy server to the Kubernetes cluster nodes

    4. Provision 3 master nodes, each with 4GB RAM and 2 vCore CPUs

    5. Provision 6 worker nodes, each with 16GB RAM and 4 vCore CPUs

    6. Provision a PostgreSQL DB (Linux VM)

    7. Provision the disk volumes for Kafka, the ES cluster and ZooKeeper as per the baselines below, and gather the volume ID details

    8. Create a LoadBalancer or Ingress that routes external traffic to the services deployed on the cluster

    9. Set up AuthN & AuthZ

    10. Install kubectl on the DevOps local machine to interact with the cluster, and set up kubeconfig with the allowed user credentials
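The routing step can be expressed as a standard Ingress resource. A minimal sketch, assuming a recent cluster (the `networking.k8s.io/v1` API; older 1.11-era clusters use `extensions/v1beta1`) and a hypothetical `citizen-ui` service:

```yaml
# Hypothetical Ingress routing external HTTP traffic to a cluster service
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: egov-ingress            # illustrative name
spec:
  rules:
    - http:
        paths:
          - path: /citizen
            pathType: Prefix
            backend:
              service:
                name: citizen-ui  # hypothetical service name
                port:
                  number: 80
```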


       Useful Step-By-Step Links:    




            




Deployment Architecture:

  • Every code commit is reviewed and merged to the master branch through pull requests

  • Each merge triggers a new CI pipeline that ensures code quality and CI tests before building the artefacts

  • Artefacts are version controlled and pushed to an artifact repository like Nexus

  • After a successful CI run, Jenkins bakes Docker images with the latest artefacts and pushes the newly baked images to the Docker repo

  • The deployment pipeline pulls the image and deploys it to the corresponding environment
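Since GitLab is the shared repository here, the CI flow above could be sketched as a `.gitlab-ci.yml`; the job names and the Maven build command are illustrative assumptions (the real build scripts live in the InfraOps repo):

```yaml
# Hypothetical .gitlab-ci.yml mirroring the CI flow above
stages:
  - test
  - build
  - publish

code-quality:
  stage: test
  script:
    - mvn verify    # assumes a Maven service; swap for your build tool

docker-build:
  stage: build
  script:
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA .

docker-push:
  stage: publish
  script:
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
```

`$CI_REGISTRY_IMAGE` and `$CI_COMMIT_SHORT_SHA` are predefined GitLab CI variables, giving every merge a uniquely versioned image.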

       Deployment Scripts:

  • A Python-based deployment script reads the values from the Jinja2 template and deploys them into the cluster.

  • Each environment has one Jinja2 template that defines the services to be deployed and their dependencies: config, env, secrets, DB credentials, persistent volumes, manifests, routing rules, etc.
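The render step of such a script can be sketched as follows. This is a minimal stand-in using the stdlib `string.Template` rather than Jinja2 (which the real script uses), and the service names and image are illustrative:

```python
# Sketch: fill per-environment values into a manifest template,
# using string.Template as a stdlib stand-in for Jinja2.
from string import Template

manifest_template = Template("""\
apiVersion: apps/v1
kind: Deployment
metadata:
  name: $service
spec:
  replicas: $replicas
  template:
    spec:
      containers:
        - name: $service
          image: $image
""")

def render(env_values: dict) -> str:
    """Substitute the environment's values into the manifest template."""
    return manifest_template.substitute(env_values)

manifest = render({"service": "citizen-ui",      # hypothetical service
                   "replicas": 2,
                   "image": "registry.example/citizen-ui:1.0"})
print(manifest)
```

The real pipeline would pipe the rendered manifest to `kubectl apply -f -` against the target environment's kubeconfig.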

 

      




Cluster/Service Monitoring 


  • Monitoring
    1. Prometheus / CloudWatch for node monitoring
    2. Prometheus for pod level monitoring
  • Logging
    1. Logs are tagged with correlation-id
    2. Fluent-bit for log scraping
    3. Kafka for temporary log storage and processing
    4. Kafka connect to push logs to various sinks
    5. Elasticsearch [sink] / Kibana for visualizations
  • Tracing
    1. Jaeger for distributed tracing
    2. Traces are tagged with correlation-id
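The correlation-id tagging used by both logging and tracing can be illustrated with a small Python logging filter; the field name and logger name are assumptions, not the platform's actual implementation:

```python
# Sketch: stamp every log record with a correlation-id so downstream
# sinks (Fluent Bit -> Kafka -> Elasticsearch/Kibana) and Jaeger traces
# can be joined on the same id.
import logging
import uuid

class CorrelationIdFilter(logging.Filter):
    """Attach a correlation_id attribute to every record it sees."""
    def __init__(self, correlation_id: str):
        super().__init__()
        self.correlation_id = correlation_id

    def filter(self, record: logging.LogRecord) -> bool:
        record.correlation_id = self.correlation_id
        return True  # never drop the record, only annotate it

corr_id = str(uuid.uuid4())  # in practice, taken from the incoming request

handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter(
    "%(asctime)s %(levelname)s [%(correlation_id)s] %(message)s"))

logger = logging.getLogger("egov")          # illustrative logger name
logger.addFilter(CorrelationIdFilter(corr_id))
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.info("request processed")
```

In a real service the id would be read from (or generated into) a request header and propagated to downstream calls.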






Multi-State Cluster Orchestration and Management



              








