Cluster Sizing:

  • Base Minimal Services:
    • 50+ Services
      • Core, Dashboard, PGR, PT, TL, Billing, Reports, HRMS, etc.
      • Telemetry, Infra, Kafka, ES, ZooKeeper, Zuul, nginx, etc.
    • 100+ Pods (egov, monitoring, logging, es-cluster, Kafka, backbone)


  • K8s Cluster Requirement
    • 1 Bastion (Gateway) - t2.micro with 2 GB RAM, 1 vCore CPU
    • 2 k8s Masters - t2.medium with 4 GB RAM, 2 vCore CPU each
    • 6 k8s Nodes, each with 16 GB RAM and 4 vCore CPU



                              OR

    • AKS/EKS/GKE managed Kubernetes cluster engine from Azure, AWS, or GCP
    • 6 k8s Nodes - m4.xlarge, each with 16 GB RAM and 4 vCore CPU
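As a rough sanity check on the sizing above (an illustrative calculation only; the figures are raw node capacity, before OS and kubelet reservations), the worker pool works out as follows:

  # Rough capacity check for the worker pool described above.
  # Raw node capacity only - schedulable capacity is lower once the OS,
  # kubelet, and system pods reserve their share.
  WORKER_NODES = 6
  RAM_GB_PER_NODE = 16
  VCPU_PER_NODE = 4
  EXPECTED_PODS = 100          # "100+ Pods" from the sizing above

  total_ram_gb = WORKER_NODES * RAM_GB_PER_NODE    # 96 GB
  total_vcpu = WORKER_NODES * VCPU_PER_NODE        # 24 vCores

  print(f"Worker pool: {total_ram_gb} GB RAM, {total_vcpu} vCores")
  print(f"Average per pod: ~{total_ram_gb * 1024 // EXPECTED_PODS} MiB RAM, "
        f"~{total_vcpu * 1000 // EXPECTED_PODS} millicores")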

Kubernetes Cluster Provisioning:

Choose your cloud provider (Azure, AWS, GCP, or your own private cloud), then provision the cluster in one of the two ways below.

  1. Managed Kubernetes Engine:
    1. Choose to go with a cloud-provider-specific managed Kubernetes engine like AKS, EKS, or GKE, or provision the VMs manually as per the cluster requirements in the sizing section above.
    2. If using AKS, EKS, or GKE, follow the cloud provider's instructions to create a Kubernetes cluster (stable version 1.11) with 5 to 6 worker nodes, each with 16GB RAM and 4 vCore CPU (m4.xlarge).
    3. PostgreSQL DB (in case of AWS, Azure, or GCP, use the managed RDS); keep the DB server and DB credential details handy.
    4. Provision the disk volumes for Kafka, ES-Cluster, and ZooKeeper as per the below baselines and gather the volume ID details.
    5. Install kubectl on the DevOps local machine to interact with the cluster, and set up kubeconfig with the allowed user credentials (see the sketch below).
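Once kubeconfig is in place, a quick way to verify the cluster is reachable is the official Kubernetes Python client; a minimal sketch, assuming the kubeconfig created above is the default one on the DevOps machine:

  # Minimal connectivity check with the official Kubernetes Python client
  # (pip install kubernetes). Assumes ~/.kube/config already points at the
  # newly created cluster with the allowed user credentials.
  from kubernetes import client, config

  config.load_kube_config()      # loads the default kubeconfig/context
  v1 = client.CoreV1Api()

  for node in v1.list_node().items:
      alloc = node.status.allocatable
      print(f"{node.metadata.name}: cpu={alloc['cpu']}, memory={alloc['memory']}")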


  2. Private Cloud - Manually setup Kubernetes Cluster:
    1. Create a VPC or Virtual Private Network with multiple availability zones.
    2. Provision the Linux VMs with any container-optimised OS (CoreOS, RHEL, Ubuntu, Debian, etc.) within the VPC subnet.
    3. Provision 1 Bastion host that acts as a proxy server to the Kubernetes cluster nodes.
    4. 2 Master nodes with 4 GB RAM and 2 vCore CPU each.
    5. 6 Worker nodes with 16GB RAM and 4 vCore CPU each.
    6. PostgreSQL DB (in case of AWS, Azure, or GCP, use the RDS; otherwise a dedicated Linux VM).
    7. Provision the disk volumes for Kafka, ES-Cluster, and ZooKeeper as per the below baselines and gather the volume ID details (see the sketch after this list).
    8. Create a LoadBalancer or Ingress to talk to the Kube API server and route the external traffic to the services deployed on the cluster.
    9. Setup AuthN & AuthZ.
    10. Install kubectl on the DevOps local machine to interact with the cluster, and set up kubeconfig with the allowed user credentials.
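The volume IDs gathered above are later wired in as Kubernetes PersistentVolumes. A minimal sketch of that wiring (illustrative only - the volume ID, size, and the awsElasticBlockStore plugin are placeholders; swap in the plugin that matches your storage backend):

  # Illustrative only: render a PersistentVolume manifest for one of the
  # provisioned Kafka/ES/ZooKeeper disks, to be applied with
  # `kubectl apply -f kafka-pv.yaml`. Requires PyYAML (pip install pyyaml).
  import yaml

  volume_id = "vol-0abc123def456"   # placeholder - use a real volume ID gathered above

  pv = {
      "apiVersion": "v1",
      "kind": "PersistentVolume",
      "metadata": {"name": "kafka-broker-0"},
      "spec": {
          "capacity": {"storage": "100Gi"},           # placeholder size
          "accessModes": ["ReadWriteOnce"],
          "persistentVolumeReclaimPolicy": "Retain",
          "awsElasticBlockStore": {                   # swap for your volume plugin
              "volumeID": volume_id,
              "fsType": "ext4",
          },
      },
  }

  with open("kafka-pv.yaml", "w") as f:
      yaml.safe_dump(pv, f, default_flow_style=False)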






Deployment Architecture:

  • Every code commit is reviewed and merged to the master branch through Pull Requests.

  • Each merge triggers a new CI pipeline that runs code-quality checks and CI tests before building the artefacts.

  • Artefacts are version-controlled and pushed to an artifact repository like Nexus.

  • After a successful CI run, Jenkins bakes the Docker images with the latest artefacts and pushes the newly baked images to the Docker repo (a rough sketch of this step follows below).

  • The deployment pipeline pulls the image and deploys it to the corresponding environment.
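A rough sketch of the bake-and-push step (not the project's actual Jenkins job; the registry, image name, and tagging scheme are assumptions for illustration):

  # Rough sketch of the "bake and push" step, driven through the Docker CLI.
  # Registry, image name, and tag scheme below are placeholders.
  import subprocess

  def sh(*cmd):
      subprocess.run(cmd, check=True)

  # Version the image with the short git commit of the artefact being baked.
  commit = subprocess.run(
      ["git", "rev-parse", "--short", "HEAD"],
      check=True, capture_output=True, text=True,
  ).stdout.strip()

  image = f"registry.example.local/egov/pgr-services:{commit}"   # placeholder

  sh("docker", "build", "-t", image, ".")   # bake the image with the latest artefact
  sh("docker", "push", image)               # publish it for the deployment pipeline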

Deployment Scripts:

  • A Python-based deployment script reads the values from the Jinja 2 template and deploys them into the cluster (a minimal sketch follows below).

  • Each env has one Jinja template that defines the services to be deployed and their dependencies like Config, Env, Secrets, DB Credentials, Persistent Volumes, Manifests, Routing Rules, etc.
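A minimal sketch of that render-and-apply flow (illustrative only, not the actual deployment script; the template path, values, and service name are placeholders):

  # Illustrative render-and-apply flow: fill a Jinja 2 manifest template with
  # per-environment values, then hand the result to `kubectl apply`.
  # Requires Jinja2 (pip install Jinja2) and a configured kubectl context.
  import subprocess
  from jinja2 import Environment, FileSystemLoader

  # Placeholder per-environment values; the real script would read these from
  # the env-specific values and secrets files.
  env_values = {
      "namespace": "egov-dev",
      "image_tag": "pgr-services:abc1234",
      "replicas": 2,
  }

  jinja_env = Environment(loader=FileSystemLoader("templates"))
  manifest = jinja_env.get_template("pgr-services.yaml.j2").render(**env_values)

  # Pipe the rendered manifest straight into kubectl.
  subprocess.run(["kubectl", "apply", "-f", "-"],
                 input=manifest, text=True, check=True)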

 
