EKS Upgrade

Preparation

  1. Scale Down Nginx-Ingress Controller

    • Scale down the nginx-ingress controller to zero replicas.

      kubectl scale deployment nginx-ingress-controller --replicas=0 -n <namespace>
  2. Monitor Kafka Lags

    • Monitor Kafka consumer lags until they reach zero to ensure no pending messages.

      kafka-consumer-groups --bootstrap-server <kafka-broker> --describe --group <consumer-group>

      If the monitoring setup is available, use Kafka-UI to monitor the consumer lag.

      kubectl port-forward svc/kafka-ui 8080:8080 -n <namespace> # visit http://localhost:8080/kafka-ui to access dashboard
  3. Backup EBS Volumes

    • Take snapshots of the EBS volumes attached to the Persistent Volumes (PVs) of the following components; a sample snapshot command follows the list.

      • Kafka

      • Kafka - Infra (if available)

      • Zookeeper (if available)

      • Elasticsearch (data & master)

      • Elasticsearch - Infra (data & master) (if available)
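      A sample snapshot flow, as a sketch; the volume ID is a placeholder looked up from each PV, and the jsonpath shown assumes the EBS CSI driver:

      # EBS CSI volumes expose the volume ID under .spec.csi.volumeHandle;
      # in-tree EBS PVs use .spec.awsElasticBlockStore.volumeID instead
      kubectl get pv <pv-name> -o jsonpath='{.spec.csi.volumeHandle}'
      aws ec2 create-snapshot --volume-id <volume-id> \
          --description "pre-eks-upgrade backup of <component>"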

  4. Scale Down Cluster Worker Nodes

    • Scale down the worker nodes via the AWS Auto Scaling Groups to prevent any further activity during the upgrade, as shown below.
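      A sketch, assuming the worker nodes belong to plain Auto Scaling Groups (the group name is a placeholder):

      # Note the current min/max/desired sizes first so they can be restored later
      aws autoscaling describe-auto-scaling-groups --auto-scaling-group-names <asg-name>
      aws autoscaling update-auto-scaling-group \
          --auto-scaling-group-name <asg-name> \
          --min-size 0 --max-size 0 --desired-capacity 0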

Upgrading

  1. Clone the DIGIT-DevOps repository.

  2. Navigate to the cloned repository and check out the release-1.28-Kubernetes branch, as shown below.
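    For example; the repository URL assumes the upstream eGov organisation, so substitute your fork if you use one:

      git clone https://github.com/egovernments/DIGIT-DevOps.git
      cd DIGIT-DevOps
      git checkout release-1.28-Kubernetes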

  3. Check whether the correct AWS credentials are configured using aws configure list.
    If not, run aws configure to set up the AWS CLI, as shown below.
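    For reference, both commands are part of the standard AWS CLI:

      aws configure list   # shows the active credentials and their source
      aws configure        # prompts for access key, secret key, region, and output format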

  4. Open the input.yaml file and fill in the inputs as per the regex patterns mentioned in the comments.

  5. Go to infra-as-code/terraform/sample-aws and run the init.go script to enrich the various configuration files based on input.yaml.
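    A sketch, assuming the script is invoked directly with the Go toolchain; adjust if your checkout wraps it in a helper script:

      cd infra-as-code/terraform/sample-aws
      go run init.go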

  6. Update the EKS version under the variable "kubernetes_version" in variables.tf, and update the ami_id under the "worker_groups" block of module "eks" in main.tf.
    Note: the ami_id for the target Kubernetes version can be fetched using the command below.
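    A sketch using the public SSM parameter for the EKS-optimized Amazon Linux 2 AMI; the parameter path assumes Kubernetes 1.28 and AL2 worker nodes, so adjust both to match your target:

      aws ssm get-parameter \
          --name /aws/service/eks/optimized-ami/1.28/amazon-linux-2/recommended/image_id \
          --region <region> --query "Parameter.Value" --output text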

    Run the terraform commands below to upgrade the EKS cluster.
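    The usual sequence; review the plan output carefully before applying:

      terraform init
      terraform plan
      terraform apply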

Post Upgrade

  1. Verify Kubeconfigs

    • Confirm that both the admin and user kubeconfigs are working as expected. If issues are found, obtain the latest admin kubeconfig and update the necessary roles for the user kubeconfig to ensure streamlined access.
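      For example, regenerate the admin kubeconfig and sanity-check both configs (cluster name, region, and file paths are placeholders):

      aws eks update-kubeconfig --name <cluster-name> --region <region>
      kubectl --kubeconfig <admin-kubeconfig> get nodes
      kubectl --kubeconfig <user-kubeconfig> get nodes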

  2. Scale Up Worker Nodes

    • Scale up the worker nodes via the AWS Auto Scaling Groups and ensure they successfully join the EKS cluster, as shown below.
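      A sketch mirroring the scale-down step; restore the sizes recorded before the upgrade, then confirm the nodes join:

      aws autoscaling update-auto-scaling-group \
          --auto-scaling-group-name <asg-name> \
          --min-size <min> --max-size <max> --desired-capacity <desired>
      kubectl get nodes   # all nodes should reach the Ready state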

  3. Verify Pod Status

    • Check that all pods are up and running.

      In case of an ImagePullBackOff error caused by an exceeded image pull rate limit, wait an additional 6-10 hours; the issue resolves on its own once the limit resets. Refer to the official documentation for more information.
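      For example, list any pods that are not yet healthy:

      kubectl get pods -A | grep -vE 'Running|Completed'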

  4. Check Kafka Consumers

    • Verify Kafka consumers for any irregularities such as negative lag.

      If the monitoring setup is available, use Kafka-UI to monitor the consumer lag.

    • If any consumer shows negative lag, identify the affected topic within the consumer group and reset its offset to LATEST, as shown below.
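      A sketch of the offset reset; the consumer group must be inactive for the reset to apply, so scale down the affected consumer first (broker, group, and topic are placeholders):

      kafka-consumer-groups --bootstrap-server <kafka-broker> \
          --group <consumer-group> --topic <topic> \
          --reset-offsets --to-latest --execute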

  5. Scale Up Nginx-Ingress Controller

    • Scale up the nginx-ingress controller to the required number of replicas.
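      Mirroring the scale-down command from the preparation steps:

      kubectl scale deployment nginx-ingress-controller --replicas=<N> -n <namespace>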

  6. Verify System Health

    • Monitor the overall system health to ensure everything is functioning as expected.
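      A few quick checks, as a sketch (kubectl top requires metrics-server to be running):

      kubectl get nodes
      kubectl get events -A --sort-by=.lastTimestamp | tail -n 20
      kubectl top nodes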