How to install UPYOG on 1.27 EKS Cluster (AWS)

This document assumes that you have the prerequisites installed and your AWS credentials set.


To provision the infrastructure and set up UPYOG, follow the steps below:

  1. Clone the UPYOG-DevOps repository using the below command:

    git clone https://github.com/upyog/UPYOG-DevOps.git

     

  2. Once you have cloned the repository, cd into UPYOG-DevOps and then check out the release-1.27-kubernetes branch using the commands:

    cd UPYOG-DevOps
    git checkout release-1.27-kubernetes


    At this step, check that the correct credentials are configured using the command:

    aws configure list

    Please make sure that the above command shows the AWS credentials you have set, and proceed only after confirming it.
    (Refer to this AWS document in case of any doubts on how to set the credentials: https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html )

  3. Generate SSH key pairs (use either method (a) or method (b)).
    a. Using an online website (not recommended for production setups; use only for demo setups):
      https://8gwifi.org/sshfunctions.jsp

    b. Using openssl:
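      A minimal sketch (the file name upyog_rsa is just an example): generate an RSA private key with openssl, then derive the OpenSSH-format public key from it with ssh-keygen:

      openssl genrsa -out upyog_rsa 4096
      ssh-keygen -y -f upyog_rsa > upyog_rsa.pub

      The contents of upyog_rsa.pub are the public key used in step 4; the private key in upyog_rsa is used later in egov-demo-secret.yaml.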

     

  4. Add the public key to your GitHub account (reference: https://www.youtube.com/watch?v=9C7_jBn9XJ0&ab_channel=AOSNote )

  5. Open the input.yaml file in VS Code. You can use the command below to open it directly in VS Code:
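    For example, assuming the VS Code code CLI is on your PATH and input.yaml sits alongside init.go in infra-as-code/terraform/sample-aws (adjust the path to your checkout):

    code infra-as-code/terraform/sample-aws/input.yaml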

    If the command does not work, you can open the file in VS Code manually. Once the file is open, fill in the inputs. Please make sure that each input you add follows the regex mentioned in the comments for that input.

    (If you are not using VS Code, you can open the file in any editor of your choice.)

     

  6. Open egov-demo-secret.yaml and add the DB password (line number 5), the flywayPassword (line number 7), and the private key. You can use the following command to open it in VS Code:
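    For example, assuming the code CLI is installed (the path below follows the usual DIGIT-DevOps layout and may differ in your checkout):

    code deploy-as-code/helm/environments/egov-demo-secret.yaml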

    Please keep the DB password and flywayPassword the same. The private key has to be added inside the git-sync key, against the ssh key (line number 37).

     

  7. Next, go to infra-as-code/terraform/sample-aws and run the init.go script to enrich the different files based on input.yaml. You can run the script using the following command:
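    For example, assuming Go is installed (a sketch):

    cd infra-as-code/terraform/sample-aws
    go run init.go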

     

  8. Now go to the remote-state folder and run Terraform using the following commands. This will create an S3 bucket and a DynamoDB table to maintain the Terraform state.
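    A typical sequence (a sketch; the remote-state folder sits under sample-aws):

    cd remote-state
    terraform init
    terraform apply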

     

  9. Next, cd back to the sample-aws folder and run Terraform to provision the infra for UPYOG. Use the following commands:
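    For example (terraform plan is optional but useful for reviewing the changes first):

    cd ..
    terraform init
    terraform plan
    terraform apply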

    (When prompted after running terraform apply, add the same DB password that you added in egov-demo-secret.yaml.)

  10. Execute the following command to generate a kubeConfig file and update the volumeIds, DB URL, and other relevant details in the egov-demo.yaml file.
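    The exact command comes from the repository's scripts. If you only need to generate the kubeConfig file itself, the AWS CLI equivalent is (supply your own cluster name and region):

    aws eks update-kubeconfig --name <CLUSTER_NAME> --region <REGION>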

     

  11. Run the export KUBECONFIG command shown on the terminal. (Note: The exact command to run will be printed on the terminal. It will be something like this: export KUBECONFIG=<LOCAL_KUBECONFIGPATH> )

     

  12. The next step is the deployment of services. Run the digit-installer.go script to install UPYOG using the following command:
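    For example, assuming Go is installed and the script sits in the current directory (a sketch):

    go run digit-installer.go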

     

  13. Once this is done, you can get the CNAME of the nginx-ingress-controller using the following command:
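    A sketch with kubectl; the service name and the egov namespace are assumptions, so adjust them to your deployment:

    kubectl get svc nginx-ingress-controller -n egov -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'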

    The output will be something like this:
    ae210873da6ff4c03bde2ad22e18fe04-233d3411.ap-south-1.elb.amazonaws.com
    You need to add this value in your domain provider's DNS settings against your domain name.

Kubernetes 1.27 Upgrade-Specific Steps:

  1. Install Custom Resource Definitions (CRDs):

    • VolumeSnapshots: Allows users to request and manage snapshots of their persistent volumes.

    • VolumeSnapshotContents: Represents the actual snapshot data in the underlying storage system.

    • VolumeSnapshotClasses: Lets admins define different snapshot profiles or classes (similar to how StorageClasses work for volumes).

  2. Set Up Permissions:

    • The command for rbac-snapshot-controller.yaml sets up Role-Based Access Control (RBAC) permissions. This ensures the snapshot controller has the necessary permissions to operate within the cluster.

  3. Deploy the Snapshot Controller:

    • The snapshot controller is responsible for handling the lifecycle of volume snapshots, from creation to deletion. The command for setup-snapshot-controller.yaml deploys this controller to the cluster.

In essence, these commands prepare your Kubernetes cluster to support volume snapshots, allowing users to take point-in-time snapshots of their data volumes. The commands are sketched below.
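A sketch of the commands for the three steps above, applying manifests from the upstream kubernetes-csi/external-snapshotter repository (pin to a release tag instead of master for reproducibility):

  kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/client/config/crd/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
  kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/client/config/crd/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
  kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/client/config/crd/snapshot.storage.k8s.io_volumesnapshots.yaml
  kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/deploy/kubernetes/snapshot-controller/rbac-snapshot-controller.yaml
  kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/deploy/kubernetes/snapshot-controller/setup-snapshot-controller.yaml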

Annotate the service account ebs-csi-controller-sa with the node role ARN:
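A sketch with kubectl, assuming the EBS CSI driver runs in kube-system and using the standard IRSA annotation key (replace the account number and role name with yours):

  kubectl annotate serviceaccount ebs-csi-controller-sa -n kube-system eks.amazonaws.com/role-arn=arn:aws:iam::<ACCOUNT_NUMBER>:role/<NODE_ROLE_NAME>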


Ensure that the NodeRole has the EBSCSIDriverPolicy attached:
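For example, with the AWS CLI, assuming the AWS managed policy AmazonEBSCSIDriverPolicy is the one intended:

  aws iam attach-role-policy --role-name <NODE_ROLE_NAME> --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy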


Amazon EKS uses OIDC to allow Kubernetes to assign AWS IAM roles to specific pods. To enable OIDC, use the following steps:

  • Enable OIDC for your EKS Cluster:

    • Go to the EKS section of the AWS Management Console.

    • Select your cluster.

    • Check the cluster details, and you should see an "OpenID Connect URL." If you don't, it means OIDC isn't enabled for this cluster.

  • Associate OIDC Provider with your AWS Account:

    • Go to the IAM section of the AWS Management Console.

    • In the navigation pane, choose "Identity providers", and then choose "Create Provider".

    • For the provider type, choose "OpenID Connect".

    • For the provider URL, use the "OpenID Connect URL" from your EKS cluster details.

    • For the audience, type sts.amazonaws.com and add it.

    • Review the information and create the provider.
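    Alternatively, if you use eksctl, the provider can be created and associated with a single command (a sketch; supply your own cluster name and region):

    eksctl utils associate-iam-oidc-provider --cluster <CLUSTER_NAME> --region <REGION> --approve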

  • Adjust the IAM Role Trust Relationship:

    • In the IAM console, navigate to the node role in the Roles section.

    • Choose the "Trust relationships" tab and then "Edit trust relationship".

    • Ensure that the trust relationship JSON includes a statement that allows the OIDC identity provider. It would look something like this:
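      A sketch (the ap-south-1 region matches the example ARN below; adjust it to your cluster's region):

      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Principal": {
              "Federated": "arn:aws:iam::<ACCOUNT_NUMBER>:oidc-provider/oidc.eks.ap-south-1.amazonaws.com/id/YOUR_EKS_CLUSTER_ID"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
              "StringEquals": {
                "oidc.eks.ap-south-1.amazonaws.com/id/YOUR_EKS_CLUSTER_ID:aud": "sts.amazonaws.com"
              }
            }
          }
        ]
      }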

    • Replace YOUR_EKS_CLUSTER_ID with the appropriate value from your EKS cluster details. The value of YOUR_EKS_CLUSTER_ID can be obtained from the ARN of the identity provider:
      arn:aws:iam::ACCOUNT_NUMBER:oidc-provider/oidc.eks.ap-south-1.amazonaws.com/id/YOUR_EKS_CLUSTER_ID

After completing the above steps, you should be able to see your persistent volumes attached to the respective StatefulSets.

Seed Data Setup:

  1. Import the following postman collection - https://api.postman.com/collections/12892142-55ebe4d0-3869-4879-87e1-5ba3b60cc6b7?access_key=PMAT-01H27R18VPWXP2AE8812P0S12X

  2. Port-forward the user pod using the following command:
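    A sketch with kubectl; the pod name, the egov namespace, and port 8080 are assumptions, so adjust them to your deployment (find the actual pod with kubectl get pods -n egov):

    kubectl port-forward <USER_POD_NAME> 8080:8080 -n egov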

  3. Hit the super_user_creation cURL. This will create a super user with the username GRO and the password eGov@4321.

  4. Now, open the accessToken_generation cURL. The credentials have already been populated. Change the "{{YOUR_DOMAIN_NAME}}" placeholder to the domain name defined in the input.yaml file during provisioning and hit the cURL.

  5. In the response, you will get an "access_token" field. Highlight this value, right-click on it, and set it as the global "token" value.

  6. Once done, you can execute the rainmaker common, rainmaker locality, rainmaker PGR localization, and PGR workflow cURLs, changing the "{{YOUR_DOMAIN_NAME}}" placeholder to the domain name defined in input.yaml, to set up the localization and workflow seed data.

 

Destroying the Cluster:

If you want to destroy (delete) the cluster after the demo is done, you can use the following command:
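A sketch, assuming you are at the repository root:

  cd infra-as-code/terraform/sample-aws
  terraform destroy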

To destroy the remote-state bucket, set the lifecycle value to false in the main.tf file in the remote-state folder.

After that, go to the AWS console and empty the S3 bucket. You can then proceed to destroy the remote-state bucket using the terraform destroy command.