Post-Campaign Infrastructure Optimization
Scaling Down Deployments
scale_down.sh Script
The scale_down.sh script scales the specified core services in the egov namespace (plus Kibana in the es-cluster-v8 namespace) down to a single replica and scales every other deployment in the egov namespace down to zero replicas. It also records each deployment's original replica count and namespace so they can be restored later.
#!/bin/bash
# File to store the original replica count for each deployment
replica_info_file="replica_counts.txt"
> "$replica_info_file" # Clear the file before starting

# Core services to keep running, scaled down to 1 replica
core_services=(
  "egov-accesscontrol"
  "egov-enc-service"
  "egov-localization"
  "egov-location"
  "egov-mdms-service"
  "egov-user"
  "zuul"
  "dss-service"
  "boundary-service"
)

# Record the current replica count, then scale the deployment to the target count
scale_down() {
  service=$1
  namespace=$2
  target_replicas=${3:-1}
  current_replicas=$(kubectl get deployment "$service" -n "$namespace" -o jsonpath='{.spec.replicas}')
  echo "$service $namespace $current_replicas" >> "$replica_info_file"
  kubectl scale deployment "$service" --replicas="$target_replicas" -n "$namespace"
  sleep 1
  echo "$service scaled down to $target_replicas replica(s)."
}

# Scale down core services in the egov namespace to 1 replica
for service in "${core_services[@]}"; do
  echo "Scaling down deployment $service in namespace egov to 1 replica..."
  scale_down "$service" "egov" 1
done

# Scale down Kibana in the es-cluster-v8 namespace to 1 replica
scale_down "release-name-kibana" "es-cluster-v8" 1

# Scale down all other deployments in the egov namespace to 0 replicas
all_services=$(kubectl get deployments -n egov -o jsonpath='{.items[*].metadata.name}')
for service in $all_services; do
  if [[ ! " ${core_services[*]} " =~ " $service " ]]; then
    echo "Scaling down deployment $service in namespace egov to 0 replicas..."
    scale_down "$service" "egov" 0
  fi
done
Execution Steps
Ensure Executable Permissions:
chmod +x scale_down.sh
Run the Script:
./scale_down.sh
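Optionally, once the script finishes, confirm the result by listing the deployments and their replica counts; this is a quick manual check, not part of the script:
kubectl get deployments -n egov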
Scaling Up Deployments
scaleup.sh Script
The scaleup.sh script restores each deployment to its original replica count, in its original namespace, based on the information saved in replica_counts.txt.
#!/bin/bash
# File with the saved replica counts
replica_info_file="replica_counts.txt"

# Check that the replica info file exists
if [[ ! -f "$replica_info_file" ]]; then
  echo "Replica info file not found! Exiting."
  exit 1
fi

# Read each "service namespace replicas" line and restore the original replica count
while read -r service namespace replicas; do
  [[ -z "$service" ]] && continue # Skip blank lines
  echo "Scaling up deployment $service in namespace $namespace to $replicas replicas..."
  kubectl scale deployment "$service" --replicas="$replicas" -n "$namespace"
  sleep 1
  echo "$service scaled up to $replicas replicas."
done < "$replica_info_file"
Execution Steps
Ensure Executable Permissions:
chmod +x scaleup.sh
Run the Script:
./scaleup.sh
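As an optional sanity check (not part of the script), confirm that each deployment reports its restored replica count:
kubectl get deployments -n egov
kubectl get deployments -n es-cluster-v8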
Postgres RDS Termination and Deployment on Kubernetes
Backup and Terminate Existing RDS
Take a Backup of the Existing Postgres RDS:
Use AWS RDS console or CLI to create a final snapshot.
Ensure the snapshot is successfully created before proceeding.
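If you prefer the AWS CLI over the console, the final snapshot can be created and its status polled roughly as follows; the instance and snapshot identifiers are placeholders for your environment:
aws rds create-db-snapshot --db-instance-identifier <rds-instance-id> --db-snapshot-identifier <final-snapshot-name>
aws rds describe-db-snapshots --db-snapshot-identifier <final-snapshot-name> --query "DBSnapshots[0].Status"
Proceed only once the reported status is "available".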
Terminate the Existing Postgres RDS:
Navigate to the RDS console.
Select the Postgres RDS instance.
Choose Delete and follow the prompts to terminate the instance.
Note: Ensure all necessary data is backed up and verified before termination.
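The deletion can also be performed from the AWS CLI if preferred. The sketch below assumes the final snapshot was already taken in the previous step (hence --skip-final-snapshot) and that deletion protection is disabled; the identifier is a placeholder:
aws rds delete-db-instance --db-instance-identifier <rds-instance-id> --skip-final-snapshot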
Deploy Postgres on Kubernetes Pod
Obtain Existing Database Credentials:
Get the Database Name:
kubectl get cm egov-config -n egov -o jsonpath="{.data.db-name}"
Example Output:
mozhealthuat
Get the Database Username:
kubectl get secret db -n egov -o jsonpath="{.data.username}" | base64 --decode
Example Output:
mozhealthuat
Get the Database Password:
kubectl get secret db -n egov -o jsonpath="{.data.password}" | base64 --decode
Example Output:
[Your_DB_Password]
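If convenient, the same three values can be captured into shell variables in one pass and reused when installing the chart below; the variable names are illustrative:
DB_NAME=$(kubectl get cm egov-config -n egov -o jsonpath="{.data.db-name}")
DB_USER=$(kubectl get secret db -n egov -o jsonpath="{.data.username}" | base64 --decode)
DB_PASS=$(kubectl get secret db -n egov -o jsonpath="{.data.password}" | base64 --decode)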
Update the PostgreSQL Helm Chart and Deploy:
Retrieve PostgreSQL RDS Version from AWS:
Navigate to the AWS RDS console.
Locate the Postgres RDS instance and note its version (e.g., 12).
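If the Bitnami chart repository is not already configured on the machine running Helm, add it before installing (standard Bitnami repository URL):
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update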
Install PostgreSQL Using Helm:
helm install postgresql bitnami/postgresql \
  --set image.tag=12 \
  --set auth.username=mozhealthuat \
  --set auth.password=mozhealthuat \
  --set auth.database=mozhealthuat \
  --namespace egov
Note: Adjust the image.tag, auth.username, auth.password, and auth.database values as per your requirements.
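Before pointing the platform at the new database, it is worth confirming that the PostgreSQL pod and service are up. The label selector and service name below assume the chart's default naming for a release called postgresql:
kubectl get pods -n egov -l app.kubernetes.io/name=postgresql
kubectl get svc postgresql -n egov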
Post Deployment Configuration
Update ConfigMap
After deploying PostgreSQL, update the egov-config ConfigMap with the new database host and URL.
kubectl patch configmap egov-config -n egov --type='merge' -p '{
"data": {
"db-host": "postgresql.egov.svc.cluster.local",
"db-url": "jdbc:postgresql://postgresql.egov.svc.cluster.local:5432/mozhealthuat"
}
}'
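To confirm the patch took effect, read the updated keys back (a quick sanity check):
kubectl get configmap egov-config -n egov -o jsonpath='{.data.db-host}{"\n"}{.data.db-url}{"\n"}'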
Restart Deployments
To apply the new configuration, restart all deployments in the egov namespace.
kubectl get deployments -n egov -o name | xargs -n1 kubectl rollout restart -n egov
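Optionally, wait for the restarts to complete before handing the environment back; this reuses the same deployment listing and is not part of the original steps:
kubectl get deployments -n egov -o name | xargs -n1 kubectl rollout status -n egov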