Background of the problem:

  1. During the March 2019 peak traffic, PMIDC raised multiple issues related to long service response times and a few errors (caused by a combination of the asynchronous nature of our APIs and the UI not handling these responses well).
  2. DB CPU utilisation was being maxed out by long-running queries; we received alerts for this via the AWS RDS SNS notifications.

How did we react:

  1. Analysed the AWS RDS utilisation graphs to identify patterns of high utilisation, which helped us narrow down the hour in which utilisation was highest.
  2. Based on the metrics above, we dug deeper to analyse API response times and service traces using our distributed tracing setup, Jaeger (see the sketch after this list).
  3. Several APIs were taking ~20-40s, with the majority of that time spent querying the database; on analysing the queries it was clear that they were not running optimally.
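
For reference, a minimal sketch of the kind of instrumentation that makes database time visible in Jaeger, assuming a Python service using the jaeger_client library (not necessarily our exact setup); the service, span and table names are illustrative placeholders:

    from jaeger_client import Config

    # Tracer configuration; the service name is a placeholder.
    config = Config(
        config={"sampler": {"type": "const", "param": 1}, "logging": True},
        service_name="billing-service",
        validate=True,
    )
    tracer = config.initialize_tracer()

    # Wrapping the database call in a child span makes it obvious in the
    # Jaeger UI how much of a slow request is spent inside the database.
    with tracer.start_span("demand-search") as parent:
        with tracer.start_span("db-query", child_of=parent) as span:
            span.set_tag("db.type", "postgresql")
            span.set_tag("db.statement", "SELECT ... FROM demand WHERE ...")
            # execute_query(...)  # placeholder for the actual database call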

What was done as part of the exercise:

  1. Long-running queries were analysed using Postgres tooling and fixed by adding the necessary indices, first on UAT and then on PROD (see the sketch after this list).
  2. Modules were analysed and indices were added on all commonly searched columns.
  3. Increased monitoring so that we stay on top of such situations in future.
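
As an illustration of item 1 (a sketch, not the exact commands we ran), slow statements can be surfaced from pg_stat_statements and a missing index added without blocking writes; the connection string, table, column and index names below are hypothetical:

    import psycopg2

    # Connection details are placeholders; requires the pg_stat_statements
    # extension to be enabled on the instance.
    conn = psycopg2.connect("dbname=egov host=uat-db.internal user=analyst")

    # Surface the slowest statements. Column names assume Postgres <= 12
    # (total_time / mean_time); on 13+ they are total_exec_time / mean_exec_time.
    with conn.cursor() as cur:
        cur.execute("""
            SELECT query, calls, mean_time, total_time
            FROM pg_stat_statements
            ORDER BY total_time DESC
            LIMIT 10
        """)
        for query, calls, mean_time, total_time in cur.fetchall():
            print(f"{mean_time:9.1f} ms avg  {calls:6d} calls  {query[:80]}")

    # Add the missing index without locking out writes. CREATE INDEX
    # CONCURRENTLY cannot run inside a transaction block, so enable autocommit.
    conn.autocommit = True
    with conn.cursor() as cur:
        cur.execute(
            "CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_demand_consumercode "
            "ON demand (consumercode)"
        )
    conn.close()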

Impact of the exercise:

      What went well:

  • DB query execution time improved from ~20-40s to ~2s, which resulted in a significant performance improvement in every service and in the system as a whole.
  • The random errors that occurred due to slow asynchronous persistence of data were also resolved.
  • AWS RDS utilisation was no longer hitting 100% and remained well within limits.

      What went wrong:

  • None 

Retrospective

  • Review queries thoroughly during code review before moving them to production.
  • Add indices to all commonly searched columns; if possible, automate this.
  • Invest in alerting so that we become aware of such situations and can act accordingly, rather than depending on our users to bring them to our notice (see the alarm sketch after this list).
  • Analyse the pros and cons of making critical APIs such as billing and collection synchronous.
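
A minimal sketch of the alerting this refers to, assuming boto3 and a CloudWatch alarm on RDS CPU that notifies an SNS topic; the region, DB instance identifier, threshold and topic ARN are placeholders:

    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

    # Alarm when average RDS CPU stays above 80% for 10 minutes and notify
    # the on-call SNS topic. Identifiers below are placeholders.
    cloudwatch.put_metric_alarm(
        AlarmName="rds-prod-cpu-high",
        Namespace="AWS/RDS",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "prod-egov-db"}],
        Statistic="Average",
        Period=300,               # 5-minute datapoints
        EvaluationPeriods=2,      # two consecutive breaches
        Threshold=80.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:ap-south-1:123456789012:db-alerts"],
        AlarmDescription="RDS CPU above 80% for 10 minutes",
    )
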
      Start doing:

  • Review queries thoroughly during code review before moving them to production.
  • Add indices to all commonly searched columns; if possible, automate this.
  • Invest in alerting so that we're aware of such situations and can act swiftly, rather than depending on our users to bring them to our notice.
  • Invest in database query monitoring at the RDS or DB level.
  • Do basic performance testing to reduce the chance of such scenarios occurring on production (see the load-test sketch below).

      Stop doing:

  • Checking in DB queries without review.
  • Making hurried changes to production.

      Keep doing:

  • Distributed tracing, which has helped us resolve issues quickly.
  • Keeping a check on DB utilisation.
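
A minimal sketch of the kind of basic performance test meant above, using Locust against a UAT endpoint; the host, path and request parameters are placeholder assumptions:

    from locust import HttpUser, task, between

    # A hypothetical smoke-level load test for a search API.
    class BillingUser(HttpUser):
        wait_time = between(1, 3)

        @task
        def search_demands(self):
            self.client.get(
                "/billing-service/demand/_search",
                params={"tenantId": "pb.testcity", "consumerCode": "TEST-001"},
                name="/billing-service/demand/_search",
            )

Run this against UAT (for example, locust -f loadtest.py --host <uat-host> --users 50) and watch latency percentiles before promoting query or index changes to production.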

Action items

