...
```json
"demandDetails": [
  {
    "id": "77ba1e93-a535-409c-b9d1-a312c409bd45",
    "demandId": "687c3176-305b-461d-9cec-2fa26a30c88f",
    "taxHeadMasterCode": "WATERSEWERAGE_CHARGE",
    "taxAmount": 120,
    "collectionAmount": 120,
    "additionalDetails": null,
    "auditDetails": {
      "createdBy": "04956309-87cd-4526-b4e6-48123abd4f3d",
      "lastModifiedBy": "04956309-87cd-4526-b4e6-48123abd4f3d",
      "createdTime": 1583675275873,
      "lastModifiedTime": 1583675298705
    },
    "tenantId": "pb.amritsar"
  }
],
```
...
RoundOff is bill based, i.e. every time a bill is generated the round-off is adjusted so that the payable amount is a whole number. An individual SW_ROUNDOFF in a demand detail can be greater than 0.5, but the sum of all SW_ROUNDOFF values will always be less than 0.5.
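For illustration only, here is a minimal sketch of the round-off computation (assuming rounding to the nearest whole number with HALF_UP; the service's actual logic may differ):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class RoundOffExample {

    /**
     * Returns the SW_ROUNDOFF amount to add to the payable amount so the
     * final bill is a whole number. The result may be negative (rounded
     * down) or positive (rounded up), and is at most 0.5 in magnitude.
     */
    static BigDecimal roundOff(BigDecimal payableAmount) {
        BigDecimal rounded = payableAmount.setScale(0, RoundingMode.HALF_UP);
        return rounded.subtract(payableAmount);
    }

    public static void main(String[] args) {
        System.out.println(roundOff(new BigDecimal("120.30"))); // prints -0.30
        System.out.println(roundOff(new BigDecimal("120.70"))); // prints  0.30
    }
}
```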
Scheduler for generating the demand:
Description:
For non-metered connections, we have a feature to generate demands in batches. The scheduler is responsible for generating the demand based on the tenant.
The scheduler can be triggered through the scheduler API, by scheduling a cron job, or by applying a Kubernetes configuration with kubectl that hits the scheduler on the configured schedule.
Once the scheduler is hit, we search for the list of tenants (cities) present in the database.
After getting the tenants, we pick them up one by one and generate the demand for each tenant.
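A schematic of the scheduler loop, as a sketch only: `TenantRepository` and `DemandGenerationService` are hypothetical names, and the cron expression is an example. The same loop runs whichever way the scheduler is triggered.

```java
import java.util.List;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

// Assumes @EnableScheduling is present on a configuration class.
@Component
public class DemandGenerationScheduler {

    interface TenantRepository { List<String> findAllTenantIds(); }
    interface DemandGenerationService { void generateDemandForTenant(String tenantId); }

    private final TenantRepository tenantRepository;
    private final DemandGenerationService demandService;

    public DemandGenerationScheduler(TenantRepository tenantRepository,
                                     DemandGenerationService demandService) {
        this.tenantRepository = tenantRepository;
        this.demandService = demandService;
    }

    // Example trigger: 00:30 on the 1st of every month.
    @Scheduled(cron = "0 30 0 1 * *")
    public void generateDemands() {
        // Search the list of tenants (cities) present in the database.
        List<String> tenantIds = tenantRepository.findAllTenantIds(); // e.g. "pb.amritsar"
        // Pick up the tenants one by one and generate demands for each.
        for (String tenantId : tenantIds) {
            demandService.generateDemandForTenant(tenantId);
        }
    }
}
```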
We load the consumer codes for the tenant and push the calculation criteria to Kafka. Each calculation criteria record contains minimal information (we do not push large data to Kafka): the consumer code and one boolean variable.
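The criteria record and the push to Kafka might look like the following sketch; the topic name, field names, and `KafkaTemplate` wiring are assumptions for illustration.

```java
import java.util.List;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.springframework.kafka.core.KafkaTemplate;

public class CriteriaPublisher {

    /** Minimal shape of a calculation criteria record (field names assumed). */
    public static class CalculationCriteria {
        public String tenantId;
        public String connectionNo;  // the consumer code
        public boolean isBulk;       // the single boolean flag mentioned above
        public CalculationCriteria(String tenantId, String connectionNo, boolean isBulk) {
            this.tenantId = tenantId;
            this.connectionNo = connectionNo;
            this.isBulk = isBulk;
        }
    }

    private static final String TOPIC = "sw-demand-criteria"; // hypothetical topic name
    private final KafkaTemplate<String, String> kafkaTemplate;
    private final ObjectMapper mapper = new ObjectMapper();

    public CriteriaPublisher(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    /** Pushes one small criteria record per consumer code; no bulk data goes to Kafka. */
    public void publish(String tenantId, List<String> consumerCodes) throws Exception {
        for (String code : consumerCodes) {
            String payload = mapper.writeValueAsString(
                    new CalculationCriteria(tenantId, code, true));
            kafkaTemplate.send(TOPIC, code, payload); // keyed by consumer code
        }
    }
}
```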
After pushing the data to Kafka, we consume the records based on the batch configuration. For example, if the batch size is configured as 50, we consume 50 calculation criteria at a time.
After consuming the records (calculation criteria), we process the batch to generate the demands. If the batch is successful, we log the consumer codes that were processed.
If some records in the batch fail, we push the batch to a dead letter batch topic. From the dead letter batch topic, we process the records one by one.
If a record succeeds, we log its consumer code; if it fails, we push it to a dead letter single topic.
The dead letter single topic holds the information about the failed records in Kafka.
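Here is a sketch of the batch consumption and dead-letter fallback using the plain Kafka clients API; the topic names, group id, and the `generateDemands` call are assumptions, and the real service may use Spring Kafka instead.

```java
import java.time.Duration;
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

public class BatchDemandConsumer {

    // Topic names are assumptions for illustration only.
    static final String CRITERIA_TOPIC = "sw-demand-criteria";
    static final String DEAD_LETTER_BATCH_TOPIC = "sw-dead-letter-batch";

    public static void main(String[] args) {
        Properties consumerProps = new Properties();
        consumerProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, "sw-demand-generator");
        consumerProps.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "50"); // the batch size
        consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");

        Properties producerProps = new Properties();
        producerProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        producerProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        producerProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps);
             KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            consumer.subscribe(List.of(CRITERIA_TOPIC));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
                List<String> batch = new ArrayList<>();
                for (ConsumerRecord<String, String> record : records) {
                    batch.add(record.value());
                }
                if (batch.isEmpty()) continue;
                try {
                    generateDemands(batch); // process the whole batch at once
                    System.out.println("Processed consumer codes: " + batch);
                } catch (Exception batchFailure) {
                    // Some record in the batch failed: push the batch to the
                    // dead letter batch topic for one-by-one reprocessing.
                    for (String criteria : batch) {
                        producer.send(new ProducerRecord<>(DEAD_LETTER_BATCH_TOPIC, criteria));
                    }
                }
            }
        }
    }

    /** Hypothetical stand-in for the demand generation step. */
    static void generateDemands(List<String> criteriaBatch) { /* ... */ }
}
```

A second consumer subscribed to the dead letter batch topic would run the same loop with a batch size of one, pushing records that still fail to the dead letter single topic.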
Use cases:
What happens if the same job is triggered multiple times?
If the same job triggers multiple times, we process the records again as described above, but at the demand level we check for an existing demand by consumer code and billing period. If the demand already exists we update it; otherwise we create a new demand (see the sketch below).
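A sketch of that idempotency check; the repository interface is hypothetical, standing in for the billing service's demand search and create/update calls:

```java
import java.util.List;

public class DemandUpsert {

    /** Hypothetical demand type; real demands also carry tax heads, amounts, etc. */
    static class Demand {
        String tenantId; String consumerCode; long periodStart; long periodEnd;
    }

    /** Hypothetical persistence interface standing in for the billing service. */
    interface DemandRepository {
        List<Demand> search(String tenantId, String consumerCode,
                            long periodStart, long periodEnd);
        void update(Demand demand);
        void create(Demand demand);
    }

    /**
     * Running the job twice never duplicates a demand: search by consumer
     * code and billing period first, then update if found, else create.
     */
    static void upsert(DemandRepository repo, Demand fresh) {
        List<Demand> existing = repo.search(fresh.tenantId, fresh.consumerCode,
                fresh.periodStart, fresh.periodEnd);
        if (existing.isEmpty()) {
            repo.create(fresh);
        } else {
            repo.update(fresh);
        }
    }
}
```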
Are we maintaining success or failure status anywhere?
Currently, we are maintaining the status of failed records in Kafka.
Configuration:
We need to configure the batch size for the Kafka consumer. This setting controls how many records are processed at a time.
```properties
sw.demand.based.batch.size=10
```
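With this setting, each poll hands at most 10 calculation criteria to the demand generation step. A larger batch size increases throughput per poll but also enlarges the unit of failure, since a failed batch is reprocessed record by record through the dead letter topics.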