National Dashboard Ingest
Overview
The objectives of the national-dashboard-ingest service are listed below.
To provide a one-stop framework for ingesting aggregated data from any data source, driven purely by configuration.
To provision ingestion on a per-module basis for modules that directly or indirectly require aggregated data ingestion functionality.
Pre-requisites
Prior Knowledge of Java/J2EE.
Prior Knowledge of SpringBoot.
Prior Knowledge of PostgreSQL.
Prior Knowledge of REST APIs and related concepts like path parameters, headers, JSON etc.
Prior Knowledge of ElasticSearch.
Setup And Key Functionalities
Setup:
Step 1: Define the index name for the module as per your requirement in the module.index.mapping key present in the configuration here - https://github.com/egovernments/DIGIT-DevOps/blob/master/deploy-as-code/helm/environments/qa.yaml#L364
Step 2: Define the allowed metrics for the module as per your requirement in the module.fields.mapping key present in the configuration here - https://github.com/egovernments/DIGIT-DevOps/blob/master/deploy-as-code/helm/environments/qa.yaml#L366
Step 3: Define the allowed group-by fields for the module as per your requirement in the module.allowed.groupby.fields.mapping key present in the configuration here - https://github.com/egovernments/DIGIT-DevOps/blob/master/deploy-as-code/helm/environments/qa.yaml#L368
Step 4: Define the master data index name as per your requirement in the master.data.index key present in the configuration here - https://github.com/egovernments/DIGIT-DevOps/blob/master/deploy-as-code/helm/environments/qa.yaml#L365
Step 5: Define the allowed metrics for the master data index as per your requirement in the master.module.fields.mapping key present in the configuration here - https://github.com/egovernments/DIGIT-DevOps/blob/master/deploy-as-code/helm/environments/qa.yaml#L367
Step 6: Create Kafka connectors for all the modules that have been configured. A sample request for creating the trade license national dashboard Kafka connector is as follows -
curl --location --request POST 'http://kafka-connect.kafka-cluster:8083/connectors/' \
--header 'Content-Type: application/json' \
--data-raw '{
  "name": "cms-case-es-sink9128",
  "config": {
    "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
    "connection.url": "http://elasticsearch-data-v1.es-cluster:9200/",
    "type.name": "nss",
    "topics": "tl-national-dashboard",
    "key.ignore": true,
    "schemas.enable": false,
    "schema.ignore": true,
    "value.converter.schemas.enable": false,
    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "batch.size": 10,
    "max.buffered.records": 500,
    "flush.timeout.ms": 600000,
    "retry.backoff.ms": 5000,
    "read.timeout.ms": 10000,
    "linger.ms": 100,
    "max.in.flight.requests": 2,
    "errors.log.enable": true,
    "errors.deadletterqueue.topic.name": "nss-es-failed",
    "tasks.max": 1
  }
}'
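For reference, the configuration keys from Steps 1-5 might be laid out as follows in the environment YAML. This is an illustrative sketch only: the module codes, index names, and metric names below are invented examples, not the actual contents of qa.yaml.

```yaml
# Hypothetical sketch of the national-dashboard-ingest keys (Steps 1-5).
# All values are placeholders; consult the linked qa.yaml for real ones.
module.index.mapping: '{"TL": "tl-national-dashboard", "PT": "pt-national-dashboard"}'
master.data.index: 'master-data-national-dashboard'
module.fields.mapping: '{"TL": ["totalApplications", "totalLicensesIssued"]}'
master.module.fields.mapping: '{"COMMON": ["totalUlbs", "totalWards"]}'
module.allowed.groupby.fields.mapping: '{"TL": ["tenantId", "boundary", "usageCategory"]}'
```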
Step 7: Run the national-dashboard-ingest application along with national-dashboard-ingest-kafka-pipeline.
Definitions:
Config file - A YAML (xyz.yml) file which contains configuration for running national dashboard ingest.
API - A REST endpoint to post data based on the configuration.
Functionality:
When the national dashboard ingest metrics API is hit, all the lookup keys in the data payload are first checked against the database to determine whether they already exist. The database table currently used for storing lookup keys is nss-ingest-data.
If no record exists for the given date and area details, the payload is flattened and pushed to the nss-ingest-keydata topic.
The national dashboard ingest Kafka pipeline consumer listens on the nss-ingest-keydata topic and, according to the module to which the data belongs, pushes it to the respective topic defined in the module.index.mapping key.
Once the national dashboard ingest Kafka pipeline pushes data to the respective topic, a Kafka connector takes the flattened records from that topic and ingests them into Elasticsearch.
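The check-then-flatten-then-push flow described above can be sketched in a few lines. This is a minimal illustration, not the service's actual implementation: the record fields, the in-memory stand-ins for the nss-ingest-data table and the Kafka producer, and the flattening scheme are all assumptions.

```python
import json

def flatten(payload, parent_key="", sep="."):
    """Recursively flatten a nested dict into dotted keys
    (illustrating the flattening step before the Kafka push)."""
    items = {}
    for key, value in payload.items():
        new_key = f"{parent_key}{sep}{key}" if parent_key else key
        if isinstance(value, dict):
            items.update(flatten(value, new_key, sep=sep))
        else:
            items[new_key] = value
    return items

def ingest(record, seen_keys, producer):
    """Sketch of the metrics ingest flow: skip records whose lookup key
    (date + area details) already exists, else flatten and push."""
    lookup_key = (record["date"], record["areaCode"])
    if lookup_key in seen_keys:   # stands in for the nss-ingest-data DB lookup
        return False
    seen_keys.add(lookup_key)
    # stands in for producing to the nss-ingest-keydata topic
    producer.append(("nss-ingest-keydata", json.dumps(flatten(record))))
    return True

# Example usage
produced, seen = [], set()
rec = {"date": "01-01-2023", "areaCode": "pb.amritsar",
       "metrics": {"totalApplications": 10, "status": {"approved": 7}}}
ingest(rec, seen, produced)
flat = json.loads(produced[0][1])
print(flat["metrics.status.approved"])  # -> 7
```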
Deployment Details
Add configs for different modules required for National Dashboard Ingest Service and National Dashboard Kafka Pipeline service.
Deploy the latest version of National Dashboard Ingest and National dashboard kafka pipeline service.
Add Role-Action mapping for the APIs.
Integration
Integration Scope
The national dashboard service is used to push aggregated data present in client systems and persist it in Elasticsearch, on top of which dashboards can be built for visualising and analysing data.
Integration Benefits
Can perform service-specific business logic without impacting the other module.
In the future, if we want to expose the application to citizens, then it can be done easily.
Steps to Integration
To integrate, the host of the national-dashboard-ingest-service module should be overwritten in the helm chart.
national-dashboard/metric/_ingest should be added as the metric ingest endpoint for the config added.
national-dashboard/masterdata/_ingest should be added as the master data ingest endpoint for the config added.
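A minimal client sketch for calling the metric ingest endpoint might look like the following. The host, auth token, and Data row shape here are placeholders; the required body structure is the RequestInfo/Data format described under API Details.

```python
import json
from urllib import request

def build_ingest_request(auth_token, data_rows):
    """Assemble the ingest body: RequestInfo plus the aggregated Data rows.
    (Shape of the rows is an assumption for illustration.)"""
    return {
        "RequestInfo": {"apiId": "national-dashboard", "authToken": auth_token},
        "Data": data_rows,
    }

def post_ingest(host, body):
    # Placeholder host; in a DIGIT deployment this is the
    # national-dashboard-ingest-service host from the helm chart.
    req = request.Request(
        host + "national-dashboard/metric/_ingest",
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    return request.urlopen(req)  # network call; not executed in this sketch

body = build_ingest_request("placeholder-token",
                            [{"date": "01-01-2023", "module": "TL",
                              "metrics": {"totalApplications": 10}}])
print(sorted(body.keys()))  # -> ['Data', 'RequestInfo']
```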
API Details
URI: The format of the ingest API to be used to ingest data using national-dashboard-ingest is as follows: national-dashboard/metric/_ingest
Body: The body consists of two parts: RequestInfo and Data. Data is where the aggregated data to be ingested resides. The keys given under the metrics object are the metrics provided in the module.fields.mapping key present in the configuration here - https://github.com/egovernments/DIGIT-DevOps/blob/master/deploy-as-code/helm/environments/qa.yaml#L366
Example Ingest Request Body -
{
"RequestInfo": {
"apiId": "asset-services",
"ver": null,
"ts": null,
"action": null,
"did": null,
"key": null,
"msgId": "search with from and to values",
"authToken": "82c7da0d-da73-4c35-8ea7-5b231369b4cd",
"userInfo": {
"id": 41737,
"uuid": "9a81233f-e212-4035-a831-320b70e93b82",
"userName": "NDSS1",
"name": "National Dashboard Viewer",
"mobileNumber": "7777888813",
"emailId": null,
"locale": null,
"type": "EMPLOYEE",