Details
- Type: Story
- Status: Done
- Resolution: Done
- Fix Version/s: None
- Component/s: None
- Labels: None
- Story Points: 4.2
- Epic Link:
- Team: SQuaRE
- Urgent?: No
Description
Confluent Platform exposes Kafka metrics via Java Management Extensions (JMX), using the Prometheus JMX exporter.
In this ticket we demonstrate that the Telegraf Prometheus input plugin can collect these JMX metrics and ship them to InfluxDB. Essentially, we need to implement the following workflow:
JMX metrics > Prometheus JMX exporter > Telegraf Prometheus input plugin > InfluxDB
From our EFD deployment (Confluent Kafka), we first verified that we can scrape the Prometheus endpoint from the Kafka pods:
kubectl exec -n cp-helm-charts -c cp-schema-registry-server cp-helm-charts-cp-schema-registry-6bc49b7878-p9p5v -- curl http://localhost:5556/metrics
kubectl exec -n cp-helm-charts -c cp-kafka-connect-server cp-helm-charts-cp-kafka-connect-64899d5484-cfck5 -- curl http://localhost:5556/metrics
Then we added Telegraf to the EFD deployment:
https://github.com/lsst-sqre/argocd-efd/tree/master/apps/telegraf
We also need a Service so that Telegraf can reach the Prometheus endpoint from its pod, for example:
https://github.com/lsst-sqre/argocd-efd/blob/master/apps/telegraf/templates/service-cp-kafka-connect-jmx.yaml
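As a rough sketch of what such a Service looks like (the selector label and Service name here are assumptions based on cp-helm-charts conventions, not copied from the linked file; port 5556 is the JMX exporter port used in the scrape commands above):

```yaml
# Hypothetical Service exposing the Prometheus JMX exporter port of the
# cp-kafka-connect pods, so Telegraf can scrape it from inside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: cp-kafka-connect-jmx        # assumed name
  namespace: cp-helm-charts
spec:
  selector:
    app: cp-kafka-connect           # assumed pod label from the chart
  ports:
    - name: jmx-metrics
      port: 5556                    # JMX exporter port, as scraped above
      targetPort: 5556
```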
Here is the Telegraf configuration for the summit environment, for example:
https://github.com/lsst-sqre/argocd-efd/blob/master/apps/telegraf/values-summit.yaml
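For illustration, the core of that configuration (in plain telegraf.conf form rather than Helm values) would look roughly like the following; the Service DNS name, InfluxDB URL, and database name are assumptions, not values from the linked file:

```toml
# Scrape the Prometheus endpoint exposed by the JMX exporter,
# via the in-cluster Service created for it.
[[inputs.prometheus]]
  urls = ["http://cp-kafka-connect-jmx.cp-helm-charts:5556/metrics"]  # assumed Service DNS name

# Ship the collected metrics to InfluxDB.
[[outputs.influxdb]]
  urls = ["http://influxdb.influxdb:8086"]  # assumed InfluxDB service URL
  database = "telegraf-kafka"               # assumed database name
```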