Details
- Type: Story
- Status: To Do
- Resolution: Unresolved
- Fix Version/s: None
- Component/s: efd
- Labels: None
- Story Points: 7
- Epic Link:
- Team: SQuaRE
- Urgent?: No
Description
As discussed in DM-30357, we have to analyze the size vs. frequency of the EFD topics to decide which topics need an increased number of partitions to increase the data throughput and reduce latency for these topics.
A few examples of such topics from the recent MTM1M3 tests are:
- lsst.sal.MTM1M3.logevent_appliedStaticForces
- lsst.sal.MTM1M3.pidData
- lsst.sal.MTM1M3.logevent_appliedElevationForces
- lsst.sal.MTM1M3.logevent_appliedForces
- lsst.sal.MTM1M3.logevent_appliedThermalForces
- lsst.sal.MTM1M3.logevent_appliedBalanceForces
- lsst.sal.MTM1M3.logevent_appliedAzimuthForces
- lsst.sal.MTM1M3.logevent_forceActuatorWarning
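The size-vs-frequency analysis could be reduced to a simple sizing heuristic per topic. The sketch below is illustrative only: the 1 MB/s per-partition target, the cap, and the function name are assumptions, not the actual EFD analysis or its thresholds.

```python
import math

# Assumed target throughput per partition; the real value would come from
# measured broker/connector performance, not from this example.
TARGET_BYTES_PER_SEC_PER_PARTITION = 1_000_000  # 1 MB/s

def suggested_partitions(rate_hz: float, avg_msg_bytes: int,
                         max_partitions: int = 8) -> int:
    """Suggest a partition count from a topic's message rate and size."""
    throughput = rate_hz * avg_msg_bytes  # bytes/s produced to the topic
    needed = math.ceil(throughput / TARGET_BYTES_PER_SEC_PER_PARTITION)
    return max(1, min(needed, max_partitions))

# Example with made-up numbers: a 50 Hz topic with ~100 kB messages.
print(suggested_partitions(50, 100_000))
```

Low-rate topics would stay at a single partition, while the high-throughput MTM1M3 topics above would get more.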
SAL Kafka creates the topics in Kafka with a single partition, but kafka-connect-manager seems like the right place to add the topic partition configuration, perhaps as a separate job.
All connectors (replicator, influxdb, jdbc-sink and s3-sink) will benefit from this.
We decided to apply this configuration when the topic is created. kafka-connect-manager could still run a job to enforce it.
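The enforcement job could work by diffing the desired configuration against the current topic state. Kafka only allows increasing a topic's partition count, so only topics that fall short need action. A minimal sketch of that diff step, with example topic names and counts (applying the result would use an admin client call such as confluent-kafka's AdminClient.create_partitions):

```python
def partitions_to_grow(desired: dict[str, int],
                       current: dict[str, int]) -> dict[str, int]:
    """Return {topic: new_count} for topics whose partition count is too low."""
    return {
        topic: count
        for topic, count in desired.items()
        if count > current.get(topic, 1)  # SAL Kafka creates topics with 1
    }

desired = {"lsst.sal.MTM1M3.pidData": 4,
           "lsst.sal.MTM1M3.logevent_appliedForces": 4}
current = {"lsst.sal.MTM1M3.pidData": 1,
           "lsst.sal.MTM1M3.logevent_appliedForces": 4}
print(partitions_to_grow(desired, current))
```

Topics already at (or above) the desired count are left alone, so the job is idempotent and safe to run repeatedly.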
Tiago opened DM-31474 to add support for per-topic partition configuration in ts_salkafka. Once that is implemented, we can configure multiple InfluxDB connectors and increase the number of connector tasks to distribute the load.
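On the Connect side, the number of tasks is controlled by the standard tasks.max connector property; with multiple partitions per topic, Connect can assign partitions across tasks. A fragment showing only the relevant keys (the connector name and topic regex are illustrative):

```json
{
  "name": "influxdb-sink-mtm1m3",
  "topics.regex": "lsst.sal.MTM1M3.*",
  "tasks.max": "4"
}
```

A single-partition topic caps the effective parallelism at one task regardless of tasks.max, which is why the partition increase has to come first.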