In this epic, we investigated different approaches to deploying Kafka and exposing the cluster using both NodePort and load balancer services. We added features to kafka-connect-manager to select, for instance, which SAL timestamp to use as the InfluxDB timestamp, and reviewed the error policy configuration so that the connector can restart itself if there is a problem with a specific topic. We tested a new release (1.2.3) of the InfluxDB Sink Connector and found a regression: this version does not handle arrays properly. We wrote a technote on EFD operations (SQR-034) and successfully tested two new connectors: the Confluent JDBC connector, to write data to the Oracle database at LDF, and the Replicator connector, to replicate both topics and schemas from our source EFD to the aggregator EFD. That work is also documented in SQR-034.
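To make the timestamp selection and error policy concrete, here is a minimal sketch of an InfluxDB Sink Connector configuration. It assumes the Lenses stream-reactor connector class and its `connect.influx.*` property names; the topic, measurement, and timestamp field names are illustrative, not the actual EFD values.

```properties
name=influxdb-sink
connector.class=com.datamountaineer.streamreactor.connect.influx.InfluxSinkConnector
# KCQL: WITHTIMESTAMP selects which SAL timestamp field (here a
# hypothetical private_sndStamp) becomes the InfluxDB point timestamp
connect.influx.kcql=INSERT INTO example_measurement SELECT * FROM lsst.sal.example.topic WITHTIMESTAMP private_sndStamp
# RETRY error policy: retry on failure instead of killing the task,
# so the connector can recover from a problem with a specific topic
connect.influx.error.policy=RETRY
connect.influx.max.retries=10
connect.influx.retry.interval=60000
```

With the RETRY policy, a transient failure on one topic causes the task to back off and retry rather than fail permanently, which is what allows the connector to restart itself.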
I started playing with Argo CD as an alternative to Terraform for managing the EFD deployments. The GitOps approach of synchronizing the deployment state (configurations and secrets) from GitHub with the cluster state is great. I also showed how to integrate Vault with Argo CD to manage secrets. In a future epic we will finalize the Argo CD work.
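In the GitOps model, each deployment is declared as an Argo CD `Application` resource pointing at a Git repository, and Argo CD keeps the cluster in sync with it. A minimal sketch of such a manifest follows; the repository URL, path, and namespaces are illustrative placeholders, not the actual EFD configuration.

```yaml
# Hypothetical Argo CD Application for an EFD deployment.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: efd
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/efd-deploy.git  # placeholder repo
    targetRevision: HEAD
    path: efd
  destination:
    server: https://kubernetes.default.svc
    namespace: efd
  syncPolicy:
    automated:
      prune: true      # remove cluster resources deleted from Git
      selfHeal: true   # revert manual drift back to the Git state
```

With `automated` sync enabled, any change merged to the repository is applied to the cluster, and manual changes in the cluster are reverted, so Git remains the single source of truth.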