PipelineTask execution should automatically write configs to the data repository.
Some details about how to do this are still up in the air:
- Should these appear in the QuantumGraph, so they can be consumed as Inputs or InitInputs by downstream tasks? That would be one way to unblock part of DM-21885, but it is not the only way.
- Should these be per-Run? (Not if we want to support config-heterogeneous QuantumGraphs for processing data from multiple instruments together, as Nate Lust and I have recently discussed.)
- Should these be per-Quantum? (If so, that's a lot of duplication in the typical case.)
DM-21849 may provide a good way to make them neither per-Run nor per-Quantum: the execution system will know exactly how many distinct configs it needs for a run, and making them "nonsingular" datasets as described there would let us write them without duplication while associating the same config dataset with multiple quanta.
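The dedup-and-share idea above can be sketched without any Butler machinery: group quanta by config content so each distinct config is written once and referenced by every quantum that uses it. This is only an illustration of the bookkeeping, not the DM-21849 design; the `deduplicate_configs` helper and the plain-dict stand-in for a real Config are both hypothetical.

```python
import hashlib
import json


def deduplicate_configs(quantum_configs):
    """Group quanta by config content.

    ``quantum_configs`` maps quantum ID -> config (a plain dict here,
    standing in for a real task Config).  Returns one entry per
    *distinct* config, plus a mapping from that config to all quanta
    that share it.
    """
    configs_by_key = {}  # content hash -> config (written once)
    quanta_by_key = {}   # content hash -> list of quantum IDs
    for quantum_id, config in quantum_configs.items():
        # A stable serialization gives a content-based identity,
        # independent of which quantum the config came from.
        key = hashlib.sha256(
            json.dumps(config, sort_keys=True).encode()
        ).hexdigest()
        configs_by_key.setdefault(key, config)
        quanta_by_key.setdefault(key, []).append(quantum_id)
    return configs_by_key, quanta_by_key


# Three quanta, but only two distinct configs: q1 and q2 would share
# one stored config dataset, while q3 gets its own.
configs, quanta = deduplicate_configs({
    "q1": {"threshold": 5.0},
    "q2": {"threshold": 5.0},
    "q3": {"threshold": 7.5},
})
assert len(configs) == 2
assert sorted(len(ids) for ids in quanta.values()) == [1, 2]
```

In the config-heterogeneous multi-instrument case, this keeps the number of stored config datasets equal to the number of distinct configurations rather than the number of quanta or the number of runs.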