Paul Price, I have on the branches for this ticket some new butler datasets ("transmission_optics", "transmission_filter", and "transmission_sensor"), and a script to add the HSC version of these to a data repository (along with some code to combine them and attach them to Exposures in ISR, but that's not relevant right now).
The problem with all of this is that these are regular butler datasets, not calibration datasets, and they really ought to be calibration datasets - even if we only have one validity period for all of them right now, we'll need another one for at least the post-recoating mirror reflectivities soon. I know next to nothing about how the calibration-generation side of things works; could you take a quick look at my code and give me a quick walkthrough on how to convert it to create calibration products? If it's easier, you're also welcome to just do it yourself (on this branch, in fact) - I'm using tickets/DM-12366 for integrating all of this and I have a copy of all of these commits there that I can use to work on the coaddition part of this in the meantime.
Merlin Fisher-Levine and Robert Lupton may have thoughts on this as well, though I expect the scripts for creating transmission curve data products in LSST will look nothing like the approach taken here (which can be characterized as, "Robert writes a notebook to understand Kawanomoto-san's numbers, Eli copies dicts full of numbers from Robert's notebook into FGCM and figures out how to interpolate them, Jim copies Eli's Python module into obs_subaru, writes a script to call Butler.put, and asks Paul for help").