Fix Version/s: None
Some of the current implementation assumes only a single fringe frame. Several places in FringeTask and/or IsrTask may need to be updated to support fringe correction with multiple fringe frames.
The FringeTask code handles multiple fringe frames, but it has never been used in earnest so it may require some tweaking based on real-world experience.
What is missing is code to construct multiple fringe frames (presumably putting a PCA into constructCalibs.py in pipe_drivers), and I/O for multiple fringe frames (I'm not aware of any butler support for reading image cubes).
It's also worth noting that I'm not aware of any pipeline that uses multiple fringe frames in routine operations, nor of any problems with our current single-frame fringe subtraction that would indicate multiple fringe frames are necessary. We recently reduced of order 1000 y-band HSC exposures (though, to be fair, we didn't eyeball each of them, so there may be shortcomings we're unaware of; there is no evidence of that yet).
Paul Price, just out of curiosity, why was this added in the first place if HSC isn't using multiple fringe frames?
I used the PS1 Image Processing Pipeline algorithm for fitting the fringe amplitude, which naturally allows multiple fringe frames. But I've never used it with multiple fringe frames (except for simple tests), and to my knowledge PS1 doesn't use multiple fringe frames either. I heard the CFHT MegaCam team did some tests and found that they get better fringe subtraction using different fringe frames based on the time since dusk or until twilight, but I don't remember having seen the evidence, and it would seem their subtraction algorithm only supports a single fringe frame.
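For what it's worth, the core of the multi-frame fit is just linear least squares over the fringe frames. A minimal sketch (hypothetical function names, not the actual FringeTask code, which measures amplitudes at sampled positions with background handling rather than fitting every pixel directly):

```python
import numpy as np

def fit_fringe_amplitudes(science, fringes):
    """Fit one amplitude per fringe frame by linear least squares.

    Hypothetical sketch: stack each fringe frame as a column of a
    design matrix and solve for the coefficients that best reproduce
    the fringe pattern in the science image.
    """
    design = np.column_stack([f.ravel() for f in fringes])
    coeffs, *_ = np.linalg.lstsq(design, science.ravel(), rcond=None)
    return coeffs

def subtract_fringes(science, fringes):
    """Subtract the best-fit linear combination of fringe frames."""
    coeffs = fit_fringe_amplitudes(science, fringes)
    model = sum(c * f for c, f in zip(coeffs, fringes))
    return science - model
```

With a single fringe frame this reduces to the usual one-amplitude fit, which is why the algorithm "naturally" extends to multiple frames.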
O.K. So to make things more tractable in the near term, would you push back if I suggested making the fringe task explicitly single-frame?
I don't see how that would "make things more tractable". Why not add a front end API that is explicitly single frame, and retain the capability in the back-end?
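Something along these lines (a hypothetical sketch, not the real task API; the amplitude fit is assumed to happen elsewhere): the front end pins the interface to one frame, while the back end keeps the general list-of-frames form.

```python
import numpy as np

def subtract_fringe_set(science, fringes, scales):
    """Back end: subtract a scaled combination of N fringe frames.

    Hypothetical names; scales are assumed to have been fitted
    elsewhere. The general N-frame capability lives here.
    """
    model = sum(s * f for s, f in zip(scales, fringes))
    return science - model

def subtract_fringe(science, fringe, scale):
    """Front end: explicitly single-frame; delegates to the back end."""
    return subtract_fringe_set(science, [fringe], [scale])
```

Callers who only ever have one fringe frame use `subtract_fringe`, and nothing multi-frame bitrots out of the back end.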
I just mean that we have no data to test this on. I know there is a unit test, but it will not be exercised in CI, so it has the potential for bitrot. Unless there is a significant driver (i.e., it's algorithmically difficult), I would prefer not to spend time implementing a scenario we don't actually use.
I understand that desire, but I also worry about what's going to happen when we see first light and need additional capability quickly. The unit test (which is exercised every time the package is built) will help prevent bitrot. I think the multiple fringe frames are a natural part of the algorithm which will also help prevent bitrot.
Sophie Reed and I worked on this problem as pair programming, using the to-be-reviewed-and-merged ISR mock code for testing. Multiple fringe frames do appear to work with the fringe code as it currently exists; although this will need to be formalized after that ticket clears, no major algorithmic code needs to be added.
Done as pair-programming with Sophie Reed.
It seems like FringeTask already handles multiple fringe frames. Do you know of anyplace specifically I should look?