Data Management / DM-4155

LSST Wcs requirements document

    Details

    • Type: Story
    • Status: Done
    • Resolution: Done
    • Fix Version/s: None
    • Component/s: afw
    • Story Points: 10
    • Sprint: Alert Production X16 - 03, Alert Production X16 - 5
    • Team: Alert Production

      Description

      Based on the information compiled from DM-4153 and DM-4152, prepare a requirements document (LaTeX, with references) describing all of the known Wcs requirements for the various portions of the LSST stack. This document could live in afw or in its own repository, and could potentially become a published technical report/conference proceeding.

    Activity

            John Parejko added a comment -

            Reduced story points, as this work has been split into 5701-5703. Writing up the details will still be necessary.

            John Parejko added a comment -

            Starting to put text for this into DMTN-010.

            John Parejko added a comment -

            I've taken the various WCS/transform requirements and added them to the Requirements section of DMTN-010. Please take a look and tell me if my summary correctly captures the relevant points.

            https://github.com/lsst-dm/dmtn-010/blob/tickets/DM-4155/index.rst

            Russell Owen added a comment (edited) -

            This contains a lot of useful information but I think it could use a cleanup pass.

            "most critical requirements":

            I am not convinced that shared serialization with "GWCS" is practical, much less a "critical" requirement. It is certainly a goal. At a minimum we need reasonable compatibility of our FITS images with standard software and image viewers (e.g. iraf, ds9, ginga). Providing an exact WCS is probably impossible; if so, we can provide a reasonable approximation (as SDSS does).

            I suggest an editing pass on the requirements for clarity. Many of the entries seem unclear to me, like shorthand that means more to the person who wrote them than to another reader. Many of them seem to focus on details of unspoken requirements. Here are a few examples:

            • What does "Mappings between camera coordinates must be entirely interoperable with image->sky transformations" mean?
            • "Compositions should only simplify when it can be done exactly, or when explicitly requested with a bound on accuracy." implies the requirement or goal that compositions should be simplifiable.
            • "Transforms should know their endpoints..." I think the usual term is "domain", and that seems clearer for multi-dimensional transforms.
            • "Distinguish between spherical and Cartesian,..." spherical and Cartesian what? Coordinates?
            • "Likely do not need color/wavelength in WCS": color/wavelength what? I think you mean we probably do not want to correct for wavelength-dependent effects, such as DCR, with our WCS.
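
            To make concrete what "transforms should know their domain" and the exact-only simplification rule could mean in an interface, here is a minimal Python sketch. All names here are invented for illustration; none come from afw, AST, or GWCS.

```python
# Hypothetical sketch: transforms carry endpoint ("domain") labels, and
# composition refuses to join transforms whose endpoints do not match.
# Names and structure are illustrative only, not an existing LSST API.

class Transform:
    """A 2-D transform labeled with input and output endpoint names."""

    def __init__(self, from_endpoint, to_endpoint, func):
        self.from_endpoint = from_endpoint  # e.g. "pixels"
        self.to_endpoint = to_endpoint      # e.g. "focal_plane"
        self._func = func

    def __call__(self, xy):
        return self._func(xy)

    def then(self, other):
        """Compose self followed by other, checking endpoint compatibility."""
        if self.to_endpoint != other.from_endpoint:
            raise ValueError(
                f"cannot compose {self.to_endpoint!r} -> {other.from_endpoint!r}")
        return Transform(self.from_endpoint, other.to_endpoint,
                         lambda xy: other(self(xy)))

# Toy example: pixels -> focal plane -> sky. Real transforms would be
# projections, polynomial distortions, etc.
pix_to_fp = Transform("pixels", "focal_plane",
                      lambda xy: (xy[0] * 0.01, xy[1] * 0.01))
fp_to_sky = Transform("focal_plane", "sky",
                      lambda xy: (xy[0] + 30.0, xy[1] - 10.0))

pix_to_sky = pix_to_fp.then(fp_to_sky)
print(pix_to_sky((100.0, 200.0)))  # (31.0, -8.0)
```

            A simplification rule could then live in `then`: collapse the lambda chain into a single analytic transform only when an exact closed form exists, or when the caller passes an explicit accuracy bound.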

            Overall statement of options:

            • These are not fully independent and I'd be happier if that was pointed out somehow – either at the start of this section or while discussing options that overlap (probably the latter). I think saving that information until after the options section leaves it until too late – the reader is likely to be angry or tuned out by then. In particular:
            • option 3 and 4 are a continuum: we could wrap anything from none to all of AST, and we could do it all at once or over time.
            • option 6 can be adopted with options 3/4: we could start with AST (with some additional wrapping or not) and then switch if the rewrite to C++ is successful.

            Regarding option 4 (writing our own C++ abstraction layer on AST): I'm not convinced that "Python interface to AST already developed: pyast" is a useful advantage, nor "pyast documentation very sparse" a disadvantage. Yes if we wrap only a little and adapt PyAST's wrapper code accordingly; no if we wrap a lot, in which case we'll probably replace the Python interface.

            For the record: my personal recommendation is to use AST as it is now (possibly with a bit of wrapping, but not a lot) in parallel with developing a C++ version. Switch to the C++ version if and when it is a success.

            Minor nits and typos:
            "a interface" -> "an interface".
            Yet Another WCS "Standard."" (two double quotes at end)
            pyast -> PyAST
            afw.wcs -> afw.image.Wcs or afw Wcs
            "incure signifcant"
            "e.g. adding quad-double precision for time, better unit support, unclear API" (we don't want to add an unclear API)

            Jim Bosch added a comment -

            Parameterizable to compute or provide (at least) first derivatives, to simplify connection with XYTransform etc.

            We need to distinguish between derivatives of the transform with respect to the pixel coordinates (needed to compute local affine transforms), and derivatives with respect to parameters (may be useful in interoperability with jointcal, but I'm not at all convinced it should be part of the WCS interface as opposed to an interface on something we use to implement a few specific WCS transform objects).

            And I think none of these are really related to XYTransform; I actually expect XYTransform to be fully replaced by WCS in the future.
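
            The first kind of derivative described here (with respect to pixel coordinates) is what a local affine approximation of a WCS is built from. A minimal sketch using central finite differences follows; the transform and all names are toy illustrations, not an afw API.

```python
import math

# Local affine approximation of a 2-D transform via central differences.
# `transform` maps (x, y) -> (u, v); the Jacobian is evaluated at (x0, y0).

def local_jacobian(transform, x0, y0, eps=1e-6):
    """Return the 2x2 Jacobian [[du/dx, du/dy], [dv/dx, dv/dy]]."""
    u_xp, v_xp = transform(x0 + eps, y0)
    u_xm, v_xm = transform(x0 - eps, y0)
    u_yp, v_yp = transform(x0, y0 + eps)
    u_ym, v_ym = transform(x0, y0 - eps)
    return [
        [(u_xp - u_xm) / (2 * eps), (u_yp - u_ym) / (2 * eps)],
        [(v_xp - v_xm) / (2 * eps), (v_yp - v_ym) / (2 * eps)],
    ]

# Toy "WCS": a 30-degree rotation plus a small quadratic distortion in x.
def toy_wcs(x, y):
    c, s = math.cos(math.radians(30)), math.sin(math.radians(30))
    return c * x - s * y + 1e-6 * x * x, s * x + c * y

jac = local_jacobian(toy_wcs, 100.0, 100.0)
# At (100, 100) the distortion adds ~2e-4 to du/dx on top of cos(30) ~ 0.866.
```

            A production interface would presumably let a transform supply an analytic Jacobian when it has one and fall back to differencing otherwise; the second kind of derivative (with respect to model parameters) would live on the fitting side, as suggested above.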

            Combined post-ISR CCD, including initial guess from pointing, to feed into our astrometric solver.

            This is really a use-case, not a requirement (and the composability and camera geom interoperability requirements already present are sufficient to support this use case).

            I think we also have a requirement to be able to persist an approximate FITS standard WCS for any more complex composite transform.

            Distinguish between spherical and Cartesian, to ensure correct geometry.

            Maybe clarify that this is (I assume) an interface that distinguishes between spherical-spherical, spherical-Cartesian, Cartesian-spherical, and Cartesian-Cartesian transforms, allowing geometry libraries that may use different classes for spherical and Cartesian geometry to interoperate correctly. I think this is a hard problem, and I'm hoping AST does something sensible that we could learn from. I'm skeptical that any solution from GWCS on this subject will work well for us, both because I suspect they haven't really thought about it enough (because there are no Astropy region libraries yet to demonstrate the tricky cases) and because this could work very differently in dynamically- and statically-typed languages.
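
            One way to read "an interface that distinguishes endpoint kinds" is to use distinct point types, so each transform declares which kind it accepts and returns. A hypothetical Python sketch (every name here is invented; a statically-typed C++ version would encode the same distinction in template or overload signatures):

```python
# Illustrative only: distinct types for spherical and Cartesian points,
# and a transform whose class attributes declare its endpoint kinds.
from dataclasses import dataclass

@dataclass(frozen=True)
class Point2D:          # Cartesian: e.g. pixel or focal-plane coordinates
    x: float
    y: float

@dataclass(frozen=True)
class SpherePoint:      # Spherical: e.g. ICRS longitude/latitude in degrees
    lon: float
    lat: float

class CartesianToSphereTransform:
    """A transform whose declared types encode its endpoint kinds."""
    input_type, output_type = Point2D, SpherePoint

    def apply(self, point):
        if not isinstance(point, self.input_type):
            raise TypeError(f"expected {self.input_type.__name__}")
        # Toy placeholder mapping; a real one would be a sky projection.
        return SpherePoint(lon=30.0 + 0.01 * point.x,
                           lat=-10.0 + 0.01 * point.y)

t = CartesianToSphereTransform()
sky = t.apply(Point2D(100.0, 50.0))
print(sky)  # SpherePoint(lon=31.0, lat=-9.5)
```

            With the four endpoint-kind combinations expressed this way, a geometry library that uses separate spherical and Cartesian classes can check at composition time that it is being handed the right kind of point.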

            Develop our own ... This seems like an obviously bad choice, given the work that has already gone into AST and GWCS.

            I'm not nearly this pessimistic. If we share a serialization format with GWCS, we would not be producing yet another standard. And I'm not convinced (maybe I'm being naive) that it's that hard to write a C++ library that composes transforms and implements only the concrete transforms we care about (with room to grow in the future). And I think we could actually learn quite a bit from both AST and GWCS that could be applied to a new implementation of the same underlying concepts in C++.

            I think options 3 and 4 are really part of a spectrum. I don't think there's any way we'd just attach raw AST object pointers to e.g. Exposure. I think the real options are between a very thin C++ layer that just forwards everything to AST and a heavier layer that has some of its own composition smarts and could allow us to implement new transforms in C++ without using AST interfaces at all (and slowly phase out AST).

            On working with David Berry:

            Unclear how much LSST guidance would be required to make a long-term supportable, well documented API.

            I think it's actually fairly clear this would require significant effort from LSST as well. This is not a good project for a C++ beginner, and I think LSST would have to provide a significant amount of expertise to put together the base APIs and composition smarts; all I think we'd want to take from AST (in terms of code) would be the implementations of specific transforms. Of course, as with any other option in which we write our own composition system, we'd also want to take quite a bit of wisdom from AST.

            Simon Krughoff added a comment -

            You mention in the requirements that GWCS is looking at using STC2 as the serialization format. Frankly, that makes my head hurt. Can you say a bit here about how they plan to do that and whether LSST would support such a thing? I'm certainly saying this in the context of my experience with STC. Maybe STC2 is far better.

            Persisting groups of composites: You mention this may not be a requirement, but do we have any quantitative requirement on how big the persisted transform objects can be? It seems like they will be << the size of an image in general. If we need full-scale pixel grids to represent the pixel distortions, that is a bigger problem, but it's also a different problem (I think). It may be worth dropping the groups-of-transforms requirement and bringing up the pixel grid transforms as a situation where we may need to look at how that fits in our storage requirements.
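
            The size argument above can be made quantitative with a back-of-envelope calculation. The detector dimensions and polynomial order below are illustrative assumptions, not LSST requirements:

```python
# Back-of-envelope persisted-transform sizes, assuming 8-byte doubles.
# A 4k x 4k detector and a 5th-order 2-D polynomial are illustrative choices.

BYTES_PER_DOUBLE = 8
nx = ny = 4096                      # toy detector dimensions

# A 2-D polynomial of order n has (n+1)(n+2)/2 coefficients per output axis.
order = 5
n_coeffs = 2 * (order + 1) * (order + 2) // 2
poly_bytes = n_coeffs * BYTES_PER_DOUBLE

# A full pixel-grid transform stores a 2-vector offset at every pixel.
grid_bytes = nx * ny * 2 * BYTES_PER_DOUBLE

image_bytes = nx * ny * 4           # the image itself, as 32-bit floats

print(f"polynomial: {poly_bytes} B")                 # 336 B
print(f"pixel grid: {grid_bytes / 2**20:.0f} MiB")   # 256 MiB
print(f"image:      {image_bytes / 2**20:.0f} MiB")  # 64 MiB
```

            This supports the point above: analytic transforms are negligible next to the image they describe, while a full pixel-grid transform is several times larger than the image and genuinely is a different storage problem.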

            You and Russell did some tests to make complex models and then benchmark them. Is it worth having a section on that, or does that go in the recommendations section?

            Option 2: the advantage that of not --> the advantage of not

            John Parejko added a comment -

            Comments incorporated and merged. I think the performance notes will go into the recommendations section, but I'm not sure there's an obvious place for them here.

            Tim Jenness added a comment -

            Regarding STC2, you'll see some commentary in the SPIE paper (DM-5444). We need an interop text format and it seems like our opening gambit should be STC2 since it is attempting to represent the same approach to frames and mappings that we are intending to use.


              People

              • Assignee: John Parejko
              • Reporter: John Parejko
              • Reviewers: Dominique Boutigny, Jim Bosch, Simon Krughoff
              • Watchers: Dominique Boutigny, Jim Bosch, John Parejko, Pierre Astier, Russell Owen, Simon Krughoff, Tim Jenness
              • Votes: 0
