Data Management / DM-11377

Prepare for AP Verification Metrics session at LSST 2017 Meeting

    Details

    • Story Points:
      5
    • Sprint:
      Alert Production F17 - 8
    • Team:
      Alert Production

      Description

      There is a 90-minute session at LSST 2017 about AP Verification Metrics. This ticket is to plan and prepare for the session, which includes:

      • determining how to structure the session; meeting organizers sent suggestions
      • inviting relevant non-AP staff to attend (especially SQuaRE)
      • preparing relevant talks and/or demos
      • aggregating and summarizing feedback after the session


          Activity

          mrawls Meredith Rawls added a comment -

          How we plan to structure the session

          • 3:30-4:00: Meredith Rawls presents the current state of ap_verify, reviews some of the metrics we have already brainstormed, and shares some of the issues we're currently working through (this time period should include an initial Q&A)
          • 4:00-5:00: Open discussion moderated by Eric Bellm. Main themes: how we want the system to evolve going forward; refining our starter list of desired metrics; edge cases of lsst.verify we are running into; how our approach fits into future plans for DM stack development (e.g., SuperTask); and the varying needs of different end users (e.g., just ap_pipe vs. the full ap_verify).

          Still needed:

          • somebody to take notes
          • a plan for which laptop is used for what (slides, BlueJeans?, note-taking)
          • a list of people we want to invite/highly encourage to attend
          • input from other discussions during the week prior to the session

          mrawls Meredith Rawls added a comment -

          The session was late in the afternoon on the last day, so the energy level was low, but there were some constructive discussions. I believe Eric Bellm took some notes. To close this ticket, it would be good to talk through and/or write down the main takeaways from the session.

          ebellm Eric Bellm added a comment -

          An incomplete summary of the wide-ranging discussion:

          Meredith Rawls presented an overview of the work to date. Her slides are available here (must log in).

          Re-arranging my notes to be topical rather than chronological:

          Metrics:

          There was a range of suggestions for improving some of the example metrics the L1 group had brainstormed. Among them:

          • using rho statistics: a two-point correlation function computed on a holdout fraction of sources

          • Michael Wood-Vasey suggested doing forced photometry at a fixed blank sky location; Robert Lupton says this is implemented on direct imaging (SkyObjects).

          • measuring the chi-squared of the residuals at the positions of known stars in an appropriate magnitude range (propagating the correlation matrix to get sigma right)
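
          For concreteness, here is a minimal sketch of that chi-squared metric in plain numpy. It is a sketch only, not the ap_verify implementation: the function name, the cutout size, and the treatment of pixels as independent are all assumptions here (a faithful metric would propagate the pixel correlation matrix to get sigma right).

          {code:python}
import numpy as np

def residual_chi2(diff_image, variance, star_xy, cutout_radius=5):
    """Reduced chi-squared of difference-image residuals in small
    cutouts around known star positions.

    diff_image, variance: 2-D arrays from image differencing.
    star_xy: (N, 2) integer pixel coordinates of known stars in an
    appropriate magnitude range.

    Treats pixels as independent; the real metric would use the
    full pixel correlation matrix.
    """
    ny, nx = diff_image.shape
    chi2 = 0.0
    npix = 0
    for x, y in star_xy:
        x0, x1 = max(x - cutout_radius, 0), min(x + cutout_radius + 1, nx)
        y0, y1 = max(y - cutout_radius, 0), min(y + cutout_radius + 1, ny)
        cut = diff_image[y0:y1, x0:x1]
        chi2 += np.sum(cut**2 / variance[y0:y1, x0:x1])
        npix += cut.size
    return chi2 / npix

# Toy check: pure-noise residuals should give a value near 1.
rng = np.random.default_rng(42)
diff = rng.normal(0.0, 2.0, size=(100, 100))
var = np.full((100, 100), 4.0)
stars = rng.integers(10, 90, size=(20, 2))
print(residual_chi2(diff, var, stars))
          {code}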

          ap_verify:

          There was general agreement that ultimately ap_verify needs to use stack-built (rather than Community Pipeline) calibrations, for self-consistency.

          Robert Lupton asked whether ingestion should be part of ap_verify. There was also discussion of whether we should be testing if calibration products are appropriate (which is really the job of the calibration team) or are simply being applied correctly.

          The DRP team (Robert Lupton, Lauren MacArthur, and others) discussed the HSC QA workflows. The HSC pipeline stores relevant metadata in the production database for traceability. QA workflows are handled by annotating catalog products, which are then separately processed for QA.

          lsst.verify, SQUASH, etc.:

          Robert Lupton suggested that common metrics code could live in afw; Michael Wood-Vasey wondered if SRD-verifying KPM code should live somewhere distinct.

          We discussed whether metrics should be put in the Task metadata. Currently PropertySet doesn't support lsst.verify Measurement objects, but there was some discussion that it might be worth extending Task to enable this. It was unclear whether it made more sense to wait for SuperTask.
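
          As context for that discussion, here is a minimal sketch of how a measurement is recorded and persisted with lsst.verify today, outside of any Task metadata. It assumes the lsst.verify API of this period (Measurement, Job) and requires the LSST stack plus astropy; the metric name used here is hypothetical, not a registered metric.

          {code:python}
import astropy.units as u
from lsst.verify import Job, Measurement

# One measurement of a hypothetical metric; 'ap_verify.numDiaSources'
# is illustrative, not a metric registered in a metrics package.
meas = Measurement('ap_verify.numDiaSources', 42 * u.count)

# Bundle measurements into a Job and persist the verification JSON
# that downstream tools such as SQUASH can ingest.
job = Job(measurements=[meas])
job.write('ap_verify.verify.json')
          {code}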

          There was a suggestion that the SQUASH system should focus on continuous integration on small datasets to watch for regressions, while larger verification and/or QA/drill-down workflows should be handled differently.


  People

  • Assignee:
    mrawls Meredith Rawls
  • Reporter:
    ebellm Eric Bellm
  • Reviewers:
    Eric Bellm
  • Watchers:
    Eric Bellm, Meredith Rawls
  • Votes:
    0
