  Data Management / DM-5660

Add motivated model fits to validate_drp photometric and astrometric scatter/repeatability analysis and plots

    Details

    • Type: Story
    • Status: Done
    • Resolution: Done
    • Fix Version/s: None
    • Component/s: Validation
    • Labels: None

      Description

      Implement well-motivated theoretical fits to the astrometric and photometric performance measurements based on derivations from LSST Overview paper.
      http://arxiv.org/pdf/0805.2366v4.pdf

      Photometric errors are described by

      Eq. 5
      sigma_rand^2 = (0.04 - gamma) * x + gamma * x^2 [mag^2]
      where x = 10^(0.4*(m - m_5))

      Eq. 4
      sigma_1^2 = sigma_sys^2 + sigma_rand^2
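As a sketch (not part of validate_drp), the two equations can be combined into a single model function and fit with scipy; the function name, starting values, and the `mags`/`scatter` arrays in the commented fit call are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit


def photometric_error(m, sigma_sys, gamma, m5):
    """sigma_1(m) from Eqs. 4 and 5 of the LSST Overview paper.

    m         : apparent magnitude
    sigma_sys : systematic error floor [mag]
    gamma     : band-dependent parameter (~0.039)
    m5        : 5-sigma limiting magnitude
    """
    x = 10.0 ** (0.4 * (m - m5))
    sigma_rand_sq = (0.04 - gamma) * x + gamma * x ** 2  # Eq. 5 [mag^2]
    return np.sqrt(sigma_sys ** 2 + sigma_rand_sq)       # Eq. 4


# Illustrative fit, where mags/scatter would be the measured
# per-source magnitudes and repeatability scatter [mag]:
# popt, pcov = curve_fit(photometric_error, mags, scatter,
#                        p0=[0.005, 0.039, 24.35])
```

Note that at m = m5 the model gives sigma_rand = 0.2 mag (the 5-sigma detection limit), which is the anchor used in the comment quoted below to infer m5 = 24.35.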

      Astrometric Errors
      error = C * theta / SNR

      Based on helpful comments from Zeljko Ivezic

      I think eq. 5 from the overview paper (with gamma = 0.039 and m5 = 24.35; the former I assumed and the latter I got from the value of your analytic fit that gives err=0.2 mag) would be a much better fit than the adopted function for mag < 21 (and it is derived from first principles). Actually, if you fit for the systematic term (eq. 4) and gamma and m5, it would be a nice check whether there is any “weird” behavior in analyzed data (and you get the limiting depth, m5, even if you don’t go all the way to the faint end).

      Similarly, for the astrometric random errors, we’d expect

      error = C * theta / SNR,

      where theta is the seeing (or a fit parameter), SNR is the photometric SNR (i.e. 1/err in mag), and C ~ 1 (empirically, and 0.6 for the idealized maximum likelihood solution and gaussian seeing).
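A minimal sketch of fitting this astrometric relation, assuming scipy; the 0.7 arcsec seeing value and the `snr`/`astrom_scatter` names are hypothetical:

```python
from scipy.optimize import curve_fit


def astrometric_error(snr, C, theta):
    """Random astrometric error = C * theta / SNR.

    snr   : photometric signal-to-noise, ~ 1 / (magnitude error)
    C     : ~1 empirically; 0.6 for the idealized maximum-likelihood
            solution with Gaussian seeing
    theta : seeing (same units as the returned error), or a fit parameter
    """
    return C * theta / snr


# Illustrative fit with theta fixed at a hypothetical 0.7 arcsec seeing,
# solving for C alone (snr/astrom_scatter would be measured arrays):
# popt, _ = curve_fit(lambda s, C: astrometric_error(s, C, 0.7),
#                     snr, astrom_scatter, p0=[1.0])
```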


          Activity

          wmwood-vasey Michael Wood-Vasey added a comment -

          Thanks for the JSON schema suggestion. Done.

          wmwood-vasey Michael Wood-Vasey added a comment -

          Review comments incorporated. Thanks.

          Merged to master.

          jsick Jonathan Sick added a comment -

          Great. I should have asked: is the JSON schema fine for your REST ingest, Angelo Fausti? I think you were going to store the JSON as blobs anyway, but having more standardized fields in the JSON may make it easier to introspect, especially if the number of fit types grows.

          afausti Angelo Fausti added a comment -

          Hi Jonathan Sick, it's good to have a uniform JSON schema.

          Right now the dashboard stores just scalars, which are the results of the metric measurements (see here http://sqr-009.lsst.io/en/latest/#level-0-qa). At this point there is no need to ingest the JSON as a blob, but this can be done (at least part of it, e.g. the x, y values) if we want to reproduce the plots in http://dmtn-008.lsst.io/en/latest/ in the dashboard (next phase?).

          In addition to the metric and its measurement, the dashboard also needs information from the Jenkins job that executed the code (like job name, build number, runtime, and status). I think the Jenkins job should be responsible for grabbing that information from the JSON outputs and sending a POST request to the dashboard API. Jonathan Sick, Joshua Hoblitt, do you agree?

          The link above has a diagram of this architecture; we can discuss it more this afternoon.

          jhoblitt Joshua Hoblitt added a comment -

          Angelo Fausti, I concur that Jenkins should push any needed metrics to the dashboard.


            People

            • Assignee: wmwood-vasey Michael Wood-Vasey
            • Reporter: wmwood-vasey Michael Wood-Vasey
            • Reviewers: Jonathan Sick
            • Watchers: Angelo Fausti, Jonathan Sick, Joshua Hoblitt, Michael Wood-Vasey
            • Votes: 0

              Dates

              • Created:
              • Updated:
              • Resolved: