Data Management / DM-6163

Some validate_drp measurements are not reproducible

    Details

    • Type: Bug
    • Status: Won't Fix
    • Resolution: Done
    • Fix Version/s: None
    • Component/s: Validation
    • Labels:
      None
    • Team:
      SQuaRE

      Description

      I ran validate_drp's examples/runCfhtQuickTest.sh several times to compare a branch to master, where I expected no change. I was surprised to find a change in PA1, PA2, and PF1 (though everything else was identical), so I ran it twice more, once on the branch and once on master. Each run gave different results. For example:

      PA1 : 10.10 mmag <  5.00 mmag == False
      PF1 : 12.00 %    < 10.00 %    == False
      vs.
      PA1 : 10.35 mmag <  5.00 mmag == False
      PF1 :  4.00 %    < 10.00 %    == True
      

      for the two runs using master.

        Attachments

          Issue Links

            Activity

            Russell Owen added a comment -

            Michael Wood-Vasey requested some files from a full run. I have attached two zip archives containing the requested outputs from two different runs of examples/runCfhtTest.sh >runCfhtTest.log using master. Each archive contains:

            • runCfhtTest.log
            • Cfht_output_r_PA1.json
            • Cfht_output_r_PA1.png
            • Cfht_output_r_PA2.json
            Michael Wood-Vasey added a comment -

            Yes. There is some randomness in the generation of the statistics. This is largely correct behavior.

            In brief, the repeatability KPMs are evaluated by comparing a randomly selected set of source pairs.

            This is really only an issue at low N, which is characteristic of the small test datasets.
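
            As an illustration, here is a minimal sketch of the effect (this is not validate_drp's actual code; the function name and pair-selection details are assumptions):

            import numpy as np

            def toy_pa1(mags, n_pairs=50, rng=None):
                """Toy PA1-style repeatability: RMS of magnitude differences
                over randomly chosen pairs of repeat observations of a source.
                Hypothetical stand-in for validate_drp's pair selection."""
                rng = rng or np.random.default_rng()
                # A fresh (unseeded) RNG draws a different set of pairs, and
                # therefore yields a different metric value, on every run.
                pairs = [rng.choice(len(mags), size=2, replace=False)
                         for _ in range(n_pairs)]
                diffs = np.array([mags[i] - mags[j] for i, j in pairs])
                return 1000.0 * np.sqrt(np.mean(diffs ** 2))  # mmag

            # 30 repeat observations with ~10 mmag scatter: low N, as in the
            # quick-test dataset.
            mags = 20.0 + 0.010 * np.random.default_rng(42).standard_normal(30)
            print(toy_pa1(mags))  # two calls on the same data give different
            print(toy_pa1(mags))  # values, because the pairs are re-drawn

            At larger N the pair-to-pair scatter averages down, which is why the run-to-run variation is only noticeable on the small test datasets.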

            Michael Wood-Vasey added a comment -

            Randomness is expected at low N due to the random choice of pairs used for comparison.


              People

              Assignee:
              Michael Wood-Vasey
              Reporter:
              Russell Owen
              Watchers:
              Michael Wood-Vasey, Russell Owen

