# biasCorr calculation code results disagree when using main task code

#### Details

• Type: Story
• Status: Done
• Resolution: Done
• Fix Version/s: None
• Component/s:
• Labels: None
• Story Points: 5
• Team: Data Release Production

#### Description

```python
import lsst.cp.pipe.makeBrighterFatterKernel as kernelGen

biases, means, xcorrs = kernelGen.calcBiasCorr([70000, 90000, 110000], (2000, 1000),
                                               useTaskCode=False, nSigma=5, repeats=3)
```

and

```python
biases2, means2, xcorrs2 = kernelGen.calcBiasCorr([70000, 90000, 110000], (2000, 1000),
                                                  useTaskCode=True, nSigma=5, repeats=3)
```

differ, and they shouldn't. Investigate and fix.

Furthermore, as per what was originally DM-15401: something seems to not be quite right in the calculation of cross-correlations in MakeBrighterFatterKernelTask, as the sum of the cross-correlations is coming out much higher than it should (I think, based at least on the code it was ported from).

Find out why, and once that's done, put the config value xcorrCheckRejectLevel back to the nominal value of 0.2 (or was it 0.1, check Will's code), if appropriate.

Given that these two issues may be related, they are being combined here.
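For context, the quantity at stake can be illustrated with a toy sketch. The real `calcBiasCorr` lives in `lsst.cp.pipe` and works on simulated flats; the hypothetical `toy_bias_corr` below only demonstrates the underlying effect, namely that sigma-clipping systematically biases a variance estimate low, and the bias factor is what biasCorr is meant to undo:

```python
# Toy illustration only -- NOT the lsst.cp.pipe implementation.
# It measures how much sigma-clipping shrinks the variance of a
# Poisson-like flat, which is the kind of factor biasCorr captures.
import numpy as np

def toy_bias_corr(fluxes, shape, nSigma=5, repeats=3, seed=42):
    """Return {flux: [ratio of clipped to unclipped variance, ...]}."""
    rng = np.random.default_rng(seed)
    biases = {}
    for flux in fluxes:
        ratios = []
        for _ in range(repeats):
            im = rng.poisson(flux, size=shape).astype(float)
            mean, std = im.mean(), im.std()
            # Keep only pixels within nSigma of the mean, as a clipped
            # variance estimator would.
            clipped = im[np.abs(im - mean) < nSigma * std]
            ratios.append(clipped.var() / im.var())
        biases[flux] = ratios
    return biases

biases = toy_bias_corr([70000, 90000, 110000], (200, 100), nSigma=3)
```

Because clipping can only discard tail pixels, each ratio is at most 1; tighter clips (smaller nSigma) push it further below 1.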

#### Activity

Merlin Fisher-Levine added a comment -

How different? Really quite different: for nSigma=5 (the nominal setting) the task code calculates effectively no bias (so a value of 1), whereas Will's code gives 0.9241.

To reproduce:

```python
import lsst.cp.pipe.makeBrighterFatterKernel as makeBf

makeBf.calcBiasCorr([50000, 100000], (1000, 2000), useTaskCode=True)
```

will give very different bias results to

```python
makeBf.calcBiasCorr([50000, 100000], (1000, 2000), useTaskCode=False)
```

If you change the number of sigma clipped to, the results become more similar, but they don't converge. We'd expect them always to be identical anyway, so convergence isn't really the interesting part; I mention it only as debug info, and because it highlights that the task code doesn't always produce something consistent with 1 within errors.

The useTaskCode=False configuration reproduces the result from Will, which was appropriate for HSC use.

Merlin Fisher-Levine added a comment -

Doing

```python
import lsst.cp.pipe.makeBrighterFatterKernel as makeBf

discrepancy = {}

for nSig in range(1, 6):
    print("nSig {} Task:".format(nSig))
    task = makeBf.calcBiasCorr([50000], (1000, 2000), nSigmaClip=nSig, useTaskCode=True)

    print("nSig {} custom code:".format(nSig))
    nonTask = makeBf.calcBiasCorr([50000], (1000, 2000), nSigmaClip=nSig, useTaskCode=False)
    print()

    discrepancy[nSig] = nonTask[0][50000][0] / task[0][50000][0]

print(discrepancy)
```

gives

```python
{1: 0.9241000000048462,
 2: 0.9241000000048742,
 3: 0.9240999999990274,
 4: 0.924099999995785,
 5: 0.9241000000040692}
```

showing that everything is consistent with the task result being the non-task result divided by the nominal biasCorr.

Having now patched the code to use the task code, and to skip that division when the function is being used to calculate the parameter itself, we get (as of commit ec27fac6feed):

```python
{2: 0.9999999999975324,
 3: 0.9999999999998973,
 4: 0.9999999999998599,
 5: 1.0000000000006617}
```

so the two code paths now agree to machine precision.

All thanks go to Robert for the second set of eyes here.

Merlin Fisher-Levine added a comment -

Further investigation on the second part of this ticket regarding the default value for xcorrCheckRejectLevel shows that it should in fact just be >>1. How much larger is not clear (it's a tunable parameter after all), but the value from the original code was never appropriate/used as it simply rejects all data.

Setting the default value to 2, as this effectively includes all data; if users later find they want to use this parameter to cut bad data, they can tune it as they see fit.
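A minimal sketch of the kind of sanity check this config value gates, assuming the threshold compares summed off-centre correlation power to the zero-lag value (the actual check inside MakeBrighterFatterKernelTask may differ in detail):

```python
# Hypothetical illustration of an xcorrCheckRejectLevel-style test,
# not the task's actual implementation.
import numpy as np

def xcorr_passes(xcorr, rejectLevel=2.0):
    """Accept a cross-correlation unless its summed off-centre power
    exceeds rejectLevel times the zero-lag (central) value."""
    central = xcorr[0, 0]
    tail = np.abs(xcorr).sum() - abs(central)
    return tail <= rejectLevel * abs(central)

# A well-behaved correlation: small off-centre terms relative to zero lag.
xcorr = np.array([[1.0, 0.05],
                  [0.04, 0.01]])
xcorr_passes(xcorr, rejectLevel=2.0)   # passes with the new default of 2
xcorr_passes(xcorr, rejectLevel=0.05)  # an overly tight level rejects it
```

With a threshold near the old 0.1-0.2 level, even modest off-centre sums trip the check, which matches the observation that the original value simply rejected all data.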

Paul Price added a comment -

Looks good!

Merlin Fisher-Levine added a comment -

Merged.


#### People

Assignee:
Merlin Fisher-Levine
Reporter:
Merlin Fisher-Levine
Reviewers:
Paul Price
Watchers:
John Swinbank, Merlin Fisher-Levine, Paul Price, Robert Lupton