# Update faro to use parquet tables for patch and tract-level metric calculation


#### Details

• Type: Story
• Status: Done
• Resolution: Done
• Fix Version/s: None
• Component/s:
• Labels:
• Team: DM Science
• Urgent?: No

#### Description

Use the objectTable parquet files instead of "deepCoadd_forced_src" FITS files to calculate metrics in faro on tract and patch scales.

#### Activity

Lauren MacArthur added a comment -

Fly-by comment here (just some food for thought... feel free to pay it no attention!). When I added this ability (i.e., reading the parquet catalog tables) in pipe_analysis, I did maintain the ability to read in the afw SourceCatalogs. This has proven very useful for folks doing quick test runs where the parquet table writing tasks were not run. The point may be moot, however, for a few reasons:

• it may simply not be worth running faro on small test runs, and/or it may be considered fair to make parquet table creation a prerequisite for running faro (so just be sure to note this in your docs/tutorials);
• if, as indicated in the description (and, I believe, the preferred route for production), you are converting to reading the DPDD-ified objectTable tables – which have different column names for ~everything – rather than the *Coadd_obj tables (whose column names match the afw SourceCatalogs; I read these versions in pipe_analysis for just this reason!), then it would be onerous (and potentially error-prone) to try to keep both schemas in play.
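To make the "two schemas in play" concern concrete: supporting both naming conventions means maintaining an explicit column-name translation layer, and any column missed in the mapping fails only at runtime. The sketch below uses hypothetical afw-style and DPDD-style names purely for illustration; the real schemas differ for nearly every column.

```python
# Illustrative only: a translation table from hypothetical afw-style
# column names to hypothetical DPDD-style names. The actual mappings
# in the LSST schemas are far larger and differ for ~every column.
import pandas as pd

AFW_TO_DPDD = {
    "base_PsfFlux_instFlux": "psfFlux",
    "coord_ra": "coord_ra",
    "slot_Centroid_x": "x",
}


def translate_columns(df, mapping=AFW_TO_DPDD):
    """Rename afw-style columns to DPDD-style names.

    Raise on unmapped columns so that schema drift fails loudly
    instead of silently producing wrong metric inputs.
    """
    unknown = set(df.columns) - set(mapping)
    if unknown:
        raise KeyError(f"no DPDD mapping for columns: {sorted(unknown)}")
    return df.rename(columns=mapping)


afw_df = pd.DataFrame({
    "base_PsfFlux_instFlux": [1.0],
    "coord_ra": [10.0],
    "slot_Centroid_x": [512.0],
})
dpdd_df = translate_columns(afw_df)
```

Keeping such a mapping correct as both schemas evolve is exactly the maintenance burden the comment warns about, which is an argument for committing to a single schema.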

#### People

Assignee:
Erik Dennihy
Reporter:
Jeffrey Carlin
Watchers:
Brock Brendal [X] (Inactive), Jeffrey Carlin, Lauren MacArthur, Lee Kelvin