Fly-by comment here (just some food for thought...feel free to pay it no attention!) When I added this ability (i.e., reading the parquet catalog tables) in pipe_analysis, I did maintain the ability to read in the afw SourceCatalogs. This has proven very useful for folks doing quick test runs where the parquet table writing tasks were not run. The point may be moot, however, for a few reasons:
- it may simply not be worth running faro on small test runs, and/or it may be considered fair to make parquet table creation a prerequisite for running faro (if so, just be sure to note this in your docs/tutorials)
- if, as indicated in the description (and, I believe, the preferred route for production), you are converting to reading the DPDD-ified objectTable tables (which have different column names for nearly everything) rather than the *Coadd_obj tables (whose column names match the afw SourceCatalogs; I read in those versions in pipe_analysis for exactly this reason!), then it would be onerous (and potentially error-prone) to try to keep both schemas in play.
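Just to make the "both schemas in play" burden concrete: supporting the two would mean a translation layer where every column faro touches needs an explicit entry, and any missed entry is a latent bug. A minimal sketch of what that looks like (the mapping entries and function here are illustrative, not the actual DPDD or afw schemas, and `normalize_columns` is a hypothetical helper, not anything in pipe_analysis or faro):

```python
# Hypothetical translation layer between DPDD-style objectTable column
# names and afw SourceCatalog-style names. The entries are illustrative
# only; the real schemas have many more columns, which is the point:
# every one of them would need an entry like this.
DPDD_TO_AFW = {
    "psFlux": "base_PsfFlux_instFlux",
    "psFluxErr": "base_PsfFlux_instFluxErr",
    "ra": "coord_ra",
    "dec": "coord_dec",
}


def normalize_columns(columns, schema):
    """Map requested column names onto the given schema flavor.

    schema="afw" translates DPDD names to afw-style names; schema="dpdd"
    passes names through unchanged. An unmapped name raises KeyError,
    which is exactly the error-proneness concern above.
    """
    if schema == "dpdd":
        return list(columns)
    if schema == "afw":
        return [DPDD_TO_AFW[name] for name in columns]
    raise ValueError(f"unknown schema flavor: {schema!r}")
```

Every metric task would then have to route its column requests through something like this, so dropping the afw path (and requiring the parquet tables) keeps faro to a single schema.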