Details
- Type: Story
- Status: To Do
- Resolution: Unresolved
- Fix Version/s: None
- Component/s: Stack Documentation and UX
- Labels:
- Epic Link:
- Team: SQuaRE
Description
SDSS has a handy webpage with descriptions of all of their bitmask flags:
http://www.sdss.org/dr12/algorithms/bitmasks/#ListofBitmasks
It would be exceptionally useful for LSST to produce a similar webpage. I could see it being auto-built from our current flags documentation, which would also help us identify places where our current docstrings are lacking (as many of them are).
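To illustrate the auto-build idea, here is a minimal sketch of generating a bitmask reference table from flag definitions. The `FLAG_DOCS` registry and its flag names are purely illustrative stand-ins, not the real LSST flag metadata API; in practice the bit numbers and docstrings would be read from the mask-plane definitions in the stack.

```python
# Hypothetical flag registry: name -> (bit number, docstring).
# In a real implementation this would be harvested from the stack's
# mask-plane definitions rather than hand-written.
FLAG_DOCS = {
    "BAD": (0, "Pixel is in a bad detector region"),
    "SAT": (1, "Pixel is saturated"),
    "INTRP": (2, "Pixel value was interpolated"),
}


def render_bitmask_table(flags):
    """Render a Markdown table of bit, name, and description, sorted by bit."""
    lines = ["| Bit | Name | Description |", "| --- | --- | --- |"]
    for name, (bit, doc) in sorted(flags.items(), key=lambda kv: kv[1][0]):
        lines.append(f"| {bit} | {name} | {doc} |")
    return "\n".join(lines)


print(render_bitmask_table(FLAG_DOCS))
```

A generator like this would surface missing or placeholder docstrings immediately, since they would show up as empty cells in the published table.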
Issue Links
- relates to
  - DM-28280 Documenting Butler DatasetTypes (To Do)
  - DM-13139 flag non-finite measurement uncertainties (To Do)
  - DM-6887 Document the semantics of measurement algorithm flags (Won't Fix)
  - DM-2297 Associate documentation with new Mask planes (To Do)
  - DM-4201 Documentation and technical debt in meas_base/PixelFlags.cc (To Do)
  - DM-9050 Add flags for sources used in astrometric and photometric calibration (Done)
It's been a while and I have some fresh perspective, especially having engineered the task documentation framework.
It sounds like there are now possibly two different things we're talking about. I think the original request was to document Butler datasets, so I'm going to stick with that scope. Documenting our databases and data products (the "LSST Data Model") also needs to be done, but that's a separate effort and, as far as I can see, needs its own ticket.
For Butler datasets, I now believe I can create canonical documentation topics in pipelines.lsst.io for each dataset. These topics will be linked to the tasks that generate and transform them. From the ground up, we can document how each task modifies a table schema or its metadata, for example, and that information can flow into both the published documentation for a task and the canonical documentation for the corresponding Butler dataset.
What we discussed last November still stands: we can't publish a table of dataset columns that's 100% relevant to any particular pipeline. But with the system I've started to build, we can certainly give users the tools they need to identify which columns might be part of their datasets, and expose knowledge about the task that generated those columns and what they mean. Again, this strategy is specific to the pipelines.lsst.io documentation and Butler datasets.
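As a sketch of the aggregation described above: if each task declared which columns it contributes to a dataset's schema, the dataset's documentation topic could collect those declarations so users can see which columns might appear and which task produced them. The task and column names below are invented for illustration; they are not the real registration API.

```python
# Hypothetical per-task column declarations: task name -> list of
# (column name, description) pairs the task adds to a dataset's schema.
# Names are illustrative only.
TASK_COLUMNS = {
    "CharacterizeImageTask": [
        ("base_PsfFlux_instFlux", "PSF instrumental flux"),
    ],
    "CalibrateTask": [
        ("calib_photometry_used", "Source was used in photometric calibration"),
    ],
}


def dataset_column_reference(task_columns):
    """Flatten per-task declarations into sorted (column, description, task) rows."""
    rows = []
    for task, columns in task_columns.items():
        for name, doc in columns:
            rows.append((name, doc, task))
    return sorted(rows)
```

Because the rows carry the originating task, the rendered dataset topic can link each column back to the task documentation, which is exactly the two-way flow of information proposed above.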