As I just did with DM-28280, I'm linking DM-37544 and DM-33034 as relevant to how I see this happening. I won't repeat everything I said there, but in short I see us declaring the important catalog dataset types in pipelines, then looking at the schema datasets for those catalogs to find flags when generating pipelines.lsst.io docs for packages with "leaf" pipelines like drp_pipe. That will reveal (as John Swinbank said ages ago) that a lot of the docs for those flags are not very good, but that's a separate problem; at least we already have places to put that documentation.
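To make the doc-generation side concrete, here's a rough sketch of what scanning an afw.table schema dataset for flag fields might look like. The repo path, collection, and the "src_schema" dataset type name are placeholders for illustration, and the exact schema-introspection calls may need adjusting:

```python
from lsst.daf.butler import Butler

# Placeholder repo and collection; in practice these would come from the
# doc-build configuration for the "leaf" pipeline package being documented.
butler = Butler("/repo/main", collections=["HSC/defaults"])

# Init-output schema datasets (e.g. "src_schema") are empty catalogs that
# carry the full afw.table schema, including per-field doc strings.
schema = butler.get("src_schema").schema

for name in schema.getOrderedNames():
    field = schema.find(name).field
    if field.getTypeString() == "Flag":
        # Emit the flag name and its doc string into the generated docs.
        print(f"{field.getName()}: {field.getDoc()}")
```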
It's worth noting, however, that we do not currently have a way to propagate schema information (let alone docs) from afw.table schema datasets to parquet. I think we could now save schema information for parquet files as initOutputs, thanks to Eli Rykoff's work on the parquet formatters. But we need to come up with conventions for doing so, and to figure out the relationship between that task-written schema information and what's in sdm_schemas. It may work better to write docs for standardized schemas directly in the sdm_schemas YAML files, but I worry that that's too "far away" from the code that sets the flags to be maintainable.
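One possible convention (purely a sketch, not an agreed interface; the connection name, dataset type name, storage class, and dimensions below are all assumptions) would be for the task that writes the parquet table to also declare an init-output carrying its schema:

```python
import lsst.pipe.base as pipeBase
import lsst.pipe.base.connectionTypes as cT


class ObjectTableConnections(pipeBase.PipelineTaskConnections,
                             dimensions=("skymap", "tract", "patch")):
    # Hypothetical convention: an empty DataFrame whose columns define the
    # output table's schema, written once per run as an init-output.  Column
    # doc strings would have to ride along as attached metadata, since pandas
    # itself has no per-column description field.
    outputSchema = cT.InitOutput(
        doc="Schema of the object table, as an empty DataFrame.",
        name="objectTable_schema",
        storageClass="DataFrame",
    )
```

Whatever form that takes, the doc generator could read it the same way it reads the afw.table schema datasets above, and a separate check could compare it against what's in sdm_schemas.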