The request (from the Level 3 perspective) is to clarify our expectations for which LSST Python object types can be readily recreated from data retrieved from the LSST Archive, and how explicitly the stack will support this recreation. In other words, roughly: for which types do we support something like an object-relational mapping?
This is a key point in trying to understand what the Level 3 environment will be like for users.
I want to be sure I understand any differences in what is possible on resources at a DAC versus on computers outside the LSST project-provided facilities (setting aside issues related solely to I/O performance and throughput).
For images, it's clear that a user can use the documented ability to retrieve LSST image data in FITS format and read that FITS data back into an LSST DM Python image class. Ideally this would all go through the Butler, and presumably it is possible for all classes of image data available from the Archive, including calibration images (per the DPDD specification of the official data products).
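Concretely, I would expect the image case to look something like the following. This is only a sketch of the pattern I have in mind; the repository path, dataset type names, and dataId keys are placeholders, not a documented interface:

```
# Sketch: recreating LSST image objects from Archive data via the Butler.
# Repository path, dataset names, and dataId keys are illustrative only.
from lsst.daf.persistence import Butler

butler = Butler("/path/to/archive/repo")

# A processed exposure, recreated as an afw Exposure object...
calexp = butler.get("calexp", dataId={"visit": 12345, "ccd": 42})

# ...carrying its associated components along with it:
wcs = calexp.getWcs()
psf = calexp.getPsf()

# Presumably the same pattern would hold for calibration frames:
flat = butler.get("flat", dataId={"ccd": 42, "filter": "r"})
```

If this is roughly right, then the image side of the question reduces to confirming which dataset types the Archive-facing Butler will serve.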
It is much less clear to me whether any form of object recreation, short of a full recomputation, is envisioned for the Python objects that lie behind our catalog entries (Object, Source, etc.), or for derived metadata. For instance, can the computed PSF model for an image be retrieved from the database and readily recreated in Python in a form satisfying our PSF API? A WCS?
If recreation is supported, what will those interfaces look like?
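To make the question concrete, what I am imagining is something along these lines. This is purely hypothetical pseudocode: none of these dataset names or retrieval calls is an existing interface, and I use them only to illustrate the shape of the question:

```
# Hypothetical sketch of catalog-side object recreation; every dataset
# name here is an assumption, illustrating the question, not a real API.
butler = Butler("/path/to/archive/repo")

# Could the PSF model computed for an image be pulled from the database
# and rehydrated as an object satisfying the stack's PSF API?
psf = butler.get("psf_model", dataId={"visit": 12345, "ccd": 42})
psf_image = psf.computeImage(position)  # position: a point of interest

# Likewise for a WCS, without re-reading the full image:
wcs = butler.get("wcs", dataId={"visit": 12345, "ccd": 42})
sky = wcs.pixelToSky(x, y)
```

Whether the answer is "yes, via the Butler," "yes, via database-specific deserialization code," or "no, only a full recomputation," it would be very helpful to have that stated explicitly.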