Discussions on Slack raised a valid concern about our use of connections that may be expected to stay alive for a long time (at least the duration of a PipelineTask's runQuantum), noting that SQLAlchemy's connection pooling functionality (which we currently turn off) might address this.
The solution is not as simple as having our Database class hold onto an Engine object and make a new Connection every time it is called, however: many Registry operations involve multiple calls to Database, and because those calls are expected to be part of the same transaction, they must all be executed by the same Connection.
One way to address this would be to split the Database interface into a pair of classes, one per-engine and one per-connection. We could probably make only the former an ABC, giving it protected virtual methods that take a Connection object as an argument. The per-connection object would be concrete and final, and would just hold the connection state and provide public forwarders to those protected methods.
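A minimal sketch of what that split might look like (all class and method names here are hypothetical, just to illustrate the shape, and I'm using `Any` rather than real SQLAlchemy types to keep the sketch self-contained):

```python
from abc import ABC, abstractmethod
from typing import Any


class Database(ABC):
    """Hypothetical per-engine class: holds the Engine, no Connection state."""

    def __init__(self, engine: Any):
        self._engine = engine

    @abstractmethod
    def _fetch_one(self, connection: Any, sql: str) -> Any:
        """Engine-specific implementation; the Connection is always passed in
        explicitly rather than held as state."""
        raise NotImplementedError


class ConnectionHandle:
    """Hypothetical concrete, final per-connection class: just the Connection
    plus public forwarders to the per-engine protected methods."""

    def __init__(self, database: Database, connection: Any):
        self._database = database
        self._connection = connection

    def fetch_one(self, sql: str) -> Any:
        # Public forwarder; multiple calls through the same handle are
        # guaranteed to use the same Connection, so they can share a
        # transaction.
        return self._database._fetch_one(self._connection, sql)
```

Whether `transaction` lives on the per-engine or per-connection side is left open here, matching the open question below.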
I'll have to think harder about how best to provide access to a per-connection object from the per-engine object, and which of those two should get the transaction method; I think that's the big open question.
Before we embark on this change, it would be good to verify (perhaps just by reading the docs) that SQLAlchemy's connection pooling actually provides some insulation from broken connections, as opposed to merely seeming like something that ought to. If we need to implement our own reconnection logic, I'd like to know that up front.
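For reference, SQLAlchemy does expose `create_engine` arguments aimed at stale connections; a minimal sketch (the URL is a stand-in, and whether these options actually cover our failure modes is exactly what we'd need to verify):

```python
import sqlalchemy

# pool_pre_ping tests each connection with a lightweight ping as it is
# checked out of the pool, transparently replacing it if it has gone stale.
# pool_recycle proactively retires connections older than the given number
# of seconds.
engine = sqlalchemy.create_engine(
    "sqlite://",  # stand-in for a real database URL
    pool_pre_ping=True,
    pool_recycle=3600,
)

with engine.connect() as connection:
    result = connection.execute(sqlalchemy.text("SELECT 1")).scalar()
```

Note that pre-ping only guards the checkout moment; a connection that dies mid-transaction would still surface an error, which is where our own reconnection logic might still be needed.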