In today’s meeting, one of our OpenMined collaborators, Teo Milea, gave a presentation on PySyTFF to the group. We talked about planned future mechanisms to ensure that requests submitted by a customer of PySyTFF are safe, ranging from data owners manually auditing the requests to policy-based mechanisms that can, e.g., mandate the use of DP, bound the privacy budget, and involve static verification of submitted ML model code. We also discussed how PySyTFF exemplifies a recommended architectural pattern whereby the training process as a whole, from constructing TFF code, determining and verifying the privacy budget, to running the training loop, and finally making policy-based decisions on the release of computed artifacts, is encapsulated within the trusted platform boundary. Notes are on GitHub, and follow-up discussions will continue on Discord. Thanks!
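To make the architectural pattern above concrete, here is a minimal, hypothetical Python sketch of a policy-gated trusted boundary. All names (`PrivacyBudgetPolicy`, `TrustedPlatform`, `run_training_request`) are illustrative assumptions, not the PySyTFF API; the point is only that budget checks and artifact release happen inside the platform, not in customer code.

```python
class PrivacyBudgetPolicy:
    """Illustrative policy: caps total DP epsilon spent across requests."""

    def __init__(self, max_epsilon: float):
        self.max_epsilon = max_epsilon
        self.spent = 0.0

    def authorize(self, epsilon_cost: float) -> bool:
        # Refuse any request that would push spending past the budget.
        if self.spent + epsilon_cost > self.max_epsilon:
            return False
        self.spent += epsilon_cost
        return True


class TrustedPlatform:
    """Stands in for the trusted platform boundary: the training run and
    the release of its artifacts are both gated by the policy."""

    def __init__(self, policy: PrivacyBudgetPolicy):
        self.policy = policy

    def run_training_request(self, train_fn, epsilon_cost: float):
        if not self.policy.authorize(epsilon_cost):
            raise PermissionError("request denied: privacy budget exceeded")
        artifact = train_fn()  # e.g., a DP-enabled TFF training loop
        return artifact        # released only after the policy check passes


# Usage: two requests fit within the budget; a third is refused.
platform = TrustedPlatform(PrivacyBudgetPolicy(max_epsilon=1.0))
weights = platform.run_training_request(lambda: "model-weights", epsilon_cost=0.4)
```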