Localised training and dynamic class targets

TensorFlow appears to be designed for ML engineering projects (end-to-end backprop training). Non-backprop learning-algorithm research, however, requires a) the ability to specify precisely which parts of a model are being trained at a given time (i.e. localised training), and b) control over the precise training objective (class targets), where the class targets may be a function of the output of another part of the network rather than a predefined training/test set. I have encountered a number of exploratory algorithms that appear impossible to implement within the current framework (without copying weights across independent graphs).
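
To make requirements a) and b) concrete, here is a minimal NumPy sketch (deliberately not TensorFlow) of the kind of update rule in question: only one module's weights receive updates (localised training), and the targets are computed from another part of the network's activations rather than from a fixed label set. The two-module architecture and the particular target function are hypothetical illustrations, not any specific algorithm from the literature.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two "modules": W1 is frozen, W2 is the only part being trained (localised training).
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 3))
W1_init = W1.copy()

x = rng.normal(size=(16, 4))

def forward(x):
    h = np.maximum(x @ W1, 0.0)   # frozen feature module
    y = h @ W2                    # trainable readout module
    return h, y

def dynamic_targets(h):
    # Dynamic class targets: a function of another part of the network's
    # output (hypothetically, the first 3 hidden dimensions), not a
    # predefined training/test set.
    return h[:, :3]

def mse(y, t):
    return float(np.mean((y - t) ** 2))

h, y = forward(x)
loss_before = mse(y, dynamic_targets(h))

lr = 0.01
for step in range(200):
    h, y = forward(x)
    t = dynamic_targets(h)
    err = y - t                    # gradient of MSE w.r.t. y
    grad_W2 = h.T @ err / len(x)   # gradient reaches W2 only
    W2 -= lr * grad_W2             # W1 is untouched: training is localised

h, y = forward(x)
loss_after = mse(y, dynamic_targets(h))
```

Expressing this in TensorFlow's single end-to-end graph is awkward precisely because the target tensor depends on a live intermediate activation and the update must be confined to one variable subset.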