Proximal-gradient type optimization for training

I noticed that Keras lets us specify a constraint on model parameters (see the `kernel_constraint` argument of the `Dense` layer, for example), and the protocol a constraint object implements appears to be a Euclidean projection onto the constraint set: after each gradient update, the weights are replaced by their projection.
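To make the question concrete, here is a minimal numpy sketch of that protocol as I understand it: a constraint is just a callable mapping the freshly updated weights to their projection onto the feasible set. The `NonNegProjection` class below is a hypothetical stand-in (not the actual Keras `NonNeg` class, which operates on tensors), and the manual `w = proj(w)` call mimics what the framework does after each optimizer step.

```python
import numpy as np

class NonNegProjection:
    """Sketch of the Keras constraint protocol: a callable that returns
    the Euclidean projection of the weights onto the constraint set
    (here, the nonnegative orthant)."""

    def __call__(self, w):
        return np.maximum(w, 0.0)

# After each gradient update, the framework effectively does: w = constraint(w)
proj = NonNegProjection()
w = np.array([-0.5, 0.3, -0.1, 2.0])
w = proj(w)  # negative entries are clipped to zero
```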

Is it possible to use the same mechanism to implement a proximal-gradient-type training algorithm, by supplying the regularizer's proximal operator in place of a projection as the "constraint"?
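For reference, here is what I mean, sketched in plain numpy rather than Keras: ISTA (proximal gradient descent) for L1-regularized least squares, where the soft-thresholding prox is applied after each gradient step, exactly where a Keras constraint would run. One wrinkle I can already see: the threshold must scale with the learning rate (`lr * lam`), and a Keras constraint receives only the weights, not the optimizer state, so the step size would have to be hardcoded into the "constraint" object. The function names and problem setup below are mine, just for illustration.

```python
import numpy as np

def soft_threshold(w, t):
    # Proximal operator of t * ||w||_1 (soft-thresholding).
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def ista(X, y, lam, lr, n_steps):
    """Proximal gradient descent for min_w 0.5*||Xw - y||^2 + lam*||w||_1.
    The prox after each step plays the role of a Keras 'constraint',
    except its threshold depends on the learning rate."""
    w = np.zeros(X.shape[1])
    for _ in range(n_steps):
        grad = X.T @ (X @ w - y)                  # gradient of the smooth part
        w = soft_threshold(w - lr * grad, lr * lam)  # prox step
    return w

# Tiny sparse-recovery demo: only the first 3 coefficients are nonzero.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))
w_true = np.zeros(10)
w_true[:3] = [2.0, -1.5, 1.0]
y = X @ w_true
w_hat = ista(X, y, lam=0.5, lr=0.01, n_steps=2000)
```

If this correspondence holds, a `keras.constraints.Constraint` subclass whose `__call__` applies `soft_threshold(w, lr * lam)` should turn standard SGD into ISTA, at least for a fixed learning rate.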