Training two identical neural networks with one output

Hello, I am interested in generating and training a certain neural network structure, but am unsure how to get started.

In essence, I need to call the same neural network on every instance of an object, where each call produces a scalar output. The issue is that I cannot train on these outputs directly, as only the sum of the outputs is known. While I know I can concatenate these networks together, I also need to somehow tell the optimizer that, although the inputs differ, the weights of each instance of the network must be identical and cannot be updated independently. How do I do this? Is this something that needs to be specified in the network definition, or in the optimizer?

Hi @Andres_Cordoba,

Here are my thoughts:

Shared-weight neural networks: every instance of the network uses the same set of weights, so there is only one copy of the parameters for a single optimizer to update. One gradient step therefore affects all instances at once; identical inputs yield identical outputs, while different inputs still map to different scalar values through the same shared function.
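
Here is a minimal sketch of that pattern, assuming TensorFlow/Keras (this forum's stack); `shared_net`, `n_instances`, and `feature_dim` are made-up illustration values. The key point is that the sub-network is created once and that same instance is called on every input, so the optimizer only ever sees one set of variables, and only the summed output is supervised:

```python
import numpy as np
from tensorflow import keras

# The shared sub-network: created ONCE, so its weights exist exactly once.
shared_net = keras.Sequential([
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1),  # scalar output per instance
])

n_instances = 3   # number of object instances (illustrative)
feature_dim = 4   # per-instance feature size (illustrative)

inputs = [keras.Input(shape=(feature_dim,)) for _ in range(n_instances)]
# Calling the SAME shared_net object on each input reuses its weights;
# the optimizer sees a single set of variables, updated jointly.
scalars = [shared_net(x) for x in inputs]
total = keras.layers.Add()(scalars)  # only the sum is supervised

model = keras.Model(inputs=inputs, outputs=total)
model.compile(optimizer="adam", loss="mse")

# Dummy data: the targets are sums, never the per-instance scalars.
xs = [np.random.rand(32, feature_dim).astype("float32") for _ in range(n_instances)]
y_sum = np.random.rand(32, 1).astype("float32")
model.fit(xs, y_sum, epochs=2)
```

Because the sum is differentiable with respect to each scalar output, gradients flow back through every call and accumulate into the single shared weight set, which is exactly the "cannot be changed independently" constraint you described.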

Please let me know if this helps, or share what you find if you end up taking a different approach.

Thanks.


Hi @Andres_Cordoba ,

You can check out joint embedding architectures, which let you train two identical or near-identical networks with shared weights. This means you can use a single optimizer, just as @Laxma_Reddy_Patlolla mentioned.
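
For completeness, here is a hedged sketch of the joint-embedding (Siamese) variant of the same idea, again assuming TensorFlow/Keras; the encoder sizes and the squared-distance output are illustrative choices, not the only option. Both branches call the same `encoder` instance, so the weight tying happens by construction:

```python
import tensorflow as tf
from tensorflow import keras

# One encoder instance -> one set of weights, shared by both branches.
encoder = keras.Sequential([
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(8),  # embedding dimension (illustrative)
])

a = keras.Input(shape=(4,))
b = keras.Input(shape=(4,))
emb_a = encoder(a)  # same encoder instance on both branches,
emb_b = encoder(b)  # so the weights are tied automatically

# Squared distance between the two embeddings as the model output.
diff = keras.layers.Subtract()([emb_a, emb_b])
dist = keras.layers.Lambda(
    lambda d: tf.reduce_sum(tf.square(d), axis=-1, keepdims=True)
)(diff)

siamese = keras.Model(inputs=[a, b], outputs=dist)
siamese.compile(optimizer="adam", loss="mse")
siamese.summary()  # the encoder weights appear only once
```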
