Deploying autoencoders on microcontrollers: problem with the REDUCE_PROD operator. Any workarounds?

Hello,

I would like to implement a simple autoencoder on a microcontroller with the following architecture:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

def AutoEncoder(input_shape):
    model = Sequential()

    # Encoder ------------------------------------------------------------------

    model.add(Dense(128, activation='tanh', input_shape=input_shape))
    model.add(Dense(64, activation='tanh'))
    model.add(Dense(32, activation='tanh'))
    model.add(Dense(16, activation='tanh'))
    model.add(Dense(8, activation='tanh'))
    model.add(Dense(4, activation='tanh'))
    model.add(Dense(2, activation='tanh'))

    # Decoder ------------------------------------------------------------------

    model.add(Dense(4, activation='tanh'))
    model.add(Dense(8, activation='tanh'))
    model.add(Dense(16, activation='tanh'))
    model.add(Dense(32, activation='tanh'))
    model.add(Dense(64, activation='tanh'))
    model.add(Dense(128, activation='tanh'))
    model.summary()
    return model

When I try to port this to an ESP32 with TensorFlow Lite Micro, I get the message that the REDUCE_PROD operator is not implemented in Micro yet.

So my question is: how can I work around this problem? I don't want to add this whole operator to TF Micro myself. Help is appreciated.
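For reference, it can help to inspect which operators the converted model actually contains before flashing it to the board. This is a rough sketch of that check, assuming a recent TF 2.x (`tf.lite.experimental.Analyzer` is available from TF 2.7 on); whether REDUCE_PROD appears depends on your exact model and TF version:

```python
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# Small stand-in model (illustrative layer sizes, not the full autoencoder)
model = Sequential([Dense(4, activation='tanh', input_shape=(1, 128))])

# Convert to a TFLite flatbuffer, as one would before deploying to TF Micro
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Print the op list of the converted model; any op not in your TF Micro
# op resolver (or not implemented in Micro at all) will fail on device
tf.lite.experimental.Analyzer.analyze(model_content=tflite_model)
```

Netron can show the same information graphically by opening the `.tflite` file.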


Hey, it seems I found my mistake.

So that others don't make the same mistake, here is what I did.
I had set the input_shape of my network to something really odd: I added an extra dimension, which was completely unnecessary.
I realised something was wrong when I looked at the network in Netron. It had a strange shape and did not look like any other network with the same architecture.
So I checked my input_shape and spotted the mistake: I changed the input from (1, 128) to (128,).
Now it works like a charm.
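For anyone hitting the same thing, here is a minimal sketch of the two variants (layer size is illustrative). With the extra dimension, every Dense layer operates on rank-3 tensors, which is what produced the strange graph:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# Wrong: (1, 128) adds an extra axis, so Dense runs on
# rank-3 tensors of shape (batch, 1, features)
wrong = Sequential([Dense(64, activation='tanh', input_shape=(1, 128))])
print(wrong.output_shape)   # (None, 1, 64)

# Right: (128,) gives plain rank-2 tensors (batch, features)
right = Sequential([Dense(64, activation='tanh', input_shape=(128,))])
print(right.output_shape)   # (None, 64)
```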

TL;DR: Set your input shape correctly and check your network's shape with Netron to be sure.
If nobody else seems to have the same problem as you, you are most likely at fault.
