Hi, I was trying to figure out the capacity of a neural network using TensorFlow. The project I used to test different numbers of neurons and layers is the TinyML sine wave project (Intro to TinyML Part 2: Deploying a TensorFlow Lite Model to Arduino | Digi-Key Electronics - YouTube). I tried 1 layer with 256 neurons, 2 layers with 128, 4x64, 8x32, 16x16, 32x8, 64x4, and 128x2 (layers x neurons per layer). I had problems with the models that had fewer than 16 neurons per layer. I also noticed that the number of layers had to decrease as I used fewer neurons per layer in order to still get a sine wave prediction. When I did 32x8 … I ended up with a straight line as the prediction. I am wondering why this is happening?
The more hidden layers you have, the more parameters there are to learn (the exact count depends on what kinds of layers you use). The higher the number of parameters, the more memory is required.
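As a rough illustration of how the parameter count depends on the layer configuration (assuming plain fully connected Dense layers with a 1-D input and output, as in the sine wave project; `dense_param_count` is just a hypothetical helper name, not part of any library):

```python
# Parameter count for a stack of fully connected (Dense) layers
# with 1 input and 1 output, as in the TinyML sine wave example.
# Each Dense layer has (inputs * units) weights plus `units` biases.

def dense_param_count(input_dim, hidden_widths, output_dim):
    """Total trainable parameters of an MLP with the given hidden widths."""
    params, prev = 0, input_dim
    for width in hidden_widths:
        params += prev * width + width  # weights + biases of this layer
        prev = width
    params += prev * output_dim + output_dim  # output layer
    return params

# Configurations from the question: (layers, neurons per layer)
configs = [(1, 256), (2, 128), (4, 64), (8, 32),
           (16, 16), (32, 8), (64, 4), (128, 2)]

for layers, width in configs:
    n = dense_param_count(1, [width] * layers, 1)
    print(f"{layers} x {width}: {n} parameters")
# e.g. 1 x 256 -> 769 parameters, 2 x 128 -> 16897, 32 x 8 -> 2257
```

Note that for these particular configurations the deep-and-narrow networks actually end up with far fewer parameters than the shallow wide ones (2x128 has about 16.9k parameters, while 32x8 has only about 2.3k), which is consistent with the narrower models lacking the capacity to fit the sine wave.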
Please share standalone code for further support. Thanks!