Conceptual question about extending a neural network

Currently I am extending the VGG16 neural network (I implemented it in TypeScript) using this code:

/* Create the Bounding Box Model, using a regression head on top of VGG16. */

// Layer 19: Flatten the base model's output feature maps
const flat = tf.layers.flatten().apply(baseModel.outputs);
// Layer 20: Fully Connected Layer
const dense1 = tf.layers.dense({ units: 128, activation: "relu" }).apply(flat);
// Layer 21: Fully Connected Layer
const dense2 = tf.layers.dense({ units: 32, activation: "relu" }).apply(dense1);
// Layer 22: Sigmoid Layer (sigmoid output is bounded between 0 and 1)
const outputLayer = tf.layers
  .dense({ units: 4, activation: "sigmoid" })
  .apply(dense2);

const vggBBox = tf.model({
  inputs: baseModel.inputs,
  outputs: outputLayer as tf.SymbolicTensor[],
});

However, I find this way of writing quite cumbersome. Is tf.model the only way to extend the network, or is it possible to append/push layers to a sequential model's layers array, or is there some other approach?

There is a Keras-style “Layers” API for TFJS: TensorFlow.js layers API for Keras users

// JavaScript:
import * as tf from '@tensorflow/tfjs';

// Build and compile model.
const model = tf.sequential();
model.add(tf.layers.dense({units: 1, inputShape: [1]}));
model.compile({optimizer: 'sgd', loss: 'meanSquaredError'});

// Generate some synthetic data for training.
const xs = tf.tensor2d([[1], [2], [3], [4]], [4, 1]);
const ys = tf.tensor2d([[1], [3], [5], [7]], [4, 1]);

// Train model with fit().
await model.fit(xs, ys, {epochs: 1000});

// Run inference with predict().
model.predict(tf.tensor2d([[5]], [1, 1])).print();

Yes, but the reason for the code above

const vggBBox = tf.model({
  inputs: baseModel.inputs,
  outputs: outputLayer as tf.SymbolicTensor[],
});

is that the base network is not trainable; I set it up with trainable = false. However, I do not see a way to do that with the Layers API. Would something like tf.models.layers.push([prevPrevOutLay, prevOutLay, outLay]) be a reasonable choice?