Preprocessing Layers and KerasTuner Cooperation

Hi!

I’m having trouble making the preprocessing layers and KerasTuner cooperate.

I am referring to the Load CSV data | TensorFlow Core tutorial for the preprocessing part, and to the Getting started with KerasTuner documentation for the Keras Tuner part.

Briefly, here’s the code.
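For reference, the snippets below assume the usual imports from those two guides (listed here by me, since neither excerpt repeats them):

import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow.keras import layers
import keras_tuner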
The tutorial loads the data:

titanic = pd.read_csv("https://storage.googleapis.com/tf-datasets/titanic/train.csv")
titanic_features = titanic.copy()
titanic_labels = titanic_features.pop('survived')

It creates the symbolic input tensors for the features in a dictionary:

inputs = {}

for name, column in titanic_features.items():
  dtype = column.dtype
  if dtype == object:
    dtype = tf.string
  else:
    dtype = tf.float32

  inputs[name] = tf.keras.Input(shape=(1,), name=name, dtype=dtype)

inputs

Then it applies normalization to the numerical features:

numeric_inputs = {name:input for name,input in inputs.items()
                  if input.dtype==tf.float32}

x = layers.Concatenate()(list(numeric_inputs.values()))
norm = layers.Normalization()
norm.adapt(np.array(titanic[numeric_inputs.keys()]))
all_numeric_inputs = norm(x)

all_numeric_inputs

It creates a list:

preprocessed_inputs = [all_numeric_inputs]

It one-hot encodes the categorical features:

for name, input in inputs.items():
  if input.dtype == tf.float32:
    continue

  lookup = layers.StringLookup(vocabulary=np.unique(titanic_features[name]))
  one_hot = layers.CategoryEncoding(num_tokens=lookup.vocabulary_size())

  x = lookup(input)
  x = one_hot(x)
  preprocessed_inputs.append(x)

and then it concatenates everything:

preprocessed_inputs_cat = layers.Concatenate()(preprocessed_inputs)

titanic_preprocessing = tf.keras.Model(inputs, preprocessed_inputs_cat)
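At this point the tutorial also visualizes the preprocessing graph, which is a handy way to confirm how the inputs flow into the concatenated output:

tf.keras.utils.plot_model(model=titanic_preprocessing, rankdir="LR", dpi=72, show_shapes=True)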

Now what I want to do is insert this preprocessing part into KerasTuner. I tried this:

def titanic_model(units, activation):

    model_inputs = tf.keras.Input(shape=(28,))

    dense_1 = layers.Dense(units=units, activation=activation)(model_inputs)
    dense_output = layers.Dense(1)(dense_1)
    body = tf.keras.Model(inputs = model_inputs, outputs =  dense_output)

    return body

def build_model(hp,preprocessing_head, inputs):

    units = hp.Int("units", min_value=32, max_value=512, step=32)
    activation = hp.Choice("activation", ["relu", "tanh"])

    preprocessed_inputs = preprocessing_head(inputs)
    result = titanic_model(units,
                           activation)(preprocessed_inputs)

    model = tf.keras.Model(inputs, result)

    model.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
                  optimizer=tf.keras.optimizers.Adam(),
                  metrics = ['accuracy'])

    return model

titanic_model = build_model(keras_tuner.HyperParameters(),titanic_preprocessing, inputs)

but it gives me the following error:

Inputs to a layer should be tensors. Got: <keras_tuner.engine.hyperparameters.HyperParameters
object at 0x7ff52844da30>

I cannot understand whether I am close to the solution or whether this is not the right way to proceed at all.
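For what it’s worth, one way I could imagine keeping the extra arguments while still handing the tuner a function that only takes hp is to bind them beforehand with functools.partial (my own sketch, not taken from either guide):

import functools

build_model_bound = functools.partial(build_model,
                                      preprocessing_head=titanic_preprocessing,
                                      inputs=inputs)

# the tuner (or a manual test) then only needs to supply hp
test_model = build_model_bound(keras_tuner.HyperParameters())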

However, the workaround I found was to reference the preprocessing model (titanic_preprocessing) and the inputs dictionary directly inside the build_model function, instead of passing them as arguments.

Hence:

def titanic_model(units, activation):

    model_inputs = tf.keras.Input(shape=(28,))

    dense_1 = layers.Dense(units=units, activation=activation)(model_inputs)
    dense_output = layers.Dense(1)(dense_1)
    body = tf.keras.Model(inputs = model_inputs, outputs =  dense_output)

    return body

def build_model(hp):

    units = hp.Int("units", min_value=32, max_value=512, step=32)
    activation = hp.Choice("activation", ["relu", "tanh"])

    preprocessed_inputs = titanic_preprocessing(inputs)
    result = titanic_model(units,
                           activation)(preprocessed_inputs)

    model = tf.keras.Model(inputs, result)

    model.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
                  optimizer=tf.keras.optimizers.Adam(),
                  metrics = ['accuracy'])

    return model

model = build_model(keras_tuner.HyperParameters())  # a fresh name avoids shadowing the titanic_model function

In this case it seems to work, and by setting up the tuner

tuner = keras_tuner.RandomSearch(
    hypermodel = build_model,
    objective=keras_tuner.Objective("accuracy", direction="max"),
    max_trials = 1,
    overwrite = True,
    directory = "tuner_dir",
    project_name = "regression_tuner")

and running the search works:

tuner.search(x=titanic_features_dict, y=titanic_labels, epochs=10)
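(Here titanic_features_dict is the per-column dictionary of NumPy arrays built earlier in the Load CSV tutorial, roughly:)

titanic_features_dict = {name: np.array(value)
                         for name, value in titanic_features.items()}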

However, I am doubting this solution and would appreciate your feedback on this.

Thank you!

Did you receive any response to this query? If not, I will try to work through it…

Arindam

Hi Arindam!

Unfortunately I still do not have the solution to the question :slightly_frowning_face:

A possible solution I had in mind was to do the preprocessing inside a function of the model, but it does not seem convenient.

Eventually, I found the right way to put it together. I just needed to read the documentation better. lol

Here is what I changed:

After loading, I split the data:

from sklearn.model_selection import train_test_split

titanic = pd.read_csv("https://storage.googleapis.com/tf-datasets/titanic/train.csv")
titanic_features = titanic.copy()
titanic_labels = titanic_features.pop('survived')

X_train, X_test, y_train, y_test = train_test_split(titanic_features, titanic_labels,
                                                    random_state=random_state,  # any fixed seed, e.g. random_state = 42
                                                    test_size=0.20,
                                                    )

Then I create a function that builds the symbolic input tensor for each feature:

def create_model_inputs():
  
  inputs = {}
  for name, column in titanic_features.items():
    dtype = column.dtype
    if dtype == object:
      dtype = tf.string
    else:
      dtype = tf.float32

    inputs[name] = tf.keras.Input(shape=(1,), name=name, dtype=dtype)

  return inputs

You might have seen this function in other guides.

Then, I perform the preprocessing steps in this block of code:

inputs = {}
for name, column in titanic_features.items():
  dtype = column.dtype
  if dtype == object:
    dtype = tf.string
  else:
    dtype = tf.float32

  inputs[name] = tf.keras.Input(shape=(1,), name=name, dtype=dtype)

numeric_inputs = {name:input for name,input in inputs.items()
                if input.dtype==tf.float32}

x = layers.Concatenate()(list(numeric_inputs.values()))
norm = layers.Normalization()
norm.adapt(np.array(titanic[numeric_inputs.keys()]))
all_numeric_inputs = norm(x)


preprocessed_inputs = [all_numeric_inputs]

for name, input in inputs.items():
    if input.dtype == tf.float32:
        continue

    lookup = layers.StringLookup(vocabulary=np.unique(titanic_features[name]))
    one_hot = layers.CategoryEncoding(num_tokens=lookup.vocabulary_size())

    x = lookup(input)
    x = one_hot(x)
    preprocessed_inputs.append(x)
      
preprocessed_inputs_concatenated = layers.Concatenate()(preprocessed_inputs)

titanic_preprocessing = tf.keras.Model(inputs, preprocessed_inputs_concatenated)

It is divided into two parts: the first does the same thing as create_model_inputs(), and the second applies normalization to the numeric features and one-hot encodes the categorical ones.

The last line creates the preprocessing model, titanic_preprocessing.
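As a quick sanity check (borrowed from the Load CSV tutorial), the preprocessing model can be called eagerly on a one-row dictionary of NumPy arrays to inspect the concatenated output:

features_dict = {name: np.array(value)[:1] for name, value in titanic_features.items()}
titanic_preprocessing(features_dict)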

Then comes the part where I struggled:

def titanic_model(units, activation):
    
    inputs = create_model_inputs()
    
    concatenated_inputs = titanic_preprocessing(inputs)
    
    dense_1 = layers.Dense(units=units, activation=activation)(concatenated_inputs)
    dense_output = layers.Dense(1)(dense_1)
    
    body = tf.keras.Model(inputs = inputs, outputs =  dense_output)

    return body

Here I create the model by combining the titanic_preprocessing part with the structure of the model itself.
In this case it is pretty simple, but nothing stops you from making it more complex.
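As a quick check with arbitrary values (my own addition), the combined model can be built and summarized before any tuning:

body = titanic_model(units=32, activation="relu")
body.summary()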

Given that the whole exercise was focused on applying keras_tuner, we need a way to keep the model code separate and still perform the hyperparameter search.

To do that we need to override Tuner.run_trial(), so that the code of titanic_model stays separate from the hyperparameterization part.

This is also shown in the KerasTuner documentation.

Before doing that, we need a function that trains the model but returns only a final score that the tuner can optimize.

def model_to_tune(num_epochs, activation, units,
                  batch_size, learning_rate,
                  patience
                  ):

    dict_train = dict(X_train)
    dict_test = dict(X_test)

    train_ds = tf.data.Dataset.from_tensor_slices((dict_train, y_train))
    train_dataset = train_ds.shuffle(len(y_train)).batch(batch_size)

    test_ds = tf.data.Dataset.from_tensor_slices((dict_test, y_test))
    validation_dataset = test_ds.batch(batch_size)

    # Build the network
    model = titanic_model(
        units=units,
        activation=activation
        )

    # Optimizer (use the learning_rate that was passed in)
    optimizer = tf.keras.optimizers.Adam(
        learning_rate=learning_rate
    )

    model.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
                  optimizer=optimizer,
                  metrics=['accuracy']
                  )

    # Early stopping
    early_stopping = tf.keras.callbacks.EarlyStopping(
        monitor='val_loss',
        min_delta=1e-4,
        patience=patience,
        verbose=1,
        mode='auto',
        restore_best_weights=True)

    # Train & evaluate the model
    model.fit(
        train_dataset,
        epochs=num_epochs,
        validation_data=validation_dataset,
        callbacks=[early_stopping],
        verbose=0
    )

    score = model.evaluate(validation_dataset, verbose=0)  # [loss, accuracy]

    return score[0]  # return the validation loss so Keras Tuner can minimize it
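Before wiring this into a tuner, it can be sanity-checked on its own with fixed, arbitrary hyperparameter values (my own quick test, not part of the original flow):

score = model_to_tune(num_epochs=1, activation='relu', units=16,
                      batch_size=64, learning_rate=0.003,
                      patience=2)
print(score)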

Why this? By overriding the run_trial method of MyTuner, we can return the score from model_to_tune and let the tuner optimize it. Basically, this approach can be used to tune anything.

class MyTuner(keras_tuner.RandomSearch):
    def run_trial(self, trial, **kwargs):
        hp = trial.hyperparameters
        return model_to_tune(
            num_epochs=10,
            units=10,
            activation=hp.Choice('activation', ['gelu', 'relu']),
            learning_rate=hp.Float("learning_rate", min_value=0.0035,
                                   max_value=0.009, step=0.0005),
            batch_size=hp.Choice("batch_size", [64, 128]),
            patience=10,
        )

tuner = MyTuner(
    max_trials=1, 
    seed = 42,
    overwrite=True,
    directory="tensorflow_test",
    project_name="code_testing",
)
tuner.search()
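Once the search finishes, the results can be inspected with the usual KerasTuner calls (not shown in the thread, but standard API):

tuner.results_summary()
best_hps = tuner.get_best_hyperparameters(num_trials=1)[0]
print(best_hps.values)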

That’s it!

Feel free to comment for any suggestions or to improve the answer!