Neural network has six inputs and one output: how to load images for training?

data1 = tf.data.Dataset.from_tensor_slices((image1, label1))
data2 = tf.data.Dataset.from_tensor_slices((image2, label2))
...
I want to train the network with model.fit({data1, data2, data3, data4, data5, data6}, …). How do I load data1 through data6?

I believe you should use the functional API (please refer to: The Functional API | TensorFlow Core), since you need to feed your model with many inputs. (Still, I can't understand why you want to organize your data like this: if you have n classes of images, why not just create one dataset from your images with the corresponding n labels?) While designing your model, you could define your inputs like this:

input_1 = keras.Input(shape=image1_shape, name='input_1')  # e.g. image1_shape = (180, 180, 3)
input_2 = keras.Input(shape=image2_shape, name='input_2')
...
Then you could combine the input features into a single tensor by concatenating them, and flatten the result so it can feed a Dense classifier:

features = layers.Concatenate()([input_1, input_2, ...])
features = layers.Flatten()(features)

You could define the output layer for your single-output image-classification model like this:

outputs = layers.Dense(num_labels, activation='softmax')(features)

And define the model as:

model = keras.Model(inputs=[input_1, input_2, ...], outputs=outputs)
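
To call model.fit() on such a model, pass the inputs as a list (in the same order as inputs=[...]) or as a dict keyed by the Input names; a Python set will not work because it is unordered. A minimal sketch, assuming hypothetical NumPy arrays images_1 … images_6 and one shared labels array (these names are placeholders, not from the thread):

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Dict keyed by the Input names defined above; a plain list in input order also works.
model.fit(
    {'input_1': images_1, 'input_2': images_2},  # ..., up to 'input_6': images_6
    labels,  # shared integer labels of shape (num_samples,)
    batch_size=32,
    epochs=10,
)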

I hope it helps!

Thank you very much for your reply. Your reply shows how to build a multi-input model. I have built the multi-input model, and now I need to load the data for training. I have six datasets, each with the same labels, and the six datasets are fed into the model through six inputs. When I train with model.fit(), I get the following error: Failed to find data adapter that can handle input: <class 'set'>, <class 'NoneType'>.

That error occurs because {data1, …} is a Python set, which Keras' data adapters don't accept; inputs must be passed as a list, tuple, dict, or a single tf.data.Dataset. I tried to replicate the exercise with two datasets instead of six (again, a multi-input functional model), this time letting the model predict different labels for each dataset (multi-output).

The code:

import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Input, Conv2D, Flatten, Dense, MaxPooling2D
from tensorflow.keras.models import Model

num_samples = 1000  # Example sample size

# Generate dummy input data
image_1 = np.random.randint(0, 256, size=(num_samples, 180, 180, 3)).astype('float32')
image_2 = np.random.randint(0, 256, size=(num_samples, 180, 180, 3)).astype('float32')

# Generate dummy target data
labels_1 = np.random.randint(0, 3, size=num_samples)
labels_2 = np.random.randint(0, 3, size=num_samples)

def create_branch(input_shape, input_name, output_name):
    # Name the Input explicitly so it matches the dict keys used in the datasets below
    input_layer = Input(shape=input_shape, name=input_name)
    x = Conv2D(32, (3, 3), activation='relu')(input_layer)
    x = MaxPooling2D(pool_size=(2, 2))(x)
    x = Conv2D(64, (3, 3), activation='relu')(x)
    x = Flatten()(x)
    x = Dense(64, activation='relu')(x)
    output = Dense(3, activation='softmax', name=output_name)(x)  # name the output head
    return input_layer, output

input_shape = (180, 180, 3)
input_a, output_a = create_branch(input_shape, input_name='input_1', output_name='output_1')
input_b, output_b = create_branch(input_shape, input_name='input_2', output_name='output_2')

model = Model(inputs=[input_a, input_b], outputs=[output_a, output_b])

model.compile(optimizer='adam',
              loss={'output_1': 'sparse_categorical_crossentropy',
                    'output_2': 'sparse_categorical_crossentropy'},
              metrics={'output_1': ['accuracy'], 'output_2': ['accuracy']})

# Dicts of NumPy arrays keyed by the input/output names could also be passed
# straight to model.fit(), e.g.
# model.fit({'input_1': image_1, 'input_2': image_2},
#           {'output_1': labels_1, 'output_2': labels_2}, ...)
# Here we build tf.data pipelines instead:

# Split data for demonstration (e.g., 80% train, 20% validation)
split = int(0.8 * num_samples)
train_dataset = tf.data.Dataset.from_tensor_slices(({'input_1': image_1[:split],
                                                     'input_2': image_2[:split]},
                                                    {'output_1': labels_1[:split],
                                                     'output_2': labels_2[:split]})).batch(32)
val_dataset = tf.data.Dataset.from_tensor_slices(({'input_1': image_1[split:],
                                                   'input_2': image_2[split:]},
                                                  {'output_1': labels_1[split:],
                                                   'output_2': labels_2[split:]})).batch(32)

# Train the model
history = model.fit(train_dataset, validation_data=val_dataset, epochs=10)
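
Coming back to the original question: if the six datasets already exist as separate single-input tf.data.Dataset objects, they can be combined with tf.data.Dataset.zip instead of being passed as a set. A sketch under the assumption that each dsN yields only images and the shared labels sit in their own dataset (ds1 … ds6 and labels are placeholder names, not from the thread):

# zip pairs the six image datasets element-wise into 6-tuples
inputs_ds = tf.data.Dataset.zip((ds1, ds2, ds3, ds4, ds5, ds6))
labels_ds = tf.data.Dataset.from_tensor_slices(labels)

# Elements become ((x1, ..., x6), y); Keras maps the tuple onto the six inputs in order
train_ds = tf.data.Dataset.zip((inputs_ds, labels_ds)).batch(32)
model.fit(train_ds, epochs=10)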

Thank you for providing this exercise.

I hope this helps!