Low accuracy on a classification model

Short background
Hello, I'm currently taking a TensorFlow course on Udemy. [Modified by moderator]
However, I'm not getting the same results as they get in the course. I have looked over everything and even copied straight from the course material just to see if I get the same results, but no: still awful results.
The reason I post here is that I'm not getting any answers from those in charge of the course (it seems they have abandoned Udemy for their own academy). The data is downloaded from TensorFlow:

Here is the model:

import tensorflow as tf
from keras import layers

# Create the base model
input_shape = (224, 224, 3)
base_model = tf.keras.applications.EfficientNetB0(include_top=False)
base_model.trainable = False  # freeze base model layers

# Create a Functional model
inputs = layers.Input(shape=input_shape, name="input_layer")
# Note: EfficientNetBX models have rescaling built in, but if your model
# didn't, you could add a layer like: x = layers.Rescaling(1./255)(inputs)

x = base_model(inputs, training=False)  # set base_model to inference mode only
x = layers.GlobalAveragePooling2D(name="pooling_layer")(x)
x = layers.Dense(len(class_names))(x)  # want one output neuron per class
# Separate activation of output layer so we can output float32 activations
outputs = layers.Activation("softmax", dtype=tf.float32, name="softmax_float32")(x)
model = tf.keras.Model(inputs, outputs)

# Compile the model
model.compile(loss="sparse_categorical_crossentropy",  # use sparse_categorical_crossentropy when labels are *not* one-hot
              optimizer=tf.keras.optimizers.Adam(),  # optimizer/metrics assumed; the snippet was cut off here
              metrics=["accuracy"])

Fitting the model:

# Turn off all warnings except for errors
tf.get_logger().setLevel("ERROR")

# Fit the model with callbacks
history_101_food_classes_feature_extract = model.fit(
    train_data,
    epochs=3,  # epochs/validation_data filled in to match the logged run below
    validation_data=test_data,
    validation_steps=int(0.15 * len(test_data)))

Epoch 1/3
2368/2368 [==============================] - 200s 76ms/step - loss: 4.7007 - accuracy: 0.0098 - val_loss: 4.6932 - val_accuracy: 0.0162
Epoch 2/3
2368/2368 [==============================] - 175s 73ms/step - loss: 4.6949 - accuracy: 0.0105 - val_loss: 4.6840 - val_accuracy: 0.0072
Epoch 3/3
2368/2368 [==============================] - 173s 72ms/step - loss: 4.6886 - accuracy: 0.0108 - val_loss: 4.6873 - val_accuracy: 0.0072
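For what it's worth, those numbers are exactly what random guessing over Food101's 101 classes would produce: accuracy of about 1/101 and cross-entropy loss of about ln(101). A quick check of that arithmetic:

```python
import math

num_classes = 101  # Food101
chance_accuracy = 1 / num_classes
chance_loss = math.log(num_classes)  # cross-entropy of a uniform prediction

print(f"chance accuracy: {chance_accuracy:.4f}")  # 0.0099, matches the 0.0098-0.0108 above
print(f"chance loss:     {chance_loss:.3f}")      # 4.615, close to the 4.69-4.70 above
```

So the model isn't just underperforming, it appears not to be learning at all, which usually points at something structural (e.g. the inputs reaching the network in a different range than it expects) rather than at tuning.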

“However I'm not getting the same results as they get in the course”
What are the results? Because the graphs you posted look reasonable: training loss goes down, validation loss goes down and then picks up after a few epochs. That's expected behaviour.

Sorry, I forgot to post the results from the course:

790/790 [==============================] - 11s 14ms/step - loss: 0.9993 - accuracy: 0.7279


[0.9992507100105286, 0.7279207706451416]

As you can see, their loss is roughly 1.0 and their accuracy roughly 0.73, while mine stays near 0.01.

The only thing I can think of is that some change between TensorFlow versions could affect this (I know their version is a little older), but that is just a wild guess; I have nothing else to go on.
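One way to start testing that guess is to print the installed TensorFlow and Keras versions and compare them with whatever the course notebook reports. A minimal sketch (which packages are installed in your environment is an assumption):

```python
# Print the versions in the current environment; comparing them with the
# course's environment is the first step in testing the version hypothesis.
import importlib.metadata as md

for pkg in ("tensorflow", "keras"):
    try:
        print(pkg, md.version(pkg))
    except md.PackageNotFoundError:
        print(pkg, "not installed")
```

If the versions differ, recreating the course's environment (e.g. `pip install` with the course's exact TensorFlow version pinned) would confirm or rule out a version-related change.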

It looks like they trained for 790 epochs and you only did 3.

No, they only ran 3 epochs; 790 is the total number of batches, if I understood correctly.
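That reading checks out arithmetically: `len()` of a batched `tf.data.Dataset` is the number of batches, and with Food101's split sizes and a batch size of 32 (the batch size is an assumption), the batch counts match both progress bars:

```python
import math

# Food101 split sizes (from the TFDS catalog) and the batch size
# presumably used in the course (the batch size is an assumption)
train_images, test_images, batch_size = 75_750, 25_250, 32

print(math.ceil(train_images / batch_size))  # 2368 -> the "2368/2368" per training epoch
print(math.ceil(test_images / batch_size))   # 790  -> the "790/790" in their evaluation
```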