How to improve accuracy of a NN with multi-label text classification

I am new to neural networks and Tensorflow, and I am trying to train a model to classify strings. I am using this subset of reviews from Yelp. Each review can be ‘Useful,’ ‘Funny,’ or ‘Cool.’

I followed the preprocessing steps described in this TensorFlow documentation:

  • Removed punctuation
  • Removed stop-words
  • Transformed words into tokens
  • Turned each review into sequences of tokens
  • Padded the sequences so they all have the same length
  • Transformed the categories into one-hot vectors

Here is one example after the transformation:

before:

review: “difference red light clothing exchange goodwil…”
classification: funny

after:

review: [0, 1179, 335, 381, 1877, 2369, 5299, 1888, 67, 5245, 11, 1716, 6346, 4547, 1877, 161, 821, 1688, 4625, 4625, 14853, 13775, 2285, 295, 127, 2285, 123, 295, 98, 12, 5741, 2791, 5031, 4772, 1307, 2285, 1130, 368, 15, 1589, 7274, 90, 309, 385, 279, 1524, 7275, 12161, 179, 52, 73, 230, 39, 5741, 2380, 105, 1019, 551, 335, 381]
classification: [1, 0, 0]
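
For reference, here is a minimal sketch of that kind of preprocessing pipeline using Keras utilities; the variable names (`reviews`, `labels`) and parameter values are assumptions, not taken from my notebook:

    import tensorflow as tf
    from tensorflow.keras.preprocessing.text import Tokenizer
    from tensorflow.keras.preprocessing.sequence import pad_sequences

    # hypothetical inputs: cleaned review strings and integer category ids
    # (0 = useful, 1 = funny, 2 = cool)
    reviews = ["difference red light clothing exchange goodwill ...", "..."]
    labels = [1, 0]

    # tokenize the words and turn each review into a sequence of integer tokens
    tokenizer = Tokenizer(oov_token="<OOV>")
    tokenizer.fit_on_texts(reviews)
    sequences = tokenizer.texts_to_sequences(reviews)

    # pad the sequences so they all have the same length (avgSize = 60 in my case)
    padded = pad_sequences(sequences, maxlen=60, padding="pre")

    # transform the categories into one-hot vectors
    one_hot_labels = tf.keras.utils.to_categorical(labels, num_classes=3)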

I tried different combinations of activations, dropout rates, numbers of neurons, L2 rates, and optimizers, but the best accuracy I got with 10,000 observations and 50 epochs was 63.89%.

I don’t know what else I could try. Does anyone here have any ideas?

Here is the Jupyter notebook.

Here are the model settings:

    import tensorflow as tf

    # create the model
    # expected execution time: 1m
    numberOfCategories = len(categoriesDict)
    vocabularySize = len(word_index)
    
    model = tf.keras.Sequential(
        [   
            # avgSize => the length of each padded review: 60
            # vocabularySize => the number of different words identified: 418676
            # output_dim => the dimensionality of the embedding vectors (set to 3 here)
            tf.keras.layers.Embedding(input_dim = vocabularySize, output_dim = 3, input_length = avgSize),
            tf.keras.layers.Dropout(0.5),
            tf.keras.layers.GlobalAveragePooling1D(), 
            tf.keras.layers.Dropout(0.5),
            tf.keras.layers.Dense(30, activation='relu', kernel_regularizer=tf.keras.regularizers.l2(l2=0.01)), 
            tf.keras.layers.Dropout(0.5), 
            tf.keras.layers.Dense(30, activation='relu', kernel_regularizer=tf.keras.regularizers.l2(l2=0.01)), 
            tf.keras.layers.Dropout(0.5), 
            tf.keras.layers.Dense(3, activation='softmax')
            
        ]
    )
    model.compile(
        loss = 'categorical_crossentropy',
        optimizer = 'Adam',
        metrics = ['accuracy']
    )
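
For completeness, the training call looks roughly like this; the variable names for the padded reviews and one-hot labels are placeholders:

    # training call matching the 10,000 observations and 50 epochs mentioned above
    history = model.fit(
        paddedReviews,       # shape: (10000, 60)
        oneHotLabels,        # shape: (10000, 3)
        epochs = 50,
        validation_split = 0.2
    )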

Hi @AndrewFerreira, try tuning the hyperparameters: increase or decrease the learning rate and the batch size, and consider adding regularization layers to your model. Thank you.
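
For instance, a minimal sketch of setting an explicit learning rate and batch size (the specific values are only placeholders to tune from):

    import tensorflow as tf

    # pass an explicit optimizer instead of the string 'Adam' to control the learning rate
    model.compile(
        loss = 'categorical_crossentropy',
        optimizer = tf.keras.optimizers.Adam(learning_rate = 1e-3),
        metrics = ['accuracy']
    )

    # and choose the batch size explicitly when fitting
    # model.fit(paddedReviews, oneHotLabels, epochs = 50, batch_size = 32, validation_split = 0.2)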