Simple test on GPU, max load

How can I create a high load on the GPU with a simple test? I tried the code below, but it doesn't work: the load stays minimal.

    // Build a small fully connected model.
    const model = tf.sequential();
    model.add(tf.layers.dense({ inputShape: [100], units: 100, activation: 'elu' }));
    model.compile({ optimizer: 'sgd', loss: 'meanSquaredError' });

    // Generate 10 random samples of 100 features each
    // (1000 values split into rows of 100).
    const arrX = [];
    const arrY = [];

    let tempX = [];
    let tempY = [];
    let count = 1;

    for (let i = 0; i < 1000; i++) {
        tempX.push(Math.random());
        tempY.push(Math.random());

        if (count === 100) {
            arrX.push(tempX);
            arrY.push(tempY);
            tempX = [];
            tempY = [];
            count = 1;
        } else {
            count++;
        }
    }

    // Test tensors: shape [10, 100].
    const xs = tf.tensor2d(arrX);
    const ys = tf.tensor2d(arrY);

    // Train for many epochs, logging every 1000th epoch.
    let counte = 0;
    const config = {
        shuffle: false,
        verbose: false,
        epochs: 100000,
        callbacks: {
            onEpochEnd: async (epoch, logs) => {
                if (counte >= 1000) {
                    console.log(logs);
                    counte = 0;
                }
                counte++;
            },
            onTrainEnd: () => {
                console.log('DONE');
            }
        }
    };
    await model.fit(xs, ys, config);

Can anyone explain why the load stays so low?

The GPU load is only 1–2%, hmm…

Hi @gotostereo, could you please try the code below to increase the GPU memory usage:

    import numpy as np
    import tensorflow as tf
    from tensorflow import keras

    # A small CNN -- convolutions are heavy enough to put real load on the GPU.
    model = keras.Sequential([
        keras.layers.Conv2D(128, (3, 3), activation='relu', input_shape=(64, 64, 3)),
        keras.layers.MaxPooling2D(2, 2),
        keras.layers.Conv2D(128, (3, 3), activation='relu'),
        keras.layers.MaxPooling2D(2, 2),
        keras.layers.Flatten(),
        keras.layers.Dense(128, activation='relu'),
        keras.layers.Dense(10, activation='softmax')
    ])

    # Random images and integer labels, just to generate load.
    data = np.random.rand(10000, 64, 64, 3)
    target = np.random.randint(10, size=(10000,))

    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])

    model.fit(data, target, epochs=10, batch_size=512)

Running the above code in Colab, GPU memory usage climbs from 0 to 8.9 GB.


If you want to push memory usage even higher, increase the training data size and the batch_size in model.fit(), as in the sketch below. Thank you.
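
For example, here is a minimal sketch of that scaling, using the same CNN as above. The 128×128 resolution, 20000 samples, and batch_size=1024 are illustrative values I picked, not tuned recommendations; larger inputs and larger batches simply give the GPU bigger kernels to execute per step.

    import numpy as np
    from tensorflow import keras

    # Same architecture as above, but fed larger inputs in larger batches.
    # Input size, sample count, and batch_size below are illustrative only.
    model = keras.Sequential([
        keras.layers.Conv2D(128, (3, 3), activation='relu', input_shape=(128, 128, 3)),
        keras.layers.MaxPooling2D(2, 2),
        keras.layers.Conv2D(128, (3, 3), activation='relu'),
        keras.layers.MaxPooling2D(2, 2),
        keras.layers.Flatten(),
        keras.layers.Dense(128, activation='relu'),
        keras.layers.Dense(10, activation='softmax')
    ])

    # More and larger random samples -> more GPU memory and compute per epoch.
    data = np.random.rand(20000, 128, 128, 3).astype('float32')
    target = np.random.randint(10, size=(20000,))

    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])

    model.fit(data, target, epochs=10, batch_size=1024)

If the GPU runs out of memory, reduce batch_size or the sample count until it fits.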
