Data augmentation on a 5-dimensional dataset

There is a train_ds (dataset) with 5 dimensions, as below: (60, 10, 224, 224, 3)

How can I apply data augmentation to this dataset?
Thank you

Applying data augmentation to a 5-dimensional dataset, especially one with the shape you mentioned, (60, 10, 224, 224, 3), suggests that you might be dealing with videos or sequences of images, where:

•	60 could be the number of samples (e.g., videos or image sequences),
•	10 might represent the sequence length or the number of frames in each video,
•	224x224 is the resolution of each frame, and
•	3 stands for the RGB color channels.

To apply data augmentation on such data, you would typically focus on spatial (image-based) and possibly temporal (sequence-based) augmentations. TensorFlow’s tf.image API provides tools for spatial augmentations, but you might need to implement custom logic for temporal augmentations or use specialized libraries that support video data.

Spatial Augmentation

For spatial augmentation (affecting each frame's pixels), you could apply transformations such as rotation, zoom, shift, flip, and brightness adjustment. You can use Keras's ImageDataGenerator (deprecated in recent Keras versions in favor of preprocessing layers) or tf.image for such transformations, applying them frame by frame.

Example using tf.image for flipping and brightness adjustment:

import tensorflow as tf

def augment_frame(frame):
    # Randomly flip the image horizontally
    frame = tf.image.random_flip_left_right(frame)
    # Randomly adjust brightness
    frame = tf.image.random_brightness(frame, max_delta=0.1)
    return frame

def augment_video(video):
    # Apply the augmentation to each frame of the video
    return tf.map_fn(augment_frame, video)

# Assuming train_ds is your dataset,
# apply the augmentation to each video in the dataset:
augmented_ds = train_ds.map(lambda x: augment_video(x))
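One caveat with the tf.map_fn approach: each frame draws its own random numbers, so frames within a single clip may be flipped or brightened inconsistently. A sketch of drawing one random decision per clip and applying it to every frame (the function name and the 0.1 brightness range are illustrative, not from any library):

```python
import tensorflow as tf

def augment_video_consistent(video):
    # video: [frames, height, width, channels], values assumed in [0, 1]
    # Draw ONE coin flip for the whole clip so every frame is flipped the same way
    flip = tf.random.uniform([]) < 0.5
    video = tf.cond(flip,
                    lambda: tf.reverse(video, axis=[2]),  # left-right flip of all frames
                    lambda: video)
    # One brightness shift shared by all frames (illustrative range)
    delta = tf.random.uniform([], minval=-0.1, maxval=0.1)
    return tf.clip_by_value(video + delta, 0.0, 1.0)
```

Whether per-frame or per-clip randomness is appropriate depends on the task; for action recognition, consistent per-clip transforms are usually the safer choice.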

Temporal Augmentation

For temporal augmentation, you would modify the sequence of frames in a way that preserves the essence of the video. Common techniques include:

•	Frame skipping: Randomly skipping frames in the sequence.
•	Sequence slicing: Taking a continuous subset of frames from the original sequence.

Implementing these might require custom functions, as you’ll be manipulating the sequence dimension of your dataset.
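Frame skipping, for example, can be sketched as a random boolean mask over the frame axis. The helper below and its keep_prob/min_frames parameters are illustrative, not a library API:

```python
import tensorflow as tf

def skip_frames(video, keep_prob=0.8, min_frames=4):
    # video: [frames, height, width, channels]
    num_frames = tf.shape(video)[0]
    # Keep each frame independently with probability keep_prob
    mask = tf.random.uniform([num_frames]) < keep_prob
    # Fall back to keeping every frame if too few would survive
    mask = tf.cond(tf.reduce_sum(tf.cast(mask, tf.int32)) < min_frames,
                   lambda: tf.ones([num_frames], dtype=tf.bool),
                   lambda: mask)
    return tf.boolean_mask(video, mask)
```

Note that the output has a variable number of frames; if your model expects a fixed sequence length, you would need to pad or resample afterwards.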

Example of sequence slicing:

def slice_sequence(video, start_frame, num_frames):
    # Ensure the slice does not go out of bounds
    end_frame = tf.minimum(start_frame + num_frames, tf.shape(video)[0])
    return video[start_frame:end_frame]

# Apply sequence slicing to each video in the dataset.
# Here, 10 is an example number of frames for each sliced sequence;
# tf.maximum(..., 1) keeps the random range valid when a video has exactly 10 frames.
augmented_ds = train_ds.map(
    lambda x: slice_sequence(
        x,
        tf.random.uniform(shape=[], minval=0,
                          maxval=tf.maximum(tf.shape(x)[0] - 10, 1),
                          dtype=tf.int32),
        10))

Batch Processing

If train_ds is a TensorFlow dataset, ensure that the augmentation functions are compatible with batched tensors. You might need to adjust the functions to handle batches or apply the augmentation before batching.
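A minimal end-to-end sketch of the map-before-batch ordering, using a tiny random stand-in for train_ds (shapes scaled down from (60, 10, 224, 224, 3); the brightness_jitter helper is an illustrative toy augmentation):

```python
import numpy as np
import tensorflow as tf

def brightness_jitter(video):
    # Toy per-clip augmentation: one shared brightness shift (illustrative)
    delta = tf.random.uniform([], minval=-0.1, maxval=0.1)
    return tf.clip_by_value(video + delta, 0.0, 1.0)

# Tiny stand-in for the real (60, 10, 224, 224, 3) dataset
videos = np.random.rand(6, 10, 8, 8, 3).astype("float32")
train_ds = tf.data.Dataset.from_tensor_slices(videos)

augmented_ds = (train_ds
                .map(brightness_jitter, num_parallel_calls=tf.data.AUTOTUNE)  # augment per clip
                .batch(2)                                                     # then batch
                .prefetch(tf.data.AUTOTUNE))

for batch in augmented_ds.take(1):
    print(batch.shape)  # (2, 10, 8, 8, 3)
```

Mapping before batching keeps the augmentation functions simple (they see one clip at a time) and lets tf.data parallelize them across samples.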

Custom Augmentation Libraries

For more sophisticated video data augmentation, consider using libraries that support video augmentations directly, such as albumentations for spatial augmentations or custom implementations for temporal aspects.


When applying augmentations, especially temporal ones, ensure that the transformations are meaningful for your task and do not distort the data in a way that changes its underlying information. This matters particularly for tasks sensitive to temporal dynamics, such as action recognition in videos.