Creating a dataset to pan a sub-image through an image

Hello,

I'm seeking to run testing/training on significant portions of large images, and I want to scroll/pan these portions around systematically within the larger image. I have the image in memory as a NumPy array of pixels.

Below I show simplified code that maps out three approaches to creating a dataset that might be used with, for instance, Keras model.predict().

None of these approaches works well. Is there an elegant, or at least efficient, solution? Or a good way to parallelize the effort, perhaps to scroll/pan through a pipeline of large images?
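To make that concrete, below is the kind of lazy, parallel pipeline I was hoping might be possible (an untested sketch that assumes the same imports and toy image as the code further down; take_window is just an illustrative name, and I don't know whether it actually avoids the copying overhead):

image_t = tf.constant(image)   # one copy of the full image on the TensorFlow side

def take_window(i):
    # slice out a 1000x1000 window starting at row i, inside the pipeline
    return image_t[i:i + 1000, :]

ds = tf.data.Dataset.range(1000).map(take_window, num_parallel_calls=tf.data.AUTOTUNE)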

"3D array method" (what I'm doing now):

1. I make a 3D NumPy array of shape (number of sub-images, sub-image width, sub-image height).
2. I redundantly copy the data from each sub-image into the 3D array.
3. I call tf.data.Dataset.from_tensor_slices() on the 3D array to make my pipeline.

This works, but it uses huge amounts of memory, takes too much time, and just seems brute force. In the run below, assembling the data array takes about 1.3 seconds and converting it to a dataset takes about 5.2 seconds.
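To put a number on the memory use: the 3D array in the code below holds 1000 copied windows of 1000x1000 float64 pixels, which is roughly 8 GB by itself (and I assume the dataset conversion adds its own copy on top):

print(1000 * 1000 * 1000 * np.dtype(np.float64).itemsize / 1e9)   # 8.0 GB just for the copied windows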

Numpy list method:

NumPy is great at non-copy 'extraction' of sub-arrays: slicing returns a view rather than a copy. So in this method, instead of a 3D array, I use a list of NumPy arrays, each of which merely references the appropriate portion of the large image. Indeed, the time to assemble the list of inputs drops to a negligible 0.002 seconds. However, building the dataset rises to 85 seconds! Something is taking too long.
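(A quick sanity check, using the image from the code below, that such slices really are views and not copies:)

window = image[0:1000, :]                 # plain slice of the big image
print(np.shares_memory(window, image))    # True -- the slice references the same buffer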

Copied numpy list method:

One thing that occurred to me is that perhaps TensorFlow has trouble untangling the 'magic' of NumPy views. Therefore, I tried making each list entry its own copy of the data. As expected, the list assembly time rebounded to about 1.6 seconds, but the dataset build time was roughly the same at 86 seconds. Are lists of large arrays just problematic for tf.data.Dataset.from_tensor_slices?
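One diagnostic I haven't actually run yet (sketch only, reusing subimagelist from the code below) would be to time the plain list-to-tensor conversion on its own, to see whether that step by itself accounts for the ~85 seconds, independent of tf.data:

t0 = time.perf_counter()
big = tf.convert_to_tensor(subimagelist)   # list of 1000 arrays -> one (1000, 1000, 1000) tensor
t1 = time.perf_counter()
print("convert_to_tensor time =", t1 - t0)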

Code and run-output are below.

Code:


import numpy as np
import tensorflow as tf
import time

image = np.zeros((2000,1000))

print("3D array method: \n")

subimagelist = np.zeros((1000, 1000, 1000))   # one 1000x1000 slot per window

print("assembling list")

t1 = time.perf_counter()
for i in range(1000):
    # copy each 1000x1000 window of the image into its own slice of the 3D array
    subimagelist[i, :, :] = image[i:i+1000, :]

print("building dataset")
t2 = time.perf_counter()

ds = tf.data.Dataset.from_tensor_slices(subimagelist)

t3 = time.perf_counter()

print("Assembly time = ", t2-t1, " Build time = ", t3-t2)

print("\nNumpy list method:\n")

subimagelist = []

print("assembling list")

t1 = time.perf_counter()
for i in range(1000):
    # each list entry is just a view into the big image -- no pixel data is copied
    subimagelist.append(image[i:i+1000, :])

print("building dataset")
t2 = time.perf_counter()

ds = tf.data.Dataset.from_tensor_slices(subimagelist)

t3 = time.perf_counter()

print("Assembly time = ", t2-t1, " Build time = ", t3-t2)

print("\nCopied nNumpy list method:\n")

subimagelist = []

print("assembling list")

t1 = time.perf_counter()
for i in range(1000):
    # same as above, but force an explicit copy of each window
    subimagelist.append(image[i:i+1000, :].copy())

print("building dataset")
t2 = time.perf_counter()

ds = tf.data.Dataset.from_tensor_slices(subimagelist)

t3 = time.perf_counter()

print("Assembly time = ", t2-t1, " Build time = ", t3-t2)

Output:


3D array method:

assembling list
building dataset
Assembly time = 1.262837625999964 Build time = 5.160149674000422

Numpy list method:

assembling list
building dataset
Assembly time = 0.0017657110001891851 Build time = 85.18646240599992

Copied numpy list method:

assembling list
building dataset
Assembly time = 1.6021064809974632 Build time = 86.4543162220034