VAE with Tensorflow probability library. Problems understanding example

Hello,

I am trying to replicate the approach to create a VAE as described here.

However, I don’t understand the following piece of code or how to translate it from images (as in the example) to the data I have, which is a big CSV file with lots of columns and rows.

def _preprocess(sample):
  image = tf.cast(sample['image'], tf.float32) / 255.  # Scale to unit interval.
  image = image < tf.random.uniform(tf.shape(image))   # Randomly binarize.
  return image, image

train_dataset = (datasets['train']
                 .map(_preprocess)
                 .batch(256)
                 .prefetch(tf.data.AUTOTUNE)
                 .shuffle(int(10e3)))
eval_dataset = (datasets['test']
                .map(_preprocess)
                .batch(256)
                .prefetch(tf.data.AUTOTUNE))

I don’t understand what exactly the _preprocess function is doing, or why it is required for the model to work. This prevents me from adapting this solution to my CSV dataset.

I hope this is the correct category to ask for help.

Hi @Exitare and welcome :wave:

This looks like an image preprocessing helper function.

This casts the tensor to dtype tf.float32 and scales the pixel values from the original [0, 255] range down to the unit interval [0, 1]. The next line then randomly binarizes each pixel by comparing it against uniform noise, and the function returns the image twice, as an (input, target) pair, which is the format Model.fit expects.

Note: “The pixel intensities of MNIST images are almost binary.” (e.g. 255 is white, 0 is black).
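For a CSV dataset you typically won't need the binarization step, since that is specific to MNIST's nearly binary pixels. A minimal sketch of an equivalent preprocessing function for tabular rows (assuming your dataset yields plain feature tensors; the right scaling depends on your columns):

import tensorflow as tf

def _preprocess(features):
  # Cast to float32; apply whatever scaling suits your columns.
  features = tf.cast(features, tf.float32)
  # Return an (input, target) pair: for an autoencoder, the target is the input.
  return features, features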

Check out the 3rd example from the Intro to Autoencoders tutorial - it uses electrocardiogram data from a CSV file. Let us know if it helps.


This helpful post by @Lance_N :

https://tensorflow-prod.ospodiscourse.com/t/performing-data-wrangling-on-tf-data-dataset/6865/3?u=8bitmp3

refers to the tutorial on loading CSV data, as well as the API docs:

https://www.tensorflow.org/api_docs/python/tf/data/experimental/CsvDataset

https://www.tensorflow.org/api_docs/python/tf/data/experimental/make_csv_dataset
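For instance, a minimal sketch using make_csv_dataset (the file name, batch size, and options here are placeholders):

import tensorflow as tf

# Hypothetical example: stream rows of a CSV file as a tf.data.Dataset.
# Each element is a batch given as a dict mapping column names to tensors.
dataset = tf.data.experimental.make_csv_dataset(
    "your_data.csv",  # placeholder path
    batch_size=128,
    num_epochs=1,     # yield one pass over the file per epoch
    shuffle=True)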


Thank you so much for your response and help! I will check out the information you provided!

Again, thanks!

I always get the error:

TypeError: Target data is missing. Your model has loss: <function TPFVAE.train_model.. at 0x19a69d430>, and therefore expects target data to be passed in fit().

while trying to train the VAE.

As I am trying to use unsupervised learning with a VAE, there shouldn't be any targets to rely on. Apparently the _preprocess function I referenced first takes care of that for the MNIST dataset.

But my data isn't image based. It's just a plain CSV, simply put: lots of columns and rows of numbers. There are no target values, and the VAE should be responsible for distilling the dataset down to its most important information.

My code setup is as follows right now:

self.train_dataset = tf.data.Dataset.from_tensor_slices(self.data.X_train).batch(128)
self.val_dataset = tf.data.Dataset.from_tensor_slices(self.data.X_val).batch(128)
self.test_dataset = tf.data.Dataset.from_tensor_slices(self.data.X_test).batch(128)

This is the model setup, following the aforementioned tutorial.

input_dimensions = self.data.inputs_dim
latent_space_dimensions = 5
activation = tf.keras.layers.ReLU()

# Prior assumption is currently a standard normal (Gaussian) distribution
prior = tfd.Independent(tfd.Normal(loc=tf.zeros(latent_space_dimensions), scale=1),
                        reinterpreted_batch_ndims=1)

# Build the encoder
encoder_inputs = keras.Input(shape=(input_dimensions,))
h1 = layers.Dense(input_dimensions, activation=activation)(encoder_inputs)
h2 = layers.Dense(input_dimensions // 2, activation=activation)(h1)
h3 = layers.Dense(input_dimensions // 3, activation=activation)(h2)
h4 = layers.Dense(tfpl.MultivariateNormalTriL.params_size(latent_space_dimensions),
                  activation=None)(h3)
h5 = tfpl.MultivariateNormalTriL(
    latent_space_dimensions,
    activity_regularizer=tfpl.KLDivergenceRegularizer(prior, weight=1.0))(h4)

self.encoder = keras.Model(encoder_inputs, h5, name="encoder")

# Build the decoder
decoder_inputs = keras.Input(shape=(latent_space_dimensions,))
h1 = layers.Dense(input_dimensions // 3, activation=activation)(decoder_inputs)
h2 = layers.Dense(input_dimensions // 2, activation=activation)(h1)

decoder_outputs = layers.Dense(input_dimensions)(h2)
self.decoder = keras.Model(decoder_inputs, decoder_outputs, name="decoder")

print(self.encoder.summary())
print(self.decoder.summary())
self.vae = keras.Model(inputs=self.encoder.inputs,
                       outputs=self.decoder(self.encoder.outputs[0]))

Compiling and fitting the model looks like this:

negloglik = lambda x, rv_x: -rv_x.log_prob(x)

self.vae.compile(optimizer=tf.optimizers.Adam(learning_rate=1e-3),
                 loss=negloglik)

# The datasets are already batched, so no batch_size is passed to fit().
_ = self.vae.fit(self.train_dataset,
                 epochs=15,
                 validation_data=self.val_dataset,
                 verbose=1)

However, as already mentioned, as soon as the model tries to learn, it errors out complaining about missing target data. I don't know how to create that, or how to fix this issue. I don't have labels, and the implementation of a VAE without the probability library doesn't require labels either.

Since you’re working with a tf.data.Dataset, perhaps something like this could work:

your_vae_model.fit(your_train_dataset.map(lambda x: (x, x)),...)
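Applied to the datasets you defined above, that would look like (a sketch reusing the names from your snippet):

# Map each batch x to an (input, target) pair (x, x) so that
# Model.fit gets the targets it expects for reconstruction.
_ = self.vae.fit(self.train_dataset.map(lambda x: (x, x)),
                 epochs=15,
                 validation_data=self.val_dataset.map(lambda x: (x, x)),
                 verbose=1)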

cc @markdaoust


Indeed that does solve it. I am going to look at this solution to understand it further.

Thank you.

However, I guess my model is not properly set up.

New error now:

lambda x, rv_x: -rv_x.log_prob(x)

AttributeError: 'Tensor' object has no attribute 'log_prob'

This is the loss function as defined here:

negloglik = lambda x, rv_x: -rv_x.log_prob(x)

I am sorry for asking so many questions. It's pretty hard to find examples of VAEs that don't use the MNIST dataset, and there is even less information on VAEs built with TensorFlow's probability library.

Searching this issue, I found this.

Good advice @8bitmp3.

That lambda x: (x, x) thing is common for autoencoders in tensorflow/keras. Model.fit expects (input, target) pairs. When you’re writing an auto-encoder, the target is the input. For many styles of autoencoder the target is a slightly modified version of the input.
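For instance (a hypothetical denoising variant, where train_dataset stands in for any unlabeled dataset of feature tensors):

import tensorflow as tf

# Denoising style: the input is a corrupted copy, the target is the
# original clean example.
noisy_dataset = train_dataset.map(
    lambda x: (x + tf.random.normal(tf.shape(x), stddev=0.1), x))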

negloglik = lambda x, rv_x: -rv_x.log_prob(x)
AttributeError: 'Tensor' object has no attribute 'log_prob'

Right, so in Keras a loss function takes the label and the model output as arguments and returns the loss value.

In the tutorial, the final layer of the decoder is a tfpl.IndependentBernoulli(input_shape, tfd.Bernoulli.logits), which returns a tfp probability distribution when called.

tfp probability distributions have a .log_prob(x) method.
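For instance:

import tensorflow_probability as tfp
tfd = tfp.distributions

dist = tfd.Normal(loc=0., scale=1.)
dist.log_prob(0.)  # ==> roughly -0.9189, the log of the standard normal pdf at 0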

In your code, the last layer of the decoder is a layers.Dense(input_dimensions) that just returns a Tensor, which doesn’t have a .log_prob method.
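One way to fix it (a sketch, not from the tutorial: assuming your CSV features are continuous, a tfpl.IndependentNormal output layer is a reasonable stand-in for the tutorial's IndependentBernoulli, reusing the names from your decoder):

# Produce the distribution parameters with a Dense layer, then wrap
# them in a distribution layer so the model output has .log_prob().
decoder_params = layers.Dense(
    tfpl.IndependentNormal.params_size(input_dimensions),
    activation=None)(h2)
decoder_outputs = tfpl.IndependentNormal(input_dimensions)(decoder_params)
self.decoder = keras.Model(decoder_inputs, decoder_outputs, name="decoder")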


Thanks for the nice explanation! :slight_smile: