Premade Estimator in TensorFlow 2 (how to build a function?)

I am following this tutorial:

and replicating it in Google Colab with TensorFlow 2 (see an editable copy of my code below; the DNN starts in cell 12, and I excluded another model and other things that are not needed, so you can just run everything):

(All 3 datasets should be uploaded automatically from my GitHub:
GitHub - timmy-ops/DNNs_for_CLVs: Datasets and Notebooks for the project.)

I got the model to work, but it performed quite badly. While trying to improve it by building in the input pipeline from the tutorial:

def parse_csv(csv_row):
  """Parses one CSV data row.

  map() takes a function as input, so we either call parse_csv via
  map(lambda x: parse_csv(x)) or pass the function itself, as we do here.
  Builds a (feature dictionary, label) pair for each example.

  Args:
    csv_row: one example as a CSV row
  Returns:
    features and target
  """
  columns = tf.decode_csv(csv_row, record_defaults=clvf.get_all_defaults())
  features = dict(zip(clvf.get_all_names(), columns))

  # Remove the columns that we don't use
  for column_name in clvf.get_unused():
    features.pop(column_name)

  target = features.pop(clvf.get_target_name())

  return features, target

def dataset_input_fn(data_folder, prefix=None, mode=None, params=None, count=None):
  """Creates a dataset reading examples from filenames.

  Args:
    data_folder: Location of the files, ending with a '/'
    prefix: Start of the file names
    mode: tf.estimator.ModeKeys(TRAIN, EVAL)
    params: hyperparameters
    count: how often to repeat the dataset (None repeats indefinitely)
  Returns:
    features and target
  """
  shuffle = True if mode == tf.estimator.ModeKeys.TRAIN else False

  # Read CSV files into a Dataset
  filenames = tf.matching_files('{}{}*.csv'.format(data_folder, prefix))
  dataset = tf.data.TextLineDataset(filenames)

  # Parse the record into tensors.
  dataset = dataset.map(parse_csv)

  # Shuffle the dataset
  if shuffle:
    dataset = dataset.shuffle(buffer_size=params.buffer_size)

  # Repeat the input indefinitely if count is None
  dataset = dataset.repeat(count=count)

  # Generate batches
  dataset = dataset.batch(params.batch_size)

  # Create a one-shot iterator
  iterator = dataset.make_one_shot_iterator()

  # Get batch X and y
  features, target = iterator.get_next()

  return features, target

def read_train(data_folder, params):
  """Returns a shuffled dataset for training."""
  return dataset_input_fn(data_folder=data_folder,
                          prefix='train',
                          params=params,
                          mode=tf.estimator.ModeKeys.TRAIN)

def read_eval(data_folder, params):
  """Returns a dataset for evaluation."""
  return dataset_input_fn(data_folder=data_folder,
                          prefix='eval',
                          params=params,
                          mode=tf.estimator.ModeKeys.EVAL,
                          count=1)

def read_test(data_folder, params):
  """Returns a dataset for test."""
  return dataset_input_fn(data_folder=data_folder,
                          prefix='test',
                          params=params,
                          mode=tf.estimator.ModeKeys.EVAL,
                          count=1)

the following error occurs:

RuntimeError                              Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/ in convert_to_tensor(value, dtype, name, as_ref, preferred_dtype, dtype_hint, ctx, accepted_result_types)
   1523       graph = get_default_graph()
   1524       if not graph.building_function:
-> 1525         raise RuntimeError("Attempting to capture an EagerTensor without "
   1526                            "building a function.")
   1527       return graph.capture(value, name=name)

RuntimeError: Attempting to capture an EagerTensor without building a function.
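From the traceback, my understanding (which may be wrong) is that this error is raised whenever a tensor created in eager mode gets captured while a plain graph, not a tf.function, is being built. I can reproduce what looks like the same error with this minimal sketch (a toy example, not my actual pipeline):

```python
import tensorflow as tf

# A tensor created in eager mode (the TF 2 default)
eager_tensor = tf.constant([1.0, 2.0])

def reproduce_error():
  """Returns the message raised when an eager tensor is captured
  during TF1-style graph building, or None if no error occurs."""
  graph = tf.Graph()
  with graph.as_default():  # graph mode, but NOT inside a tf.function
    try:
      tf.identity(eager_tensor)  # tries to capture the eager tensor
    except RuntimeError as e:
      return str(e)
  return None

print(reproduce_error())
```

If that is the mechanism, then something in my input_fn is presumably mixing an eager tensor into the graph the Estimator builds.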

I already tried enabling eager execution and switching to TF1, but neither worked. What am I doing wrong?

Do I have to do anything with @tf.function? How would that work?
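In case it helps to show what I mean: my impression from the TF 2 migration notes is that an Estimator input_fn can simply return the tf.data.Dataset itself instead of calling make_one_shot_iterator(), and the Estimator then iterates over it inside the graph it builds. A minimal sketch of that pattern, with an in-memory dataset standing in for my CSV files (all names here are placeholders, not my real code):

```python
import tensorflow as tf

def toy_input_fn():
  """Stand-in for dataset_input_fn: builds the pipeline and returns
  the Dataset itself instead of iterator.get_next() tensors."""
  features = {'x': tf.constant([[1.0], [2.0], [3.0], [4.0]])}
  target = tf.constant([2.0, 4.0, 6.0, 8.0])
  dataset = tf.data.Dataset.from_tensor_slices((features, target))
  dataset = dataset.shuffle(buffer_size=4)
  dataset = dataset.batch(2)
  return dataset  # no make_one_shot_iterator() in TF 2

# Iterating works eagerly; an Estimator would call this input_fn
# itself and consume the returned Dataset:
for batch_features, batch_target in toy_input_fn():
  print(batch_features['x'].shape, batch_target.shape)
```

Is that the direction I should take with dataset_input_fn, or is @tf.function still needed somewhere?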