Help with the TensorFlow Time Series tutorial

Hello all!

I am following the Time Series tutorial from the official TensorFlow documentation (https://www.tensorflow.org/tutorials/structured_data/time_series) and I am running into the error below (it also appears in other code snippets later in the tutorial). It happens when I apply my own input data, a time series with 1440 points, and change the code to forecast 300 points into the future (OUT_STEPS = 300, label_width=OUT_STEPS, shift=OUT_STEPS, etc.), also adjusting the input width to 300.
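
For reference, this is roughly how I configured the multi-step window. The WindowGenerator class and its arguments are the tutorial's; treat this as a sketch of my changes, not the exact notebook code:

OUT_STEPS = 300
multi_window = WindowGenerator(input_width=300,
                               label_width=OUT_STEPS,
                               shift=OUT_STEPS)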

Here is one of the snippets that triggers the error, followed by the output from its execution:

Code executed:

history = compile_and_fit(lstm_model, wide_window)

IPython.display.clear_output()
val_performance['LSTM'] = lstm_model.evaluate(wide_window.val)
performance['LSTM'] = lstm_model.evaluate(wide_window.test, verbose=0)

Output from execution:

/usr/local/lib/python3.7/dist-packages/keras/utils/generic_utils.py:915: RuntimeWarning: divide by zero encountered in log10
  numdigits = int(np.log10(self.target)) + 1
---------------------------------------------------------------------------
OverflowError                             Traceback (most recent call last)
<ipython-input-63-8a2e627c43f4> in <module>()
      2 
      3 IPython.display.clear_output()
----> 4 val_performance['LSTM'] = lstm_model.evaluate(wide_window.val)
      5 performance['LSTM'] = lstm_model.evaluate(wide_window.test, verbose=0)

4 frames
/usr/local/lib/python3.7/dist-packages/keras/utils/generic_utils.py in update(self, current, values, finalize)
    913 
    914       if self.target is not None:
--> 915         numdigits = int(np.log10(self.target)) + 1
    916         bar = ('%' + str(numdigits) + 'd/%d [') % (current, self.target)
    917         prog = float(current) / self.target

OverflowError: cannot convert float infinity to integer

I concluded that there is some dependency between the number of input data points and the number of forecast points, but I could not work out what it is. If I set the number of forecast points to 300 in the example from the TensorFlow website, with 70091 input points (considering df = df[5::6]), this error does not occur; but if I select only 1440 points, the same error appears, just as it does with my own 1440-point data. If you want, you can check/edit the example code from the TensorFlow website, in which I set the input to 1440 points and made the changes needed to predict 300 points, in this Google Colab.
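
In case it is relevant, here is a quick check of what I suspect that dependency is. I am assuming the tutorial's 70/20/10 train/val/test split and that each split needs at least input_width + shift rows to produce a single window; both assumptions are mine, not something the tutorial states:

n = 1440                              # points in my series (70091 for the tutorial data)
train_len = int(n * 0.7)              # tutorial's 70/20/10 split
val_len   = int(n * 0.9) - int(n * 0.7)
test_len  = n - int(n * 0.9)

OUT_STEPS = 300
total_window_size = 300 + OUT_STEPS   # input_width + shift

for name, rows in [('train', train_len), ('val', val_len), ('test', test_len)]:
    status = 'ok' if rows >= total_window_size else 'too short: no windows, empty dataset'
    print(name, rows, 'rows ->', status)

With n = 1440 this prints 288 rows for val and 144 for test, both shorter than the 600-row window, so evaluate would see an empty dataset (zero batches). That would make the progress bar's target 0, and np.log10(0) is -inf, which seems consistent with the divide-by-zero warning and the OverflowError above. With n = 70091 all three splits are long enough, which might explain why the error does not occur there.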

Could you help me with this please?

Thanks in advance.

I also posted this question on Stack Overflow: https://stackoverflow.com/questions/69059787/error-from-code-adapted-from-tutorial-about-time-series-forecasting-from-tensorf

Thanks.

I have the same problem and would appreciate any help.