A new notebook: Structured Data Classification using TensorFlow Decision Forests

Hi everyone,

As part of Kaggle’s "Tabular Playground Series - Sep 2021", I created a notebook using TensorFlow Decision Forests.
If you’re interested, please check it out and let me know if you have any ideas on how to improve it.

Fadi Badine


Another super cool example! Thank you @Fadi_Badine!


Very good Fadi!!! Thanks for putting together this nice sample!


Hi Fadi,

Nice colab :slight_smile:

Since you asked about it, here are a couple of nits that might be interesting to try:

Installing TF-DF (i.e. pip install tensorflow_decision_forests) prints a lot of output. You can mask some of it as follows:
!pip install tensorflow_decision_forests -U -q

By default, Colab runs on two small CPUs (try running !cat /proc/cpuinfo). However, by default, TF-DF trains with 6 threads (see the “num_threads” constructor argument). It would be interesting to compare the training speed with only 2 threads.
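As a sketch, the thread count is just a constructor argument; the model class below is an illustrative choice and could be any TF-DF model:

```python
import tensorflow_decision_forests as tfdf

# Match the number of training threads to Colab's 2 CPUs.
model = tfdf.keras.GradientBoostedTreesModel(num_threads=2)
```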

The two approaches differ in two ways: Different l1_regularization values and the replacement of missing values by the mean in the second approach. Apart from this, both approaches are equivalent and are expected to give similar results within training noise (which might already be the case 0.81345 ~= 0.81343).

You can compute the confidence bounds or a t-test to be fancy :).

For long trainings, it might be interesting to print the training logs (while training). This can be done as follows:

!pip install wurlitzer -U -q
from wurlitzer import sys_pipes
with sys_pipes():
    model.fit(train_ds)  # the C++ training logs now stream into the notebook

Note that at some point, this will be done automatically depending on the verbose parameter.

Thanks for sharing the colab. Since you had hands-on practice with the library, do you mind me asking about your experience? For example, did you face any hard-to-debug errors, or was any of the library’s behavior surprising?



Thanks Mathieu!
I will apply the changes that you proposed.
However, regarding point 2 … I did not quite understand. Do you mean I should set num_threads = 2?

As for my experience with the library, it was smooth and easy. I did not face any unusual behaviour. The tutorials and online documentation helped me a lot.
The only thing I faced and was unable to solve was when using Keras Tuner to search for the best hyperparameters: the kernel kept crashing (locally, on Colab and on Kaggle). It did not at the beginning but started happening later, so I was unable to figure it out since nothing had changed. But I think this is more related to Keras Tuner.

@Mathieu, I do not know how to calculate the confidence bounds or the t-test … could you give a hint or an example, please?

I should set num_threads = 2?

Yes. And make sure this is faster (otherwise revert back) :slight_smile:


Good point. I’ll see what I can do. Maybe TF-DF could catch some of these issues (perhaps caused by an incompatible configuration) and give informative error messages…

A nice example was created by a user.

confidence bounds or t-test

Confidence intervals (CI) on the accuracy can be computed with the Wilson score interval, and CI on the AUC can be computed with the Hanley et al. method. Alternatively, a CI can be computed on any metric using bootstrapping (i.e. you sample your predictions with replacement and estimate the CI empirically; I am a big fan of this :slight_smile: ).

Note that those CI will not contain the training noise (unless you use some form of repeated training / cross-validation).
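To make the two CI recipes above concrete, here is a small self-contained sketch in plain Python (no TF-DF needed). The counts are invented for illustration; the 0.8134 accuracy just echoes the scores mentioned earlier in the thread:

```python
import math
import random

def wilson_interval(correct, total, z=1.96):
    """95% Wilson score interval for a proportion (e.g. accuracy)."""
    p = correct / total
    denom = 1 + z**2 / total
    center = (p + z**2 / (2 * total)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2))
    return center - margin, center + margin

def bootstrap_interval(outcomes, num_samples=2000, alpha=0.05, seed=0):
    """Empirical CI on accuracy: resample per-example correctness with replacement."""
    rng = random.Random(seed)
    n = len(outcomes)
    stats = sorted(sum(rng.choices(outcomes, k=n)) / n for _ in range(num_samples))
    return (stats[int(alpha / 2 * num_samples)],
            stats[int((1 - alpha / 2) * num_samples) - 1])

# Hypothetical evaluation: 8134 correct predictions out of 10000 (~0.8134 accuracy).
low, high = wilson_interval(8134, 10000)
outcomes = [1] * 8134 + [0] * (10000 - 8134)
b_low, b_high = bootstrap_interval(outcomes)
```

With 10000 test examples, both intervals come out roughly ±0.008 around the point estimate, so two models whose accuracies differ by 0.00002 (0.81345 vs 0.81343) are well within each other's CI.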

The McNemar test is probably suited for accuracy on paired data (which should be the case here, i.e. you evaluate both candidate models on the same test dataset).
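A minimal exact McNemar test only needs the two discordant counts (examples where exactly one of the two models is correct); the counts below are hypothetical:

```python
from math import comb

def mcnemar_exact(b, c):
    """Exact two-sided McNemar test.

    b: examples model A got right and model B got wrong.
    c: examples model B got right and model A got wrong.
    Returns the p-value under the null hypothesis that both models
    have the same error rate (discordant pairs split 50/50).
    """
    n = b + c
    k = min(b, c)
    # Two-sided binomial tail probability with p = 0.5,
    # i.e. 2 * P(X <= min(b, c)) for X ~ Binomial(n, 0.5).
    p = sum(comb(n, i) for i in range(k + 1)) / 2 ** (n - 1)
    return min(1.0, p)

# Hypothetical counts: model A wins on 30 examples, model B on 12.
p_value = mcnemar_exact(30, 12)
```

A small p-value (e.g. below 0.05) suggests the two models genuinely differ; for near-identical scores like 0.81345 vs 0.81343 you would expect a large p-value.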


Thanks @Mathieu for your reply and sorry for my belated response.
Yes, I have seen the example. In fact, Ekaterina created this based on a question that I started :slight_smile: link

I will try to set the number of threads to 2 and will add the McNemar test.


Fadi Badine