ML Reproducibility Challenge

Hey all,

Just wondering if anyone has any interest in joining up for the reproducibility challenge this year?

The primary goal of this event is to encourage the publishing and sharing of scientific results that are reliable and reproducible. In support of this, the challenge invites members of the community at large to select a paper accepted at a top conference and verify its empirical results and claims by reproducing the computational experiments, either via a new implementation or using code, data, or other information provided by the authors.

The submission deadline is July 15th, 2021, so we don’t have a lot of time, but with some effort we can pull it off. It could be fun to tackle a paper that has shown promise and would be a useful addition to TensorFlow.

Comment below if you think you’ll have a bit of time to spare, and mention which paper you think could be worth reproducing.


This is a nice idea!
Unfortunately, I can’t participate at the moment :frowning:

I’d go even further and publish the model on TFHub when ready!