First, here’s a demo video of the original software generating MIDI drum patterns from the input of another MIDI or audio file: https://youtu.be/eYUaYzfZUCo
To explain what is happening: a Google research team had several professional drummers play on electronic drum kits that capture their performances as MIDI files. They then used TensorFlow to train a neural network on over 14 hours of these professionally played drum MIDI recordings, which is what produces the remarkable results in the video above. Having used it myself, I’ve found that if you repeatedly give it the same input, it returns nearly the same output each time, with only slight variations, just as I would expect from a real drummer improvising.
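For anyone curious why identical inputs produce slightly different outputs, here is a minimal sketch, not Magenta’s actual code, of the general idea behind models of this kind: a variational model encodes the input to a distribution and samples from that distribution before decoding, so repeated runs land near, but not exactly on, the same result. All function names and numbers below are illustrative assumptions, not the real Drumify internals.

```python
import numpy as np

rng = np.random.default_rng()

def encode(pattern):
    # Toy "encoder": maps an input drum pattern to the mean and
    # log-variance of a latent distribution. In a real model these
    # would be learned neural-network layers.
    mean = pattern * 0.5
    log_var = np.full_like(pattern, -4.0)  # small, fixed variance
    return mean, log_var

def decode(z):
    # Toy "decoder": maps a latent vector back to an output pattern.
    return z * 2.0

def drumify_like(pattern):
    mean, log_var = encode(pattern)
    # Sample around the mean. This sampling step is why identical
    # inputs give slightly different outputs on each run.
    z = mean + np.exp(0.5 * log_var) * rng.standard_normal(mean.shape)
    return decode(z)

pattern = np.array([1.0, 0.0, 0.5, 0.0])
a = drumify_like(pattern)
b = drumify_like(pattern)
# a and b are both close to the input-determined pattern,
# but not identical to each other.
```

The variation is controlled entirely by the sampled noise, which is why the outputs feel like small humanized deviations rather than random drumming.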
My overall goal is to create a fork of this software in which Drumify uses a model trained on my own dataset of MIDI drum files. So far, the furthest I’ve gotten is a MatMul error, after following instructions that one of the repo contributors gave me. Getting even that far was not without other issues: I was unable to rebuild natively on Windows and had to do it through the Windows Subsystem for Linux. This seems like an old, apparently abandoned project, but I figured I would try this avenue for more assistance, since I see a lot of potential in it once artists can control the plugins’ output by training the models on data of their choice.
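To clarify what a MatMul error typically indicates: a matrix multiplication was asked to combine shapes whose inner dimensions don’t match, which is the usual symptom when a model expects one input size and the data (or checkpoint) provides another. Here is a minimal sketch of the failure mode using NumPy; the shapes are made-up examples, not the actual tensors from my setup.

```python
import numpy as np

# A layer expecting 9 input features, producing 32 outputs.
weights = np.zeros((9, 32))

# Matching input: (4, 9) @ (9, 32) -> (4, 32). This works.
batch_ok = np.zeros((4, 9))
out = batch_ok @ weights
print(out.shape)  # (4, 32)

# Mismatched input width: inner dimensions are 27 vs 9.
batch_bad = np.zeros((4, 27))
try:
    batch_bad @ weights
except ValueError as e:
    # This is the NumPy equivalent of TensorFlow's MatMul
    # shape-mismatch error.
    print("MatMul-style failure:", e)
```

In my case this suggests the retraining pipeline is feeding the network data shaped differently from what the original model architecture expects, though the exact tensors involved are what I’m hoping someone can help pin down.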
I’m here to build a tool that is useful to other musicians, streamlining their creative workflow without stifling their style of expression. Anyone who helps will have their contribution forever made known to the world.
Here is the link to the issue I’ve created on the repo for more detailed information: https://github.com/magenta/magenta-studio/issues/54
Thank you for taking the time to read!