Minimal implementation of NeRF

Our new Keras example has just been published.

Link: 3D volumetric rendering with NeRF

In this joint venture with Ritwik Raha, we present a minimal implementation of the research paper NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis by Ben Mildenhall et al. The authors propose an ingenious way to synthesize novel views of a scene by modelling the volumetric scene function with a neural network. We try to make concepts like 3D volumetric rendering and ray tracing as visual as possible.
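The volumetric rendering mentioned above boils down to a numerical quadrature along each camera ray: the network's predicted densities and colors at sampled points are alpha-composited into one pixel color. A minimal NumPy sketch of that compositing step (the function name and the toy inputs are our own, not taken from the example):

```python
import numpy as np

def composite_ray(sigmas, colors, deltas):
    """Numerically integrate color along one camera ray.

    sigmas: (N,) volume densities at N samples along the ray
    colors: (N, 3) RGB values predicted at those samples
    deltas: (N,) distances between consecutive samples
    """
    # Opacity contributed by each segment: 1 - exp(-sigma * delta)
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # Transmittance: probability the ray reaches each sample unblocked
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = alphas * trans                        # (N,)
    return (weights[:, None] * colors).sum(axis=0)  # (3,) final pixel color

# Toy ray: 8 samples with uniform spacing, random densities and colors
rng = np.random.default_rng(0)
pixel = composite_ray(rng.uniform(0, 2, 8),
                      rng.uniform(0, 1, (8, 3)),
                      np.full(8, 0.1))
```

Because the weights sum to at most 1, the result is a convex-like blend of the sampled colors, with dense regions near the camera dominating the pixel.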

We would like to thank @Sayak_Paul for his thorough review of the first draft. We also want to acknowledge @fchollet for his guidance throughout.

The GIFs in the example are made with manim. We like how the animations turned out.


This looks very interesting.

Sorry for my ignorance, but after you train the model, can you apply it to new data easily?


Thanks for your interest, @lgusm!

To answer your question: the model trained here is specific to the one scene we want to synthesize. You can think of the model as encoding that scene in its weights; it cannot help you generate a completely new scene.
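To add a bit of context on why the model is scene-specific: the network learns a mapping from each (encoded) spatial coordinate of that one scene to a color and density, so its weights effectively memorize the scene. The coordinates are first lifted into sin/cos features at increasing frequencies, as described in the paper. A NumPy sketch of that positional encoding (the `num_freqs` parameter name is our own):

```python
import numpy as np

def positional_encoding(p, num_freqs=10):
    """Map each coordinate to sin/cos features at increasing frequencies:
    (sin(2^0 * pi * p), cos(2^0 * pi * p), ..., sin(2^(L-1) * pi * p), ...)."""
    p = np.asarray(p, dtype=np.float64)          # (..., D) raw coordinates
    freqs = 2.0 ** np.arange(num_freqs) * np.pi  # (L,) frequency bands
    angles = p[..., None] * freqs                # (..., D, L)
    # All sines, then all cosines, flattened into one feature vector per point
    feats = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return feats.reshape(*p.shape[:-1], -1)      # (..., D * 2 * L)

# A single 3-D point becomes a 3 * 2 * 10 = 60-dimensional feature vector
feat = positional_encoding(np.array([[0.1, 0.2, 0.3]]))
```

Since the mapping is tied to coordinates of a single scene, synthesizing a new scene means training a new model from scratch on images of that scene.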
