Implementation of the paper "D-NeRF: Neural Radiance Fields for Dynamic Scenes"
Side-by-side rendered frames: NeRF (v18) vs. T-NeRF (v2).
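The T-NeRF column refers to the paper's straightforward baseline: the canonical NeRF MLP with the encoded time value fed in alongside the encoded position, so density depends on (x, t) and color on (x, t, d). Below is a minimal PyTorch sketch of such a time-conditioned MLP, sized with the feature widths from the parameter table later in this README (63 / 9 / 27 encoded features, 8 layers of 256 units); the skip-connection index and head layout are assumptions for illustration, not necessarily this repository's exact architecture.

```python
import torch
import torch.nn as nn

class TNeRF(nn.Module):
    """Time-conditioned NeRF MLP: density from (x, t), color from (x, t, d)."""
    def __init__(self, pos_feat=63, dir_feat=27, time_feat=9, width=256, depth=8, skip=4):
        super().__init__()
        self.skip = skip
        in_feat = pos_feat + time_feat                      # encoded position + encoded time
        layers = []
        for i in range(depth):
            dim_in = in_feat if i == 0 else width
            if i == skip:
                dim_in += in_feat                           # skip connection re-injects the input
            layers.append(nn.Linear(dim_in, width))
        self.pts_layers = nn.ModuleList(layers)
        self.sigma = nn.Linear(width, 1)                    # volume density head
        self.feat = nn.Linear(width, width)                 # feature passed to the color head
        self.rgb = nn.Sequential(nn.Linear(width + dir_feat, width // 2),
                                 nn.ReLU(),
                                 nn.Linear(width // 2, 3))  # view-dependent color head

    def forward(self, x_enc, t_enc, d_enc):
        h = inp = torch.cat([x_enc, t_enc], dim=-1)
        for i, layer in enumerate(self.pts_layers):
            if i == self.skip:
                h = torch.cat([h, inp], dim=-1)
            h = torch.relu(layer(h))
        sigma = torch.relu(self.sigma(h))
        rgb = torch.sigmoid(self.rgb(torch.cat([self.feat(h), d_enc], dim=-1)))
        return rgb, sigma

# Example query on a batch of 1024 encoded samples:
model = TNeRF()
rgb, sigma = model(torch.zeros(1024, 63), torch.zeros(1024, 9), torch.zeros(1024, 27))
print(rgb.shape, sigma.shape)  # torch.Size([1024, 3]) torch.Size([1024, 1])
```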
Renders varying the camera pose only, the time input only, and both camera pose and time (Camera | Time | Camera + Time).
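These renders come from querying the trained field at different (camera pose, time) pairs and volume-rendering each pixel along its ray. Below is a minimal PyTorch sketch of that rendering step, using the near/far planes and coarse sample count from the parameter table; `query_fn` is a stand-in for the trained network, and the code is illustrative rather than this repository's implementation (no stratified jitter or fine-network pass).

```python
import torch

def render_rays(query_fn, rays_o, rays_d, t, near=2.0, far=6.0, n_samples=64):
    """Evenly sample each ray between the near and far planes, query the field,
    and alpha-composite the samples into a pixel color (the NeRF rendering rule)."""
    z = torch.linspace(near, far, n_samples)                               # (S,) depths
    pts = rays_o[:, None, :] + rays_d[:, None, :] * z[None, :, None]       # (R, S, 3)
    dirs = rays_d[:, None, :].expand_as(pts)                               # (R, S, 3)
    times = torch.full(pts.shape[:-1] + (1,), float(t))                    # (R, S, 1)
    rgb, sigma = query_fn(pts, times, dirs)                                # (R,S,3), (R,S,1)

    dists = torch.cat([z[1:] - z[:-1], torch.tensor([1e10])])              # (S,)
    alpha = 1.0 - torch.exp(-sigma.squeeze(-1) * dists)                    # (R, S)
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], dim=-1),
        dim=-1)[:, :-1]                                                    # transmittance
    weights = alpha * trans                                                # (R, S)
    return (weights[..., None] * rgb).sum(dim=1)                           # (R, 3)

# Dummy field standing in for the trained model, just to show the call signature.
# A "Time" render fixes the camera rays and sweeps t; a "Camera" render fixes t
# and sweeps the pose used to generate the rays; "Camera + Time" varies both.
dummy_field = lambda p, t, d: (torch.sigmoid(p), torch.relu(p[..., :1]))
pixels = render_rays(dummy_field, torch.zeros(1024, 3), torch.randn(1024, 3), t=0.5)
print(pixels.shape)  # torch.Size([1024, 3])
```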
Parameters | Values |
---|---|
Iteration | 200K |
Scheduler | Exponential Decay |
Scheduler Step | 160K approx. |
Rays Sample | 1024 |
Crop | 0.5 |
Pre Crop Iter | 50 |
Factor | 2 |
Near Plane | 2.0 |
Far Plane | 6.0 |
Height | 800 / factor |
Width | 800 / factor |
Downscale | 2 |
lr | 5e-4 |
lrsch_gamma | 0.1 |
Pos Enc Dim | 10 |
Dir Enc Dim | 4 |
Num Samples | 64 |
Num Samples Fine | 128 |
Net Dim | 256 |
Net Depth | 8 |
Inp Feat | 2*(num_channels*pos_enc_dim) + num_channels |
Dir Feat | 2*(num_channels*dir_enc_dim) + num_channels |
Time Feat | 2*(1*dir_enc_dim) + 1 |
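The last three rows follow from the standard NeRF frequency encoding applied to the 3D position, the 3D view direction, and the scalar time input. The short sketch below reproduces those widths and the exponential learning-rate decay implied by lr, lrsch_gamma, and the ~160K scheduler step; the 3-channel inputs, the feature ordering, and the continuous `gamma ** (step / decay_steps)` decay are my assumptions, not necessarily the exact scheduler used here.

```python
import torch

def positional_encoding(x, num_freqs):
    """Map each channel to [x, sin(2^k x), cos(2^k x)] for k = 0..num_freqs-1,
    giving 2*(channels*num_freqs) + channels output features (the table's formulas)."""
    freqs = 2.0 ** torch.arange(num_freqs, dtype=torch.float32)       # 1, 2, ..., 2^(L-1)
    scaled = x[..., None] * freqs                                     # (..., C, L)
    enc = torch.cat([torch.sin(scaled), torch.cos(scaled)], dim=-1)   # (..., C, 2L)
    return torch.cat([x, enc.flatten(start_dim=-2)], dim=-1)          # (..., C + 2CL)

# Widths implied by the table (3 channels for position/direction, 1 for time):
print(positional_encoding(torch.zeros(1, 3), 10).shape[-1])  # 2*(3*10) + 3 = 63 (Inp Feat)
print(positional_encoding(torch.zeros(1, 3), 4).shape[-1])   # 2*(3*4)  + 3 = 27 (Dir Feat)
print(positional_encoding(torch.zeros(1, 1), 4).shape[-1])   # 2*(1*4)  + 1 = 9  (Time Feat)

def lr_at(step, base_lr=5e-4, gamma=0.1, decay_steps=160_000):
    """Exponential decay: lr = base_lr * gamma ** (step / decay_steps)."""
    return base_lr * gamma ** (step / decay_steps)

print(lr_at(0), lr_at(160_000), lr_at(200_000))  # 5e-4, 5e-5, ~2.8e-5
```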