self.num_points suddenly decreased to 0 when training on my own dataset. #12
Comments
Can you please try using `splatfacto-w-light`?
I have tried splatfacto-w-light, but it needs a transforms.json. I used colmap2nerf.py from NeRF to generate one. It runs without any error, but the result is a mess; I suspect the coordinate system is not correct. Could you please provide a script so that I can generate a correct transforms.json from COLMAP?
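For generating a transforms.json from COLMAP, nerfstudio's own `ns-process-data` command is one option; a minimal sketch, assuming nerfstudio is installed, with placeholder paths (flag names may differ between nerfstudio versions):

```bash
# Runs COLMAP on the raw images and writes transforms.json in nerfstudio's
# camera convention (placeholder paths).
ns-process-data images --data <images-dir> --output-dir <processed-dir>

# If a COLMAP reconstruction already exists, reuse it instead of re-running SfM
# (these flags are assumed from recent nerfstudio versions and may differ in yours).
ns-process-data images --data <images-dir> --output-dir <processed-dir> \
    --skip-colmap --colmap-model-path <path-to-sparse-model>
```

The resulting transforms.json follows nerfstudio's camera convention, which is what the nerfstudio dataparser (and therefore splatfacto-w-light) expects.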
Please try splatfacto-w-light.
Please consider enabling these options.
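As a general pattern (the concrete option names come from the splatfacto-w config, and `<some-option>` below is only a placeholder), nerfstudio passes per-model settings through the `--pipeline.model.*` namespace of `ns-train`:

```bash
# General form only: <some-option> is a placeholder, not a real splatfacto-w flag;
# the actual option names live in the splatfacto-w model config.
ns-train splatfacto-w-light --data <processed-dir> \
    --pipeline.model.<some-option> <value>
```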
I used the following command and the result is still a mess.
Hi! Did you manage to find a solution? I am training splatfacto-w-light on a custom dataset (with a transforms.json) as well, and I get similarly messy outputs when rendering images. Even when I train it on the nerfstudio poster dataset, the results look similar.
Hi! Could you please try turning off these losses? They are mostly designed for outdoor scenes. Also, splatfacto-w relies heavily on the SfM initialization, so maybe try making sure the initialization comes from the SfM points.
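Since splatfacto-w seeds its Gaussians from the SfM points, it can also help to confirm that the COLMAP reconstruction itself is healthy before training; one quick check, assuming the `colmap` CLI is installed and the sparse model sits under a `sparse/0` folder (placeholder path):

```bash
# Prints the number of registered images, 3D points, mean track length, and
# mean reprojection error for the reconstruction (placeholder path).
colmap model_analyzer --path <dataset>/colmap/sparse/0
```

A very small point count or a very high reprojection error would point to a weak initialization.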
Hi, unfortunately this does not fix it. Is the SfM init the most important aspect? When I train splatfacto-w-light with random init on Brandenburg, the scene seems to converge, which suggests it is not that important.
Hi, author!

Thanks for your fantastic work. I have encountered a problem when training on my own datasets. I created a TSV file for my own dataset, but the training process was terminated because self.num_points and num_splits suddenly decreased to zero (yes, I monitored these parameters during the whole process). All datasets crashed at about 3000 iterations.
Looking forward to your answer.
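One thing that may be worth checking (a sketch, not a confirmed fix): upstream nerfstudio's splatfacto culls Gaussians whose opacity drops below `cull_alpha_thresh` at every refinement step, so if the opacities collapse, the whole point set can disappear within a few thousand iterations. Assuming splatfacto-w exposes the same field, the threshold can be relaxed from the CLI:

```bash
# Assumes splatfacto-w inherits splatfacto's cull_alpha_thresh setting;
# the exact option name in this repository may differ.
ns-train splatfacto-w --data <processed-dir> \
    --pipeline.model.cull-alpha-thresh 0.005
```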