Steps to run inference on real case #13
Hi Shivansh, this work requires per-object optimization (training) to reconstruct the object in the NeRF; after that you can render novel views or extract geometry from the field. If you'd like to reconstruct a new object with this method, prepare multi-view images of the object in its two states following our data format, then train the model from scratch. Hope it helps.
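As a rough illustration of what "prepare multi-view images following the data format" might look like, here is a minimal sketch that bundles per-view camera poses into a NeRF-style `transforms.json`. The key names (`camera_angle_x`, `frames`, `file_path`, `transform_matrix`) follow the common NeRF-synthetic convention and are an assumption, not this repo's confirmed schema; check the project's data README for the exact format.

```python
import json

# Hypothetical sketch of a NeRF-style "transforms.json" for ONE object state.
# Key names follow the common NeRF-synthetic convention; the repo's actual
# schema may differ.
def make_transforms(image_names, poses, camera_angle_x=0.6911):
    """Bundle per-view camera-to-world poses (4x4 nested lists) with image paths."""
    return {
        "camera_angle_x": camera_angle_x,  # shared horizontal FoV in radians
        "frames": [
            {"file_path": f"./images/{name}", "transform_matrix": pose}
            for name, pose in zip(image_names, poses)
        ],
    }

# One placeholder view: identity rotation, camera 2 units along +z.
identity_pose = [[1.0, 0.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0, 0.0],
                 [0.0, 0.0, 1.0, 2.0],
                 [0.0, 0.0, 0.0, 1.0]]
meta = make_transforms(["state0_view000"], [identity_pose])
print(json.dumps(meta, indent=2))
```

You would write one such file per object state (e.g. drawer open and drawer closed), each listing that state's views.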
Should the multi-view images share the same camera extrinsics? For example, consider a drawer in two states: open and closed. Suppose we have one image for each state. Is it necessary for the images of different states to be captured from the same camera viewpoint, or can they be taken from different positions?
It doesn't require all the views across the two states to be exactly the same. But it does require that the objects in the two states are aligned in the world coordinate system, which can be achieved by having one shared view.
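The shared-view requirement can be checked mechanically: if at least one camera-to-world pose appears in both capture sessions, the two states are pinned to the same world frame. The sketch below is illustrative only (pure-Python 4x4 matrices, hypothetical helper names), not code from this repository.

```python
# Sketch: confirm the two capture sessions share at least one camera pose,
# which anchors both object states in the same world coordinate system.
def poses_match(a, b, tol=1e-6):
    """Element-wise comparison of two 4x4 camera-to-world matrices."""
    return all(abs(x - y) <= tol
               for row_a, row_b in zip(a, b)
               for x, y in zip(row_a, row_b))

def has_shared_view(state0_poses, state1_poses):
    """True if any pose in state 0 coincides with a pose in state 1."""
    return any(poses_match(p, q) for p in state0_poses for q in state1_poses)

front = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 2], [0, 0, 0, 1]]
side  = [[0, -1, 0, 0], [1, 0, 0, 0], [0, 0, 1, 2], [0, 0, 0, 1]]
print(has_shared_view([front, side], [front]))  # True: the front view is shared
```

All other views in each state are free to come from arbitrary positions, as the comment above notes.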
Got it. Thanks!
Hi @SevenLJY
What are the steps to run inference on a test object? The provided data has many components (images, train/val/test splits, camera poses, etc.), but it is not clear which of them are necessary if we just want to do inference.
Thanks!