Generate my own data "batch_size exceeded!" #16

Closed
Howardyangyixuan opened this issue Aug 19, 2021 · 4 comments

@Howardyangyixuan

Hi Chen, thank you for sharing the code.
I am trying to train the network with my own data, but when I use 2_gather_256vox_16_32_64.py to generate training data from my 256^3 voxels, I get warnings like "64-- batch_size exceeded!" and "32-- batch_size exceeded!", and 2_test_hdf5.py generates something like this:
[attached image: 3_p64_3]
It seems something is wrong, but I haven't found the cause yet. Could you please give me some hints, or have you run into this before? I'm eager to know how to fix it.

Great thanks!

@czq142857
Owner

czq142857 commented Aug 19, 2021

Hi,

This happens when the maximum number of points has been sampled for a shape but those points still cannot cover the entire shape. For example, you may need 20000 points to cover the entire shape, but the upper bound set in the code is 16384, so you get a "batch_size exceeded" warning.

You can safely ignore the warnings if only a small portion of the shapes produce them.

If you want to change the upper bounds, find the following lines of code in each script and change batch_size_2 and batch_size_3 to larger numbers (batch_size_1 is for the 16^3 voxels, and 16*16*16 is already the maximum possible amount):

batch_size_1 = 16*16*16      # 16^3 samples; already the maximum possible amount
batch_size_2 = 16*16*16      # 32^3 samples; raise if "32-- batch_size exceeded!" appears often
batch_size_3 = 16*16*16*4    # 64^3 samples; raise if "64-- batch_size exceeded!" appears often
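
For context, here is a minimal illustrative sketch of how such a fixed cap produces the warning. This is not the repository's actual sampling code; the function name, variable names, and the sphere example below are hypothetical. Boundary voxels are collected into a fixed-size buffer, and once the buffer is full the script prints the warning and stops adding points.

import numpy as np

batch_size_3 = 16*16*16*4   # upper bound on sampled points for the 64^3 grid

def sample_boundary_points(voxels, batch_size):
    # Hypothetical helper: collect occupied voxels that touch an empty
    # 6-neighbor, stopping (with a warning) once batch_size is reached.
    points = np.zeros((batch_size, 3), dtype=np.uint8)
    count = 0
    dim = voxels.shape[0]
    for i in range(dim):
        for j in range(dim):
            for k in range(dim):
                if voxels[i, j, k] == 0:
                    continue
                neighbors = [
                    voxels[max(i-1, 0), j, k], voxels[min(i+1, dim-1), j, k],
                    voxels[i, max(j-1, 0), k], voxels[i, min(j+1, dim-1), k],
                    voxels[i, j, max(k-1, 0)], voxels[i, j, min(k+1, dim-1)],
                ]
                if min(neighbors) == 0:           # boundary voxel
                    if count >= batch_size:
                        print("64-- batch_size exceeded!")   # shape needs more points than the cap
                        return points, count
                    points[count] = (i, j, k)
                    count += 1
    return points, count

# Toy example: a solid sphere whose surface needs more points than a small cap.
grid = np.zeros((64, 64, 64), dtype=np.uint8)
x, y, z = np.meshgrid(np.arange(64), np.arange(64), np.arange(64), indexing="ij")
grid[(x - 32)**2 + (y - 32)**2 + (z - 32)**2 <= 28**2] = 1
pts, n = sample_boundary_points(grid, 2048)   # deliberately small cap -> warning

If most shapes hit the cap, raising batch_size_2 / batch_size_3 and regenerating the HDF5 files is the fix; the likely trade-off is larger data files, since more point samples are stored per shape.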

@Howardyangyixuan
Author

Thank you very much for your quick and detailed reply! I got it at once.
So, for the best performance of IM-Net (and also BSP-Net), should I use the maximum possible amount to prevent "batch_size exceeded", regardless of training time?
And have you ever tested how a larger batch_size slows down the training of the network?

@czq142857
Owner

I have never tested other options, so I am not sure how they would influence the training.

@Howardyangyixuan
Author

Then I'll try it myself. I really appreciate your fast response. Great thanks!
