Crash on high density collections #39
Comments
It's very hard to know what might be going on without the full command and perhaps the input file. Also, an assertion suggests you're running in debug mode. Can you provide information about the version of PDAL/wrench that you're using and how it was built, if you know?
Also, things can get pretty ugly if you don't provide bounds for the output; I'm not sure how wrench decides this. Without them, memory often has to be copied over and over again, which is slow. Having some idea of the scale of your input and output is very helpful in analyzing what might be going on.
114360 * 168446 = 19,263,484,560 cells, so about 19 billion. At 4 bytes per cell, you're up to almost 80GB. Also, if PDAL doesn't know the size of the output buffer at start time, it will have to move data around. This means allocating for both source and destination, so you could end up needing something like 160GB to do this. It seems likely you're running out of memory. It's a very large problem and you should probably break it into pieces.
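For reference, here is the same back-of-the-envelope estimate as a minimal Python sketch (the grid dimensions are the ones quoted above; the 4 bytes per cell and the doubled source-plus-destination buffer are the assumptions from that comment, not measurements of what wrench actually does):

```python
# Rough memory estimate for the raster discussed above. The grid size
# comes from the comment; 4 bytes per cell and a second (destination)
# buffer are assumptions, not a description of wrench internals.
width, height = 114360, 168446
bytes_per_cell = 4

cells = width * height
one_buffer_gb = cells * bytes_per_cell / 1e9
two_buffers_gb = 2 * one_buffer_gb  # source + destination during copies

print(f"cells: {cells:,}")                       # 19,263,484,560
print(f"one buffer: {one_buffer_gb:.1f} GB")     # ~77.1 GB
print(f"two buffers: {two_buffers_gb:.1f} GB")   # ~154.1 GB
```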
Like I mentioned above, I had 25GB of RAM still available when Wrench crashed (I run a RAM usage monitor that polls every 2 seconds). I suppose it is possible that Wrench's memory usage spiked and it crashed within that 2-second window, but I do not know enough about the internal workings of the code to know how likely that is. Is it possible to specify the size of the output buffer at start time? If so, how? Either way, I agree, breaking up the processing into batches is probably going to be my best path forward.
If you need a buffer of 50GB, a single allocation may fail if you only have 25GB. I don't know how you specify the size using wrench.
I don't think there is any way to know what's going on without the data. If you would like to provide the data I will see what I can learn.
I did some more investigation and found that the original LAZ file from our vendor for that tile worked fine. I just converted that original to COPC again (with PDAL) and that version worked too. I am guessing that the earlier PDAL conversion to COPC I did a while back must have corrupted that one tile, but in such a way that lidR did not mind, so I did not catch the corruption until now. I am going to close this issue and open a new one just for the original int16 vs int32 issue I ran into.
I am using wrench to create a density raster for a very high-density LiDAR collection using pdal_wrench density. The collection I am working with reaches densities of 200 points/m^2 in some places. (I know this because I used lidR to create a density raster previously, but we are in the process of converting parts of our workflow to use PDAL.) I wanted to create a 20-meter raster; however, I kept getting an assertion error. After some troubleshooting and looking at the point clouds in QGIS, I finally realized that GDAL/PDAL was crashing once it reached the higher-density areas of the collection. Assuming the worst case for a single 1-meter cell (a value of 200) and then multiplying by 20x20 gives 80,000 points in one output pixel. This is well above 32,767, the maximum value of a short integer, which is what GDAL is using here to store the raster values.
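As a quick sanity check, the same arithmetic as a small Python sketch (the 200 points/m^2 density and the 20-meter pixel size are the numbers from this report; treating the output band as int16 is my assumption about what GDAL is doing here):

```python
import numpy as np

# Worst-case point count in one output pixel: density times pixel area.
density_pts_per_m2 = 200
pixel_size_m = 20

worst_case_count = density_pts_per_m2 * pixel_size_m ** 2  # 80,000

int16_max = np.iinfo(np.int16).max   # 32,767
int32_max = np.iinfo(np.int32).max   # 2,147,483,647

print(worst_case_count > int16_max)  # True  -> overflows a short integer
print(worst_case_count > int32_max)  # False -> fits easily in int32
```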
For now, I worked around this by creating a higher-resolution raster (using 1-meter pixels instead of 20-meter) and will down-sample later.
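A minimal sketch of that down-sampling step, assuming the 1-meter density raster has already been read into a NumPy array (the array name, the 20x aggregation factor, and the block-sum approach are illustrative; promoting to a wider integer type before summing is what avoids the overflow described above):

```python
import numpy as np

def block_sum(counts_1m: np.ndarray, factor: int = 20) -> np.ndarray:
    """Aggregate a per-cell point-count raster by summing factor x factor blocks."""
    h, w = counts_1m.shape
    # Trim any ragged edge so the grid divides evenly into blocks.
    h_trim, w_trim = h - h % factor, w - w % factor
    trimmed = counts_1m[:h_trim, :w_trim].astype(np.int64)  # avoid int16 overflow
    return (trimmed
            .reshape(h_trim // factor, factor, w_trim // factor, factor)
            .sum(axis=(1, 3)))

# Example: a fake 1-meter count raster at the worst-case density of 200 pts/m^2.
counts_1m = np.full((100, 100), 200, dtype=np.int16)
counts_20m = block_sum(counts_1m, factor=20)
print(counts_20m.max())  # 80000 -- would not have fit in int16
```

Dividing the summed blocks by factor**2 would turn the counts back into a per-square-meter density, if that is the unit the final raster should carry.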
It would be nice if PDAL Wrench could handle creating coarse-resolution density rasters from these kinds of high-density LiDAR collections (perhaps by providing an option to use a larger raster data type), or at least report a clearer error message when a value is too large to be stored.