Hello, thank you for publishing this repository; your work is very insightful.
Regarding Table 1 in the paper (validation-set comparisons on SemanticKITTI), may I ask how you obtained the mIoU results of the other methods? I know that you evaluate on your own dataset created by create_fov_dataset.py, but do you:
1. Use the published pretrained checkpoints of those methods (e.g. SalsaNext, Cylinder3D, etc.) and evaluate them directly, even though most of them were trained on the entire point cloud (360-degree FOV)? (A rough sketch of what I mean is included below.) Or,
2. Use the published model architectures and training settings of those methods and train them yourselves on the limited-FOV dataset?
If neither of the above is correct, it would be much appreciated if you could give me some insight into how you conducted the comparison. Thanks again for your great work!
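For reference, this is roughly the kind of evaluation I have in mind for the first option: take a pretrained checkpoint's predictions on the FOV-cropped validation frames and score them with a standard confusion-matrix mIoU. The fov_mask helper, the 90-degree FOV, the ignore label of 0, and the 20-class count below are my own placeholder assumptions, not values taken from your repository or from create_fov_dataset.py.

```python
import numpy as np

def fov_mask(points, fov_deg=90.0):
    # Keep only points whose azimuth lies in a forward-facing field of view.
    # The 90-degree default is a placeholder, not the value your script uses.
    azimuth = np.degrees(np.arctan2(points[:, 1], points[:, 0]))
    return np.abs(azimuth) <= fov_deg / 2.0

def miou(pred, gt, num_classes, ignore_label=0):
    # Mean IoU over classes present in the confusion matrix, skipping the
    # unlabeled class (ignore label 0 is my assumption for SemanticKITTI).
    valid = gt != ignore_label
    pred, gt = pred[valid].astype(np.int64), gt[valid].astype(np.int64)
    conf = np.bincount(num_classes * gt + pred, minlength=num_classes ** 2)
    conf = conf.reshape(num_classes, num_classes).astype(np.float64)
    inter = np.diag(conf)
    union = conf.sum(axis=0) + conf.sum(axis=1) - inter
    present = union > 0
    return float((inter[present] / union[present]).mean())

# Example per scan: restrict both predictions and labels to the cropped FOV.
# mask = fov_mask(scan_xyz)
# score = miou(pred_labels[mask], gt_labels[mask], num_classes=20)
```

Is this roughly what was done for the baseline numbers in Table 1, or were the baselines retrained as in the second option?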